In probability and statistics, the skewed generalized t distribution is a family of continuous probability distributions. The distribution was first introduced by Panayiotis Theodossiou [1] in 1998. The distribution has since been used in different applications.[2][3][4][5][6][7] There are different parameterizations for the skewed generalized t distribution.[1][5]
The probability density function of the skewed generalized t distribution is

$$ f_{SGT}(x; \mu, \sigma, \lambda, p, q) = \frac{p}{2 v \sigma q^{\frac{1}{p}} B(\tfrac{1}{p}, q) \left( \frac{|x-\mu+m|^p}{q (v \sigma)^p (\lambda \operatorname{sgn}(x-\mu+m)+1)^p} + 1 \right)^{\frac{1}{p}+q}} $$

where B is the beta function, μ is the location parameter, σ > 0 is the scale parameter, −1 < λ < 1 is the skewness parameter, and p > 0 and q > 0 are the parameters that control the kurtosis. m and v are not parameters, but functions of the other parameters that are used here to scale or shift the distribution appropriately to match the various parameterizations of this distribution.
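As a concreteness check, here is a minimal Python sketch of this density under the parameterization above; the function name sgt_pdf, the default argument values, and the use of SciPy are illustrative choices rather than part of the original presentation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

def sgt_pdf(x, mu=0.0, sigma=1.0, lam=0.0, p=2.0, q=2.0, m=0.0, v=1.0):
    """Density of the skewed generalized t distribution (illustrative sketch).

    Follows the parameterization above; m = 0 and v = 1 correspond to the
    'simplest' parameterization discussed below.
    """
    z = np.asarray(x, dtype=float) - mu + m
    scale = q * (v * sigma) ** p * (lam * np.sign(z) + 1.0) ** p
    core = (np.abs(z) ** p / scale + 1.0) ** (1.0 / p + q)
    return p / (2.0 * v * sigma * q ** (1.0 / p) * beta(1.0 / p, q) * core)

# Sanity check: the density integrates to 1 (illustrative parameter values).
total, _ = quad(sgt_pdf, -np.inf, np.inf, args=(0.0, 1.0, 0.3, 2.0, 3.0))
print(total)  # ~1.0
```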
In the original parameterization [1] of the skewed generalized t distribution,
$$ m = \frac{2 v \sigma \lambda q^{\frac{1}{p}} B(\tfrac{2}{p}, q - \tfrac{1}{p})}{B(\tfrac{1}{p}, q)} $$

and

$$ v = \frac{q^{-\frac{1}{p}}}{\sqrt{ (3\lambda^2 + 1) \frac{B(\frac{3}{p}, q - \frac{2}{p})}{B(\frac{1}{p}, q)} - 4\lambda^2 \left( \frac{B(\frac{2}{p}, q - \frac{1}{p})}{B(\frac{1}{p}, q)} \right)^2 }}. $$

These values of m and v yield a distribution with a mean of μ if pq > 1 and a variance of σ² if pq > 2. In order for m to take on this value, however, it must be the case that pq > 1. Similarly, for v to equal the above value, pq > 2.
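A short numerical sketch, assuming the formulas above and purely illustrative parameter values, that the original parameterization indeed recovers a mean of μ and a variance of σ²:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

# Original parameterization: m and v chosen (as above) so that the mean is mu
# and the variance is sigma**2.  Parameter values are illustrative.
mu, sigma, lam, p, q = 1.0, 2.0, 0.4, 2.0, 5.0

v = q ** (-1.0 / p) / np.sqrt(
    (3 * lam ** 2 + 1) * beta(3 / p, q - 2 / p) / beta(1 / p, q)
    - 4 * lam ** 2 * (beta(2 / p, q - 1 / p) / beta(1 / p, q)) ** 2
)
m = 2 * v * sigma * lam * q ** (1.0 / p) * beta(2 / p, q - 1 / p) / beta(1 / p, q)

def pdf(x):
    z = x - mu + m
    return p / (
        2 * v * sigma * q ** (1 / p) * beta(1 / p, q)
        * (abs(z) ** p / (q * (v * sigma) ** p * (lam * np.sign(z) + 1) ** p) + 1)
        ** (1 / p + q)
    )

mean = quad(lambda x: x * pdf(x), -np.inf, np.inf)[0]
var = quad(lambda x: (x - mean) ** 2 * pdf(x), -np.inf, np.inf)[0]
print(mean, var)  # ~1.0 and ~4.0, i.e. mu and sigma**2
```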
The parameterization that yields the simplest functional form of the probability density function sets m = 0 and v = 1. This gives a mean of

$$ \mu + \frac{2 \sigma \lambda q^{\frac{1}{p}} B(\tfrac{2}{p}, q - \tfrac{1}{p})}{B(\tfrac{1}{p}, q)} $$

and a variance of

$$ \sigma^2 q^{\frac{2}{p}} \left( (3\lambda^2 + 1) \frac{B(\tfrac{3}{p}, q - \tfrac{2}{p})}{B(\tfrac{1}{p}, q)} - 4 \lambda^2 \left( \frac{B(\tfrac{2}{p}, q - \tfrac{1}{p})}{B(\tfrac{1}{p}, q)} \right)^2 \right). $$
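The same kind of numerical cross-check can be run for these two closed-form expressions; the parameter values below are arbitrary illustrations, not values from the source.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

# Simplest parameterization (m = 0, v = 1): compare the closed-form mean and
# variance above with numerical integration.  Parameter values are illustrative.
mu, sigma, lam, p, q = 0.0, 1.5, -0.3, 1.5, 4.0

def pdf(x):
    return p / (
        2 * sigma * q ** (1 / p) * beta(1 / p, q)
        * (abs(x - mu) ** p / (q * sigma ** p * (lam * np.sign(x - mu) + 1) ** p) + 1)
        ** (1 / p + q)
    )

mean_formula = mu + 2 * sigma * lam * q ** (1 / p) * beta(2 / p, q - 1 / p) / beta(1 / p, q)
var_formula = sigma ** 2 * q ** (2 / p) * (
    (3 * lam ** 2 + 1) * beta(3 / p, q - 2 / p) / beta(1 / p, q)
    - 4 * lam ** 2 * (beta(2 / p, q - 1 / p) / beta(1 / p, q)) ** 2
)

mean_num = quad(lambda x: x * pdf(x), -np.inf, np.inf)[0]
var_num = quad(lambda x: (x - mean_num) ** 2 * pdf(x), -np.inf, np.inf)[0]
print(mean_formula, mean_num)  # the two agree
print(var_formula, var_num)    # the two agree
```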
The parameter λ controls the skewness of the distribution. To see this, note that the mode of the distribution is located at x = μ − m, and

$$ \int_{-\infty}^{\mu - m} f_{SGT}(x; \mu, \sigma, \lambda, p, q) \, dx = \frac{1 - \lambda}{2}. $$

Since −1 < λ < 1, the probability left of the mode, and therefore right of the mode as well, can equal any value in (0,1) depending on the value of λ. Thus the skewed generalized t distribution can be highly skewed as well as symmetric. If −1 < λ < 0, then the distribution is negatively skewed. If 0 < λ < 1, then the distribution is positively skewed. If λ = 0, then the distribution is symmetric.
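A brief sketch of this property: integrating the density (with m = 0, so the mode sits at μ) up to the mode for a few illustrative values of λ.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

# With m = 0 the mode sits at mu, and the mass to its left is (1 - lam) / 2
# regardless of p and q.  Parameter values are illustrative.
mu, sigma, p, q = 0.0, 1.0, 2.0, 3.0

def pdf(x, lam):
    return p / (
        2 * sigma * q ** (1 / p) * beta(1 / p, q)
        * (abs(x - mu) ** p / (q * sigma ** p * (lam * np.sign(x - mu) + 1) ** p) + 1)
        ** (1 / p + q)
    )

for lam in (-0.8, -0.3, 0.0, 0.5):
    left_mass = quad(pdf, -np.inf, mu, args=(lam,))[0]
    print(lam, left_mass, (1 - lam) / 2)  # the last two columns agree
```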
Finally, p and q control the kurtosis of the distribution. As p and q get smaller, the kurtosis increases [1] (i.e. the distribution becomes more leptokurtic). Large values of p and q yield a distribution that is more platykurtic.
Let X be a random variable distributed according to the skewed generalized t distribution. The h-th moment (i.e. E[(X − μ + m)^h]), for pq > h, is:

$$ \operatorname{E}\!\left[(X-\mu+m)^h\right] = \frac{(v\sigma)^h q^{\frac{h}{p}} \left( (1+\lambda)^{h+1} + (-1)^h (1-\lambda)^{h+1} \right) B\!\left(\tfrac{h+1}{p}, q - \tfrac{h}{p}\right)}{2 B(\tfrac{1}{p}, q)}. $$

The mean, for pq > 1, is:

$$ \operatorname{E}(X) = \mu + \frac{2 v \sigma \lambda q^{\frac{1}{p}} B(\tfrac{2}{p}, q - \tfrac{1}{p})}{B(\tfrac{1}{p}, q)} - m. $$

The variance (i.e. E[(X − E(X))²]), for pq > 2, is:

$$ \operatorname{Var}(X) = (v\sigma)^2 q^{\frac{2}{p}} \left( (3\lambda^2+1) \frac{B(\tfrac{3}{p}, q-\tfrac{2}{p})}{B(\tfrac{1}{p}, q)} - 4\lambda^2 \left( \frac{B(\tfrac{2}{p}, q-\tfrac{1}{p})}{B(\tfrac{1}{p}, q)} \right)^2 \right). $$

The skewness (i.e. E[(X − E(X))³]), for pq > 3, is:

$$ \operatorname{E}\!\left[(X-\operatorname{E}(X))^3\right] = \frac{2 q^{\frac{3}{p}} \lambda (v\sigma)^3}{B(\tfrac{1}{p},q)^3} \Big( 8\lambda^2 B(\tfrac{2}{p}, q-\tfrac{1}{p})^3 - 3(1+3\lambda^2) B(\tfrac{1}{p}, q) B(\tfrac{2}{p}, q-\tfrac{1}{p}) B(\tfrac{3}{p}, q-\tfrac{2}{p}) + 2(1+\lambda^2) B(\tfrac{1}{p}, q)^2 B(\tfrac{4}{p}, q-\tfrac{3}{p}) \Big). $$

The kurtosis (i.e. E[(X − E(X))⁴]), for pq > 4, is:

$$ \operatorname{E}\!\left[(X-\operatorname{E}(X))^4\right] = \frac{q^{\frac{4}{p}} (v\sigma)^4}{B(\tfrac{1}{p},q)^4} \Big( -48\lambda^4 B(\tfrac{2}{p}, q-\tfrac{1}{p})^4 + 24\lambda^2(1+3\lambda^2) B(\tfrac{1}{p}, q) B(\tfrac{2}{p}, q-\tfrac{1}{p})^2 B(\tfrac{3}{p}, q-\tfrac{2}{p}) - 32\lambda^2(1+\lambda^2) B(\tfrac{1}{p}, q)^2 B(\tfrac{2}{p}, q-\tfrac{1}{p}) B(\tfrac{4}{p}, q-\tfrac{3}{p}) + (1+10\lambda^2+5\lambda^4) B(\tfrac{1}{p}, q)^3 B(\tfrac{5}{p}, q-\tfrac{4}{p}) \Big). $$
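A hedged numerical sketch of the h-th moment formula, again with m = 0 and v = 1 and illustrative parameter values.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

# h-th moment E[(X - mu + m)**h] versus numerical integration, with m = 0 and
# v = 1 for simplicity.  Parameter values are illustrative; p*q > h is needed.
mu, sigma, lam, p, q = 0.0, 1.0, 0.6, 2.0, 6.0

def pdf(x):
    return p / (
        2 * sigma * q ** (1 / p) * beta(1 / p, q)
        * (abs(x - mu) ** p / (q * sigma ** p * (lam * np.sign(x - mu) + 1) ** p) + 1)
        ** (1 / p + q)
    )

for h in range(1, 5):
    closed = (
        sigma ** h * q ** (h / p)
        * ((1 + lam) ** (h + 1) + (-1) ** h * (1 - lam) ** (h + 1))
        * beta((h + 1) / p, q - h / p) / (2 * beta(1 / p, q))
    )
    numeric = quad(lambda x: (x - mu) ** h * pdf(x), -np.inf, np.inf)[0]
    print(h, closed, numeric)  # closed form and numerical value agree
```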
Special and limiting cases of the skewed generalized t distribution include the skewed generalized error distribution, the generalized t distribution introduced by McDonald and Newey, [6] the skewed t proposed by Hansen, [8] the skewed Laplace distribution, the generalized error distribution (also known as the generalized normal distribution), a skewed normal distribution, the Student's t distribution, the skewed Cauchy distribution, the Laplace distribution, the uniform distribution, the normal distribution, and the Cauchy distribution. The graphic below, adapted from Hansen, McDonald, and Newey, [2] shows which parameters should be set to obtain some of the different special cases of the skewed generalized t distribution.
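As a rough numerical illustration of two of these reductions, the sketch below reuses the illustrative sgt_pdf helper (redefined so the snippet is self-contained) with the parameter settings listed for the normal and Cauchy cases; the choice v = √2 matches the normal and Cauchy parameterizations given further down.

```python
import numpy as np
from scipy.special import beta
from scipy.stats import cauchy, norm

def sgt_pdf(x, mu, sigma, lam, p, q, m=0.0, v=1.0):
    z = np.asarray(x, dtype=float) - mu + m
    return p / (
        2 * v * sigma * q ** (1 / p) * beta(1 / p, q)
        * (np.abs(z) ** p / (q * (v * sigma) ** p * (lam * np.sign(z) + 1) ** p) + 1)
        ** (1 / p + q)
    )

x = np.linspace(-4, 4, 9)
# lam = 0, p = 2, v = sqrt(2): a very large q approximates the normal density...
print(np.max(np.abs(sgt_pdf(x, 0, 1, 0, 2, 1e6, v=np.sqrt(2)) - norm.pdf(x))))   # ~0
# ...and q = 1/2 recovers the Cauchy density exactly.
print(np.max(np.abs(sgt_pdf(x, 0, 1, 0, 2, 0.5, v=np.sqrt(2)) - cauchy.pdf(x))))  # ~0
```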
The skewed generalized error distribution (SGED) has the pdf:

$$ \lim_{q \to \infty} f_{SGT}(x; \mu, \sigma, \lambda, p, q) = f_{SGED}(x; \mu, \sigma, \lambda, p) = \frac{p}{2 v \sigma \Gamma(\tfrac{1}{p})} e^{ -\left( \frac{|x-\mu+m|}{v \sigma (1 + \lambda \operatorname{sgn}(x-\mu+m))} \right)^{p} } $$

where

$$ m = \frac{2^{\frac{2}{p}} v \sigma \lambda \Gamma(\tfrac{1}{2} + \tfrac{1}{p})}{\sqrt{\pi}} $$

gives a mean of μ. Also

$$ v = \sqrt{ \frac{\pi \Gamma(\tfrac{1}{p})}{\pi (1 + 3\lambda^2) \Gamma(\tfrac{3}{p}) - 16^{\frac{1}{p}} \lambda^2 \Gamma(\tfrac{1}{2}+\tfrac{1}{p})^2 \Gamma(\tfrac{1}{p})} } $$

gives a variance of σ².
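A small numerical sketch, assuming the m and v given above and illustrative parameter values, that the SGED then has mean μ and variance σ²:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# SGED with the m and v above: the mean and variance should come out as mu and
# sigma**2.  Parameter values are illustrative.
mu, sigma, lam, p = 0.5, 1.2, 0.4, 1.5

v = np.sqrt(
    np.pi * gamma(1 / p)
    / (np.pi * (1 + 3 * lam ** 2) * gamma(3 / p)
       - 16 ** (1 / p) * lam ** 2 * gamma(0.5 + 1 / p) ** 2 * gamma(1 / p))
)
m = 2 ** (2 / p) * v * sigma * lam * gamma(0.5 + 1 / p) / np.sqrt(np.pi)

def pdf(x):
    z = x - mu + m
    return p / (2 * v * sigma * gamma(1 / p)) * np.exp(
        -(np.abs(z) / (v * sigma * (1 + lam * np.sign(z)))) ** p
    )

mean = quad(lambda x: x * pdf(x), -np.inf, np.inf)[0]
var = quad(lambda x: (x - mean) ** 2 * pdf(x), -np.inf, np.inf)[0]
print(mean, var)  # ~0.5 and ~1.44, i.e. mu and sigma**2
```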
The generalized t-distribution (GT) has the pdf:

$$ f_{SGT}(x; \mu, \sigma, \lambda = 0, p, q) = f_{GT}(x; \mu, \sigma, p, q) = \frac{p}{2 v \sigma q^{\frac{1}{p}} B(\tfrac{1}{p}, q) \left( \frac{|x-\mu|^p}{q (v\sigma)^p} + 1 \right)^{\frac{1}{p}+q}} $$

where

$$ v = \frac{1}{q^{\frac{1}{p}}} \sqrt{ \frac{B(\tfrac{1}{p}, q)}{B(\tfrac{3}{p}, q - \tfrac{2}{p})} } $$

gives a variance of σ².
The skewed t-distribution (ST) has the pdf:

$$ f_{SGT}(x; \mu, \sigma, \lambda, p = 2, q) = f_{ST}(x; \mu, \sigma, \lambda, q) = \frac{\Gamma(\tfrac{1}{2}+q)}{v \sigma (\pi q)^{\frac{1}{2}} \Gamma(q) \left( \frac{|x-\mu+m|^2}{q (v\sigma)^2 (\lambda \operatorname{sgn}(x-\mu+m) + 1)^2} + 1 \right)^{\frac{1}{2}+q}} $$

where

$$ m = \frac{2 v \sigma \lambda q^{\frac{1}{2}} \Gamma(q - \tfrac{1}{2})}{\sqrt{\pi}\, \Gamma(q)} $$

gives a mean of μ. Also

$$ v = \frac{1}{q^{\frac{1}{2}}} \left( \frac{3\lambda^2+1}{2(q-1)} - \frac{4\lambda^2}{\pi} \left( \frac{\Gamma(q-\tfrac{1}{2})}{\Gamma(q)} \right)^2 \right)^{-\frac{1}{2}} $$

gives a variance of σ².
The skewed Laplace distribution (SLaplace) has the pdf:

$$ \lim_{q \to \infty} f_{SGT}(x; \mu, \sigma, \lambda, p = 1, q) = f_{SLaplace}(x; \mu, \sigma, \lambda) = \frac{1}{2 v \sigma} e^{ -\frac{|x-\mu+m|}{v \sigma (1 + \lambda \operatorname{sgn}(x-\mu+m))} } $$

where

$$ m = 2 v \sigma \lambda $$

gives a mean of μ. Also

$$ v = \left( 2 (1 + \lambda^2) \right)^{-\frac{1}{2}} $$

gives a variance of σ².
The generalized error distribution (GED, also known as the generalized normal distribution) has the pdf:

$$ \lim_{q \to \infty} f_{SGT}(x; \mu, \sigma, \lambda = 0, p, q) = f_{GED}(x; \mu, \sigma, p) = \frac{p}{2 v \sigma \Gamma(\tfrac{1}{p})} e^{ -\left( \frac{|x-\mu|}{v \sigma} \right)^{p} } $$

where

$$ v = \sqrt{ \frac{\Gamma(\tfrac{1}{p})}{\Gamma(\tfrac{3}{p})} } $$

gives a variance of σ².
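A quick check of this choice of v, with illustrative values of μ, σ, and p:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# GED check: with v = sqrt(gamma(1/p) / gamma(3/p)) the variance is sigma**2.
mu, sigma, p = 0.0, 2.0, 1.3
v = np.sqrt(gamma(1 / p) / gamma(3 / p))

def pdf(x):
    return p / (2 * v * sigma * gamma(1 / p)) * np.exp(-(np.abs(x - mu) / (v * sigma)) ** p)

print(quad(lambda x: (x - mu) ** 2 * pdf(x), -np.inf, np.inf)[0])  # ~4.0 = sigma**2
```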
The skewed normal distribution (SNormal) has the pdf:

$$ \lim_{q \to \infty} f_{SGT}(x; \mu, \sigma, \lambda, p = 2, q) = f_{SNormal}(x; \mu, \sigma, \lambda) = \frac{1}{v \sigma \sqrt{\pi}} e^{ -\left( \frac{|x-\mu+m|}{v \sigma (1 + \lambda \operatorname{sgn}(x-\mu+m))} \right)^{2} } $$

where

$$ m = \frac{2 v \sigma \lambda}{\sqrt{\pi}} $$

gives a mean of μ. Also

$$ v = \sqrt{ \frac{2\pi}{\pi (1 + 3\lambda^2) - 8\lambda^2} } $$

gives a variance of σ².

The distribution should not be confused with the skew normal distribution or another asymmetric version. Indeed, the distribution here is a special case of a bi-Gaussian, whose left and right widths are proportional to 1 − λ and 1 + λ.
The Student's t-distribution (T) has the pdf:

$$ f_{SGT}(x; \mu, \sigma, \lambda = 0, p = 2, q) = f_{T}(x; \mu, \sigma, q) = \frac{\Gamma(\tfrac{1}{2}+q)}{(2\pi q)^{\frac{1}{2}} \sigma \Gamma(q)} \left( \frac{(x-\mu)^2}{2 q \sigma^2} + 1 \right)^{-(\frac{1}{2}+q)}. $$

Here v = √2 was substituted, so that σ is the conventional scale parameter; with σ = 1 this is the standard Student's t-distribution with 2q degrees of freedom.
The skewed Cauchy distribution (SCauchy) has the pdf:

$$ f_{SGT}(x; \mu, \sigma, \lambda, p = 2, q = \tfrac{1}{2}) = f_{SCauchy}(x; \mu, \sigma, \lambda) = \frac{1}{\sigma \pi \left( \frac{(x-\mu)^2}{\sigma^2 (\lambda \operatorname{sgn}(x-\mu) + 1)^2} + 1 \right)}. $$

Here m = 0 and v = √2 was substituted.
The mean, variance, skewness, and kurtosis of the skewed Cauchy distribution are all undefined.
The Laplace distribution has the pdf:

$$ \lim_{q \to \infty} f_{SGT}(x; \mu, \sigma, \lambda = 0, p = 1, q) = f_{Laplace}(x; \mu, \sigma) = \frac{1}{2\sigma} e^{-\frac{|x-\mu|}{\sigma}}. $$

Here v = 1 was substituted, so that σ is the conventional scale parameter of the Laplace distribution.
The uniform distribution has the pdf:

$$ \lim_{p \to \infty} f_{SGT}(x; \mu, \sigma, \lambda = 0, p, q) = f(x) = \begin{cases} \dfrac{1}{2 v \sigma} & |x - \mu| < v\sigma \\ 0 & \text{otherwise.} \end{cases} $$

Thus the standard uniform parameterization is obtained if μ = 1/2, v = 1, and σ = 1/2.
The normal distribution has the pdf:

$$ \lim_{q \to \infty} f_{SGT}(x; \mu, \sigma, \lambda = 0, p = 2, q) = f_{Normal}(x; \mu, \sigma) = \frac{e^{-\frac{(x-\mu)^2}{2\sigma^2}}}{\sigma \sqrt{2\pi}} $$

where

$$ v = \sqrt{2} $$

gives a variance of σ².
The Cauchy distribution has the pdf:

$$ f_{SGT}(x; \mu, \sigma, \lambda = 0, p = 2, q = \tfrac{1}{2}) = f_{Cauchy}(x; \mu, \sigma) = \frac{1}{\sigma \pi \left( \left( \frac{x-\mu}{\sigma} \right)^2 + 1 \right)}. $$

Here v = √2 was substituted.