Parameters | μ (real), location parameter; σ > 0 (real), scale parameter; ν > 0, degrees of freedom (shape) parameter
---|---
Support | x ∈ (0, +∞)
Mean | infinite
Median | e^μ
Variance | infinite
Skewness | does not exist
Ex. kurtosis | does not exist
MGF | does not exist
In probability theory, a log-t distribution or log-Student t distribution is a probability distribution of a random variable whose logarithm is distributed in accordance with a Student's t-distribution. If X is a random variable with a Student's t-distribution, then Y = exp(X) has a log-t distribution; likewise, if Y has a log-t distribution, then X = log(Y) has a Student's t-distribution. [1]
The log-t distribution has the probability density function:

$$ f(x \mid \mu, \sigma, \nu) = \frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)\sqrt{\nu\pi}\,\sigma x}\left(1 + \frac{1}{\nu}\left(\frac{\ln x - \mu}{\sigma}\right)^{2}\right)^{-\frac{\nu+1}{2}}, \qquad x > 0, $$

where μ is the location parameter of the underlying (non-standardized) Student's t-distribution, σ is the scale parameter of the underlying (non-standardized) Student's t-distribution, and ν is the number of degrees of freedom of the underlying Student's t-distribution. [1] If μ = 0 and σ = 1, then the underlying distribution is the standardized Student's t-distribution.
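Because the density is simply the Student's t density of ln x divided by x, it can be evaluated with SciPy's t distribution. A minimal sketch (the function name `logt_pdf` and the parameter values are illustrative, not from the source):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def logt_pdf(x, mu=0.0, sigma=1.0, nu=3.0):
    """Density of Y = exp(X) with X ~ (non-standardized) Student's t:
    f_Y(x) = f_X(ln x) / x for x > 0, and 0 otherwise."""
    x = np.asarray(x, dtype=float)
    safe = np.where(x > 0, x, 1.0)  # avoid taking log of non-positive values
    dens = stats.t.pdf(np.log(safe), df=nu, loc=mu, scale=sigma) / safe
    return np.where(x > 0, dens, 0.0)

# Sanity check: integrating the density over an interval should match the
# probability assigned by the underlying t-distribution on the log scale.
mass, _ = quad(lambda v: float(logt_pdf(v, nu=3.0)), 0.5, 5.0)
ref = stats.t.cdf(np.log(5.0), df=3) - stats.t.cdf(np.log(0.5), df=3)
print(mass, ref)  # the two numbers agree
```

The change-of-variables factor 1/x is what distinguishes the log-t density from the t density evaluated at ln x.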
If ν = 1, then the distribution is a log-Cauchy distribution. [1] As ν approaches infinity, the distribution approaches a log-normal distribution. [1] [2] Although the log-normal distribution has finite moments, for any finite degrees of freedom the mean, variance, and all higher moments of the log-t distribution are infinite or do not exist. [1]
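Both limiting relationships can be checked numerically: at ν = 1 the log-t density coincides with the log-Cauchy density, and for large ν it is close to the log-normal density. A quick sketch (the grid and parameter choices are illustrative):

```python
import numpy as np
from scipy import stats

x = np.linspace(0.1, 10.0, 500)

def logt_pdf(x, mu, sigma, nu):
    # Log-t density via change of variables: t density of ln x, divided by x.
    return stats.t.pdf(np.log(x), df=nu, loc=mu, scale=sigma) / x

# nu = 1: the log-t density coincides with the log-Cauchy density.
log_cauchy = stats.cauchy.pdf(np.log(x)) / x
err_cauchy = np.max(np.abs(logt_pdf(x, 0.0, 1.0, 1.0) - log_cauchy))

# Very large nu: the log-t density approaches the log-normal density.
log_normal = stats.lognorm.pdf(x, s=1.0, scale=np.exp(0.0))
err_normal = np.max(np.abs(logt_pdf(x, 0.0, 1.0, 1e6) - log_normal))

print(err_cauchy, err_normal)  # both very small
```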
The log-t distribution is a special case of the generalized beta distribution of the second kind. [1] [3] [4] The log-t distribution is an example of a compound probability distribution between the lognormal distribution and inverse gamma distribution whereby the variance parameter of the lognormal distribution is a random variable distributed according to an inverse gamma distribution. [3] [5]
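The compound (scale-mixture) representation can be sketched by simulation: draw a variance from an inverse gamma distribution, then draw a log-normal variate with that variance; the marginal result matches sampling the log-t distribution directly. The shape and scale below follow the standard scale-mixture form of the t distribution, with the concrete parameter values chosen only for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, nu, n = 0.0, 1.0, 5.0, 200_000

# Compound route: variance ~ InvGamma(nu/2, nu*sigma^2/2),
# then log-normal given that variance.
v = stats.invgamma.rvs(a=nu / 2, scale=nu * sigma**2 / 2,
                       size=n, random_state=rng)
y_compound = np.exp(rng.normal(mu, np.sqrt(v)))

# Direct route: exponentiate a (non-standardized) Student's t variate.
y_direct = np.exp(stats.t.rvs(df=nu, loc=mu, scale=sigma,
                              size=n, random_state=rng))

# The sample medians of both routes should be close to exp(mu) = 1.
print(np.median(y_compound), np.median(y_direct))
```

Comparing medians rather than means is deliberate: the mean of the log-t distribution is infinite, so sample means do not stabilize.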
The log-t distribution has applications in finance. [3] For example, the distribution of stock market returns often shows fatter tails than a normal distribution, and thus tends to fit a Student's t-distribution better than a normal distribution. While the Black–Scholes model based on the log-normal distribution is often used to price stock options, option pricing formulas based on the log-t distribution can be a preferable alternative if the returns have fat tails. [6] The fact that the log-t distribution has infinite mean is a problem when using it to value options, but there are techniques to overcome that limitation, such as truncating the probability density function at some arbitrarily large value. [6] [7] [8]
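The truncation idea can be illustrated with a Monte Carlo sketch of a European call payoff under a log-t model for the terminal price, with the price capped at an arbitrary large bound so the estimate stays finite. All parameter values here are illustrative assumptions, not a calibrated model, and discounting and risk-neutral drift adjustments are omitted:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

S0, K, nu, sigma, n = 100.0, 105.0, 4.0, 0.2, 400_000
cap = 10 * S0  # arbitrary truncation bound on the terminal price

# Terminal price: S_T = S0 * exp(X) with X ~ t_nu(0, sigma),
# i.e. S_T / S0 is log-t distributed.
s_t = S0 * np.exp(stats.t.rvs(df=nu, scale=sigma, size=n, random_state=rng))

# Without truncation the expected payoff diverges (infinite mean);
# capping S_T makes the Monte Carlo estimate finite and stable.
payoff = np.maximum(np.minimum(s_t, cap) - K, 0.0)
price = payoff.mean()
print(price)
```

The choice of cap is the arbitrary element the text refers to; in practice it would be set large enough that the truncated tail mass is negligible for the option in question.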
The log-t distribution also has applications in hydrology and in analyzing data on cancer remission. [1] [9]
Analogous to the log-normal distribution, multivariate forms of the log-t distribution exist. In this case, the location parameter is replaced by a vector μ and the scale parameter by a matrix Σ. [1]