Kurtosis


In probability theory and statistics, kurtosis (from Greek: κυρτός, kyrtos or kurtos, meaning "curved, arching") is a measure of the "tailedness" of the probability distribution of a real-valued random variable. Like skewness, kurtosis describes the shape of a probability distribution, and there are different ways of quantifying it for a theoretical distribution and corresponding ways of estimating it from a sample drawn from a population. Different measures of kurtosis may have different interpretations.


The standard measure of a distribution's kurtosis, originating with Karl Pearson, [1] is a scaled version of the fourth moment of the distribution. This number is related to the tails of the distribution, not its peak; [2] hence, the sometimes-seen characterization of kurtosis as "peakedness" is incorrect. For this measure, higher kurtosis corresponds to greater extremity of deviations (or outliers), and not the configuration of data near the mean.

The kurtosis of any univariate normal distribution is 3. It is common to compare the kurtosis of a distribution to this value. Distributions with kurtosis less than 3 are said to be platykurtic, although this does not imply the distribution is "flat-topped" as is sometimes stated. Rather, it means the distribution produces fewer and less extreme outliers than does the normal distribution. An example of a platykurtic distribution is the uniform distribution, which does not produce outliers. Distributions with kurtosis greater than 3 are said to be leptokurtic. An example of a leptokurtic distribution is the Laplace distribution, which has tails that asymptotically approach zero more slowly than a Gaussian, and therefore produces more outliers than the normal distribution. It is also common practice to use an adjusted version of Pearson's kurtosis, the excess kurtosis, which is the kurtosis minus 3, to provide the comparison to the standard normal distribution. Some authors use "kurtosis" by itself to refer to the excess kurtosis. For clarity and generality, however, this article follows the non-excess convention and explicitly indicates where excess kurtosis is meant.

Alternative measures of kurtosis are the L-kurtosis, a scaled version of the fourth L-moment, and measures based on four population or sample quantiles. [3] These are analogous to the alternative measures of skewness that are not based on ordinary moments. [3]

Pearson moments

The kurtosis is the fourth standardized moment, defined as

$$\operatorname{Kurt}[X] = \operatorname{E}\!\left[\left(\frac{X - \mu}{\sigma}\right)^{4}\right] = \frac{\operatorname{E}\!\left[(X - \mu)^{4}\right]}{\left(\operatorname{E}\!\left[(X - \mu)^{2}\right]\right)^{2}} = \frac{\mu_4}{\sigma^4},$$

where μ4 is the fourth central moment and σ is the standard deviation. Several letters are used in the literature to denote the kurtosis. A very common choice is κ, which is fine as long as it is clear that it does not refer to a cumulant. Other choices include γ2, to be similar to the notation for skewness, although sometimes this is instead reserved for the excess kurtosis.

The kurtosis is bounded below by the squared skewness plus 1: [4]:432

$$\frac{\mu_4}{\sigma^4} \geq \left(\frac{\mu_3}{\sigma^3}\right)^{2} + 1,$$

where μ3 is the third central moment. The lower bound is realized by the Bernoulli distribution. There is no upper limit to the kurtosis of a general probability distribution, and it may be infinite.

A reason why some authors favor the excess kurtosis is that cumulants are extensive. Formulas related to the extensive property are more naturally expressed in terms of the excess kurtosis. For example, let X1, ..., Xn be independent random variables for which the fourth moment exists, and let Y be the random variable defined by the sum of the Xi. The excess kurtosis of Y is

$$\operatorname{Kurt}[Y] - 3 = \frac{1}{\left(\sum_{j=1}^{n} \sigma_j^{2}\right)^{2}} \sum_{i=1}^{n} \sigma_i^{4} \left(\operatorname{Kurt}[X_i] - 3\right),$$

where σi is the standard deviation of Xi. In particular if all of the Xi have the same variance, then this simplifies to

$$\operatorname{Kurt}[Y] - 3 = \frac{1}{n^{2}} \sum_{i=1}^{n} \left(\operatorname{Kurt}[X_i] - 3\right).$$
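The equal-variance case is easy to check by simulation. The following sketch (assuming NumPy and SciPy are available) sums n i.i.d. Laplace variables, whose excess kurtosis is 3, and confirms that the excess kurtosis of the sum is approximately 3/n:

```python
# Minimal Monte Carlo check of the equal-variance formula above, assuming
# NumPy and SciPy are available. Laplace variables have excess kurtosis 3,
# so a sum of n i.i.d. Laplace variables should have excess kurtosis ~ 3/n.
import numpy as np
from scipy.stats import kurtosis   # returns excess kurtosis by default

rng = np.random.default_rng(0)
n = 4                                            # number of summands
y = rng.laplace(size=(1_000_000, n)).sum(axis=1)

print(kurtosis(y))                               # close to 3/n = 0.75
```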
The reason not to subtract 3 is that the bare fourth moment better generalizes to multivariate distributions, especially when independence is not assumed. The cokurtosis between pairs of variables is a fourth-order tensor. For a bivariate normal distribution, the cokurtosis tensor has off-diagonal terms that are neither 0 nor 3 in general, so attempting to "correct" for an excess becomes confusing. It is true, however, that the joint cumulants of degree greater than two for any multivariate normal distribution are zero.

For two random variables, X and Y, not necessarily independent, the kurtosis of the sum, X + Y, is

$$\operatorname{Kurt}[X + Y] = \frac{1}{\sigma_{X+Y}^{4}} \Big( \sigma_X^{4}\operatorname{Kurt}[X] + 4\sigma_X^{3}\sigma_Y \operatorname{Cokurt}[X,X,X,Y] + 6\sigma_X^{2}\sigma_Y^{2}\operatorname{Cokurt}[X,X,Y,Y] + 4\sigma_X \sigma_Y^{3}\operatorname{Cokurt}[X,Y,Y,Y] + \sigma_Y^{4}\operatorname{Kurt}[Y] \Big).$$

Note that the fourth-order binomial coefficients (1, 4, 6, 4, 1) appear in the above equation.

Interpretation

The exact interpretation of the Pearson measure of kurtosis (or excess kurtosis) used to be disputed, but is now settled. As Westfall notes in 2014, [2] "...its only unambiguous interpretation is in terms of tail extremity; i.e., either existing outliers (for the sample kurtosis) or propensity to produce outliers (for the kurtosis of a probability distribution)." The logic is simple: kurtosis is the average (or expected value) of the standardized data raised to the fourth power. Standardized values that are less than 1 in absolute value (i.e., data within one standard deviation of the mean, where the "peak" would be) contribute virtually nothing to kurtosis, since raising a number less than 1 in absolute value to the fourth power brings it closer to zero. The only data values (observed or observable) that contribute to kurtosis in any meaningful way are those outside the region of the peak; i.e., the outliers. Therefore, kurtosis measures outliers only; it measures nothing about the "peak".
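To see the effect of the fourth power concretely, compare the contributions of a central observation and an outlier (a minimal illustrative sketch; the values are chosen here for illustration only):

```python
# Illustrative sketch (values chosen by us): the fourth power makes central
# observations negligible relative to outliers.
z_central = 0.5        # a point within one standard deviation of the mean
z_outlier = 5.0        # a five-sigma outlier

print(z_central ** 4)  # 0.0625 -- contributes almost nothing to the average
print(z_outlier ** 4)  # 625.0  -- ten thousand times larger; dominates kurtosis
```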

Many incorrect interpretations of kurtosis that involve notions of peakedness have been given. One is that kurtosis measures both the "peakedness" of the distribution and the heaviness of its tail. [5] Various other incorrect interpretations have been suggested, such as "lack of shoulders" (where the "shoulder" is defined vaguely as the area between the peak and the tail, or more specifically as the area about one standard deviation from the mean) or "bimodality". [6] Balanda and MacGillivray assert that the standard definition of kurtosis "is a poor measure of the kurtosis, peakedness, or tail weight of a distribution" [5] :114 and instead propose to "define kurtosis vaguely as the location- and scale-free movement of probability mass from the shoulders of a distribution into its center and tails". [5]

Moors' interpretation

In 1986 Moors gave an interpretation of kurtosis. [7] Let

$$Z = \frac{X - \mu}{\sigma},$$

where X is a random variable, μ is the mean and σ is the standard deviation.

Now by definition of the kurtosis, $\kappa = \operatorname{E}[Z^{4}]$, and by the well-known identity $\operatorname{E}[V^{2}] = \operatorname{Var}[V] + (\operatorname{E}[V])^{2}$,

$$\kappa = \operatorname{Var}\!\left[Z^{2}\right] + \left(\operatorname{E}\!\left[Z^{2}\right]\right)^{2} = \operatorname{Var}\!\left[Z^{2}\right] + 1.$$

The kurtosis can now be seen as a measure of the dispersion of Z2 around its expectation. Alternatively it can be seen to be a measure of the dispersion of Z around +1 and −1. κ attains its minimal value in a symmetric two-point distribution. In terms of the original variable X, the kurtosis is a measure of the dispersion of X around the two values μ ± σ.
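The identity is easy to verify numerically. The sketch below (assuming NumPy is available; the exponential distribution, whose kurtosis is 9, is used purely as an example) compares E[Z⁴] with Var[Z²] + 1:

```python
# Minimal numerical check of the identity kappa = Var(Z^2) + 1, assuming
# NumPy is available; the exponential distribution is only an example.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=1_000_000)

z = (x - x.mean()) / x.std()       # standardize
kappa = np.mean(z ** 4)            # Pearson kurtosis, ~9 for the exponential

print(kappa, np.var(z ** 2) + 1)   # the two values agree up to sampling noise
```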

High values of κ arise in two circumstances: where the probability mass is concentrated around the mean and the data-generating process produces occasional values far from the mean, and where the probability mass is concentrated in the tails of the distribution.

Excess kurtosis

The excess kurtosis is defined as kurtosis minus 3. There are 3 distinct regimes as described below.

Mesokurtic

Distributions with zero excess kurtosis are called mesokurtic, or mesokurtotic. The most prominent example of a mesokurtic distribution is the normal distribution family, regardless of the values of its parameters. A few other well-known distributions can be mesokurtic, depending on parameter values: for example, the binomial distribution is mesokurtic for $p = \tfrac{1}{2} \pm \sqrt{\tfrac{1}{12}}$.

Leptokurtic

A distribution with positive excess kurtosis is called leptokurtic, or leptokurtotic. "Lepto-" means "slender". [8] In terms of shape, a leptokurtic distribution has fatter tails. Examples of leptokurtic distributions include the Student's t-distribution, Rayleigh distribution, Laplace distribution, exponential distribution, Poisson distribution and the logistic distribution. Such distributions are sometimes termed super-Gaussian. [9]

Platykurtic


A distribution with negative excess kurtosis is called platykurtic, or platykurtotic. "Platy-" means "broad". [10] In terms of shape, a platykurtic distribution has thinner tails. Examples of platykurtic distributions include the continuous and discrete uniform distributions, and the raised cosine distribution. The most platykurtic distribution of all is the Bernoulli distribution with p = 1/2 (for example the number of times one obtains "heads" when flipping a coin once, a coin toss), for which the excess kurtosis is −2. Such distributions are sometimes termed sub-Gaussian, a term originally proposed by Jean-Pierre Kahane [11] and further described by Buldygin and Kozachenko. [12]

Graphical examples

The Pearson type VII family

pdf for the Pearson type VII distribution with excess kurtosis of infinity (red); 2 (blue); and 0 (black)

log-pdf for the Pearson type VII distribution with excess kurtosis of infinity (red); 2 (blue); 1, 1/2, 1/4, 1/8, and 1/16 (gray); and 0 (black)

The effects of kurtosis are illustrated using a parametric family of distributions whose kurtosis can be adjusted while their lower-order moments and cumulants remain constant. Consider the Pearson type VII family, which is a special case of the Pearson type IV family restricted to symmetric densities. The probability density function is given by

$$f(x;\, a, m) = \frac{\Gamma(m)}{a\sqrt{\pi}\,\Gamma\!\left(m - \tfrac{1}{2}\right)} \left[1 + \left(\frac{x}{a}\right)^{2}\right]^{-m},$$

where a is a scale parameter and m is a shape parameter.

All densities in this family are symmetric. The kth moment exists provided m > (k + 1)/2. For the kurtosis to exist, we require m > 5/2. Then the mean and skewness exist and are both identically zero. Setting a² = 2m − 3 makes the variance equal to unity. Then the only free parameter is m, which controls the fourth moment (and cumulant) and hence the kurtosis; the excess kurtosis equals 6/(2m − 5). One can reparameterize with m = 5/2 + 3/γ2, where γ2 is the excess kurtosis as defined above. This yields a one-parameter leptokurtic family with zero mean, unit variance, zero skewness, and arbitrary non-negative excess kurtosis. The reparameterized density is

$$g(x;\, \gamma_2) = f\!\left(x;\; a = \sqrt{2 + \frac{6}{\gamma_2}},\; m = \frac{5}{2} + \frac{3}{\gamma_2}\right).$$

In the limit as γ2 → ∞ one obtains the density

$$g(x;\, \infty) = 3\left(2 + x^{2}\right)^{-5/2},$$

which is shown as the red curve in the images on the right.

In the other direction as γ2 → 0 one obtains the standard normal density as the limiting distribution, shown as the black curve.

In the images on the right, the blue curve represents the density with excess kurtosis of 2. The top image shows that leptokurtic densities in this family have a higher peak than the mesokurtic normal density, although this conclusion is only valid for this select family of distributions. The comparatively fatter tails of the leptokurtic densities are illustrated in the second image, which plots the natural logarithm of the Pearson type VII densities: the black curve is the logarithm of the standard normal density, which is a parabola. One can see that the normal density allocates little probability mass to the regions far from the mean ("has thin tails"), compared with the blue curve of the leptokurtic Pearson type VII density with excess kurtosis of 2. Between the blue curve and the black are other Pearson type VII densities with γ2 = 1, 1/2, 1/4, 1/8, and 1/16. The red curve again shows the upper limit of the Pearson type VII family, with γ2 = ∞ (which, strictly speaking, means that the fourth moment does not exist). The red curve decreases the slowest as one moves outward from the origin ("has fat tails").
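This family is straightforward to evaluate numerically. The sketch below (assuming SciPy is available; the helper name pearson7_pdf is ours) implements the reparameterized density and confirms, for the blue curve with excess kurtosis 2, that the variance is 1 and the fourth moment recovers γ2:

```python
# A sketch of the reparameterized Pearson type VII density g(x; gamma2),
# assuming SciPy is available; pearson7_pdf is our own helper name.
from math import gamma, sqrt, pi, inf
from scipy.integrate import quad

def pearson7_pdf(x, gamma2):
    """Density with mean 0, variance 1, skewness 0, excess kurtosis gamma2 > 0."""
    m = 2.5 + 3.0 / gamma2                          # shape parameter
    a = sqrt(2.0 + 6.0 / gamma2)                    # scale giving unit variance
    c = gamma(m) / (a * sqrt(pi) * gamma(m - 0.5))  # normalizing constant
    return c * (1.0 + (x / a) ** 2) ** (-m)

# Numerical checks for the blue curve (excess kurtosis 2):
var, _ = quad(lambda x: x**2 * pearson7_pdf(x, 2.0), -inf, inf)
m4, _ = quad(lambda x: x**4 * pearson7_pdf(x, 2.0), -inf, inf)
print(var)       # ~1.0
print(m4 - 3.0)  # ~2.0, recovering the excess kurtosis
```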

Other well-known distributions

Probability density functions for selected distributions with mean 0, variance 1 and different excess kurtosis

Logarithms of probability density functions for selected distributions with mean 0, variance 1 and different excess kurtosis

Several well-known, unimodal, and symmetric distributions from different parametric families are compared here. Each has a mean and skewness of zero. The parameters have been chosen to result in a variance equal to 1 in each case. The images on the right show curves for the following seven densities, on a linear scale and logarithmic scale:

- Laplace distribution, excess kurtosis = 3
- hyperbolic secant distribution, excess kurtosis = 2
- logistic distribution, excess kurtosis = 1.2
- normal distribution, excess kurtosis = 0
- raised cosine distribution, excess kurtosis ≈ −0.5938
- Wigner semicircle distribution, excess kurtosis = −1
- uniform distribution, excess kurtosis = −1.2

Note that in these cases the platykurtic densities have bounded support, whereas the densities with positive or zero excess kurtosis are supported on the whole real line.

One cannot infer that high or low kurtosis distributions have the characteristics indicated by these examples: there exist platykurtic densities with infinite support and leptokurtic densities with finite support. Likewise, there exist platykurtic densities with infinite peakedness and leptokurtic densities that appear flat-topped.

Sample kurtosis

Definitions

A natural but biased estimator

For a sample of n values, a method of moments estimator of the population excess kurtosis can be defined as

$$g_2 = \frac{m_4}{m_2^{2}} - 3 = \frac{\tfrac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^{4}}{\left(\tfrac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^{2}\right)^{2}} - 3,$$

where m4 is the fourth sample moment about the mean, m2 is the second sample moment about the mean (that is, the sample variance), xi is the ith value, and x̄ is the sample mean.

This formula has the simpler representation,

$$g_2 = \frac{1}{n} \sum_{i=1}^{n} z_i^{4} - 3,$$

where the zi values are the standardized data values using the standard deviation defined using n rather than n − 1 in the denominator.

For example, suppose the data values are 0, 3, 4, 1, 2, 3, 0, 2, 1, 3, 2, 0, 2, 2, 3, 2, 5, 2, 3, 999.

Then the zi values are −0.239, −0.225, −0.221, −0.234, −0.230, −0.225, −0.239, −0.230, −0.234, −0.225, −0.230, −0.239, −0.230, −0.230, −0.225, −0.230, −0.216, −0.230, −0.225, 4.359

and the zi⁴ values are 0.003, 0.003, 0.002, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.002, 0.003, 0.003, 360.976.

The average of these values is 18.05 and the excess kurtosis is thus 18.05 − 3 = 15.05. This example makes it clear that data near the "middle" or "peak" of the distribution do not contribute to the kurtosis statistic; hence kurtosis does not measure "peakedness". It is simply a measure of the outlier, 999 in this example.
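The computation is easy to reproduce; a minimal sketch assuming NumPy is available:

```python
# Reproduces the worked example above, assuming NumPy is available.
import numpy as np

x = np.array([0, 3, 4, 1, 2, 3, 0, 2, 1, 3,
              2, 0, 2, 2, 3, 2, 5, 2, 3, 999], dtype=float)

z = (x - x.mean()) / x.std()   # np.std divides by n, matching the definition above
g2 = np.mean(z ** 4) - 3       # sample excess kurtosis

print(g2)                      # ~15.05, driven entirely by the outlier 999
```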

Standard unbiased estimator

Given a sub-set of samples from a population, the sample excess kurtosis g2 above is a biased estimator of the population excess kurtosis. An alternative estimator of the population excess kurtosis, which is unbiased in random samples of a normal distribution, is defined as follows: [3]

$$G_2 = \frac{k_4}{k_2^{2}} = \frac{n - 1}{(n - 2)(n - 3)} \left[(n + 1)\,\frac{m_4}{m_2^{2}} - 3(n - 1)\right] = \frac{n - 1}{(n - 2)(n - 3)} \left[(n + 1)\, g_2 + 6\right],$$

where k4 is the unique symmetric unbiased estimator of the fourth cumulant, k2 is the unbiased estimate of the second cumulant (identical to the unbiased estimate of the sample variance), m4 is the fourth sample moment about the mean, and m2 is the second sample moment about the mean. This adjusted Fisher–Pearson standardized moment coefficient is the version found in Excel and several statistical packages including Minitab, SAS and SPSS. [13]

Unfortunately, in nonnormal samples G2 is itself generally biased.
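The adjustment is straightforward to apply in code; SciPy's kurtosis function computes the same quantity when called with bias=False. A sketch, assuming NumPy and SciPy and reusing the 20-value example from above:

```python
# The bias-adjusted estimator G2, assuming NumPy and SciPy are available;
# x is the same 20-value example data as in the previous sketch.
import numpy as np
from scipy.stats import kurtosis

x = np.array([0, 3, 4, 1, 2, 3, 0, 2, 1, 3,
              2, 0, 2, 2, 3, 2, 5, 2, 3, 999], dtype=float)
n = len(x)

g2 = np.mean(((x - x.mean()) / x.std()) ** 4) - 3        # method-of-moments estimate
G2 = (n - 1) / ((n - 2) * (n - 3)) * ((n + 1) * g2 + 6)  # adjusted coefficient

print(G2)
print(kurtosis(x, fisher=True, bias=False))              # SciPy computes the same value
```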

Upper bound

Sharma and Bhandari give an upper bound for the sample kurtosis of n (n > 2) real numbers in terms of n and the corresponding sample skewness g1. [14]

Variance under normality

The variance of the sample kurtosis of a sample of size n from the normal distribution is [15]

$$\operatorname{Var}(g_2) = \frac{24\, n\, (n-1)^{2}}{(n-3)(n-2)(n+3)(n+5)}.$$

Stated differently, under the assumption that the underlying random variable X is normally distributed, it can be shown that $\sqrt{n}\, g_2 \xrightarrow{d} \mathcal{N}(0, 24)$. [16]
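A quick simulation sketch (assuming NumPy and SciPy are available) illustrates the asymptotic behaviour: for large normal samples the sample excess kurtosis has variance close to 24/n:

```python
# Monte Carlo check that Var(g2) is close to 24/n for normal samples,
# assuming NumPy and SciPy are available.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n = 1000
g2 = kurtosis(rng.standard_normal((10_000, n)), axis=1)  # one g2 per row

print(np.var(g2))   # close to 24/n = 0.024
```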

Applications

The sample kurtosis is a useful measure of whether there is a problem with outliers in a data set. Larger kurtosis indicates a more serious outlier problem, and may lead the researcher to choose alternative statistical methods.

D'Agostino's K-squared test is a goodness-of-fit normality test based on a combination of the sample skewness and sample kurtosis, as is the Jarque–Bera test for normality.
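Both tests are available in SciPy; scipy.stats.normaltest implements D'Agostino and Pearson's K-squared test. A minimal usage sketch (the Laplace sample is a stand-in example of a non-normal, leptokurtic data set):

```python
# Skewness/kurtosis-based normality tests in SciPy; normaltest implements
# D'Agostino and Pearson's K-squared test.
import numpy as np
from scipy.stats import normaltest, jarque_bera

rng = np.random.default_rng(0)
x = rng.laplace(size=5_000)   # leptokurtic, so both tests should reject

print(normaltest(x))          # D'Agostino K-squared statistic and p-value
print(jarque_bera(x))         # Jarque-Bera statistic and p-value
```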

For non-normal samples, the variance of the sample variance depends on the kurtosis; for details, see the article on variance.

Pearson's definition of kurtosis is used as an indicator of intermittency in turbulence. [17] It is also used in magnetic resonance imaging to quantify non-Gaussian diffusion. [18]

A concrete example is the following lemma by He, Zhang, and Zhang: [19] assume a random variable X has expectation μ, variance σ² and kurtosis κ, and that we sample n independent copies of X. The lemma bounds the probability that all n samples fall on the same side of the expectation, with a bound that weakens as κ grows. This shows that with sufficiently many samples, we will see one that is above the expectation with high probability, where the number of samples needed grows with the kurtosis. In other words: if the kurtosis is large, we might see many values either all below or all above the mean.

Kurtosis convergence

When band-pass filters are applied to digital images, the kurtosis values of the filtered images tend to be uniform, independent of the range of the filter. This behavior, termed kurtosis convergence, can be used to detect image splicing in forensic analysis. [20]

Other measures

A different measure of "kurtosis" is provided by using L-moments instead of the ordinary moments. [21] [22]


References

  1. Pearson, Karl (1905), "Das Fehlergesetz und seine Verallgemeinerungen durch Fechner und Pearson. A Rejoinder" [The Error Law and its Generalizations by Fechner and Pearson. A Rejoinder], Biometrika, 4 (1–2): 169–212, doi:10.1093/biomet/4.1-2.169, JSTOR 2331536
  2. Westfall, Peter H. (2014), "Kurtosis as Peakedness, 1905–2014. R.I.P.", The American Statistician, 68 (3): 191–195, doi:10.1080/00031305.2014.917055, PMC 4321753, PMID 25678714
  3. Joanes, Derrick N.; Gill, Christine A. (1998), "Comparing measures of sample skewness and kurtosis", Journal of the Royal Statistical Society, Series D, 47 (1): 183–189, doi:10.1111/1467-9884.00122, JSTOR 2988433
  4. Pearson, Karl (1916), "Mathematical Contributions to the Theory of Evolution. — XIX. Second Supplement to a Memoir on Skew Variation", Philosophical Transactions of the Royal Society of London A, 216 (546): 429–457, Bibcode:1916RSPTA.216..429P, doi:10.1098/rsta.1916.0009, JSTOR 91092
  5. Balanda, Kevin P.; MacGillivray, Helen L. (1988), "Kurtosis: A Critical Review", The American Statistician, 42 (2): 111–119, doi:10.2307/2684482, JSTOR 2684482
  6. Darlington, Richard B. (1970), "Is Kurtosis Really 'Peakedness'?", The American Statistician, 24 (2): 19–22, doi:10.1080/00031305.1970.10478885, JSTOR 2681925
  7. Moors, J. J. A. (1986), "The meaning of kurtosis: Darlington reexamined", The American Statistician, 40 (4): 283–284, doi:10.1080/00031305.1986.10475415, JSTOR 2684603
  8. "Lepto-" (dictionary entry).
  9. Benveniste, Albert; Goursat, Maurice; Ruget, Gabriel (1980), "Robust identification of a nonminimum phase system: Blind adjustment of a linear equalizer in data communications", IEEE Transactions on Automatic Control, 25 (3): 385–399, doi:10.1109/tac.1980.1102343
  10. "Platy-" (dictionary entry), http://www.yourdictionary.com/platy-prefix
  11. Kahane, Jean-Pierre (1960), "Propriétés locales des fonctions à séries de Fourier aléatoires" [Local properties of functions in terms of random Fourier series], Studia Mathematica (in French), 19 (1): 1–25, doi:10.4064/sm-19-1-1-25
  12. Buldygin, Valerii V.; Kozachenko, Yuriy V. (1980), "Sub-Gaussian random variables", Ukrainian Mathematical Journal, 32 (6): 483–489, doi:10.1007/BF01087176, S2CID 121640142
  13. Doane, David P.; Seward, Lori E. (2011), "Measuring Skewness: A Forgotten Statistic?", Journal of Statistics Education, 19 (2)
  14. Sharma, Rajesh; Bhandari, Rajeev K. (2015), "Skewness, kurtosis and Newton's inequality", Rocky Mountain Journal of Mathematics, 45 (5): 1639–1643, doi:10.1216/RMJ-2015-45-5-1639, S2CID 88513237
  15. Fisher, Ronald A. (1930), "The Moments of the Distribution for Normal Samples of Measures of Departure from Normality", Proceedings of the Royal Society A, 130 (812): 16–28, Bibcode:1930RSPSA.130...16F, doi:10.1098/rspa.1930.0185, JSTOR 95586
  16. Kendall, Maurice G.; Stuart, Alan (1969), The Advanced Theory of Statistics, Volume 1: Distribution Theory (3rd ed.), London, UK: Charles Griffin & Company Limited, ISBN 0-85264-141-9
  17. Sandborn, Virgil A. (1959), "Measurements of Intermittency of Turbulent Motion in a Boundary Layer", Journal of Fluid Mechanics, 6 (2): 221–240, Bibcode:1959JFM.....6..221S, doi:10.1017/S0022112059000581
  18. Jensen, J.; Helpern, J.; Ramani, A.; Lu, H.; Kaczynski, K. (2005), "Diffusional kurtosis imaging: The quantification of non-Gaussian water diffusion by means of magnetic resonance imaging", Magnetic Resonance in Medicine, 53 (6): 1432–1440, doi:10.1002/mrm.20508, PMID 15906300, S2CID 11865594
  19. He, S.; Zhang, J.; Zhang, S. (2010), "Bounding probability of small deviation: A fourth moment approach", Mathematics of Operations Research, 35 (1): 208–232, doi:10.1287/moor.1090.0438, S2CID 11298475
  20. Pan, Xunyu; Zhang, Xing; Lyu, Siwei (2012), "Exposing Image Splicing with Inconsistent Local Noise Variances", 2012 IEEE International Conference on Computational Photography (ICCP), Seattle, WA, USA: IEEE, doi:10.1109/ICCPhot.2012.6215223, S2CID 14386924
  21. Hosking, Jonathan R. M. (1992), "Moments or L moments? An example comparing two measures of distributional shape", The American Statistician, 46 (3): 186–189, doi:10.1080/00031305.1992.10475880, JSTOR 2685210
  22. Hosking, Jonathan R. M. (2006), "On the characterization of distributions by their L-moments", Journal of Statistical Planning and Inference, 136 (1): 193–198, doi:10.1016/j.jspi.2004.06.004
