In statistics, a multimodal distribution is a probability distribution with more than one mode (i.e., more than one local peak of the distribution). These appear as distinct peaks (local maxima) in the probability density function, as shown in Figures 1 and 2. Categorical, continuous, and discrete data can all form multimodal distributions. Among univariate analyses, multimodal distributions are commonly bimodal.[citation needed]
When the two modes are unequal, the larger mode is known as the major mode and the other as the minor mode. The least frequent value between the modes is known as the antimode. The difference between the major and minor modes is known as the amplitude. In time series the major mode is called the acrophase and the antimode the batiphase.[citation needed]
Galtung introduced a classification system (AJUS) for distributions: [1]
This classification has since been modified slightly:
Under this classification bimodal distributions are classified as type S or U.
Bimodal distributions occur both in mathematics and in the natural sciences.
Important bimodal distributions include the arcsine distribution and the beta distribution (if and only if both parameters a and b are less than 1). Others include the U-quadratic distribution.
The ratio of two normal distributions is also bimodally distributed. Let

R = (a + x) / (b + y)

where a and b are constants and x and y are distributed as normal variables with a mean of 0 and a standard deviation of 1. R has a known density that can be expressed as a confluent hypergeometric function. [2]
The distribution of the reciprocal of a t-distributed random variable is bimodal when the degrees of freedom are more than one. Similarly, the reciprocal of a normally distributed variable is also bimodally distributed.
A t statistic generated from a data set drawn from a Cauchy distribution is bimodal. [3]
Examples of variables with bimodal distributions include the time between eruptions of certain geysers, the color of galaxies, the size of worker weaver ants, the age of incidence of Hodgkin's lymphoma, the speed of inactivation of the drug isoniazid in US adults, the absolute magnitude of novae, and the circadian activity patterns of those crepuscular animals that are active both in morning and evening twilight. In fishery science multimodal length distributions reflect the different year classes and can thus be used for age-distribution and growth estimates of the fish population. [4] Sediments are usually distributed in a bimodal fashion. When sampling mining galleries crossing both the host rock and the mineralized veins, the distribution of geochemical variables would be bimodal. Bimodal distributions are also seen in traffic analysis, where traffic peaks during the AM rush hour and then again in the PM rush hour. This phenomenon is also seen in daily water distribution, as water demand, in the form of showers, cooking, and toilet use, generally peaks in the morning and evening periods.
In econometric models, the parameters may be bimodally distributed. [5]
A bimodal distribution commonly arises as a mixture of two different unimodal distributions (i.e. distributions having only one mode). In other words, the bimodally distributed random variable X is defined as X = Y with probability α or X = Z with probability (1 − α), where Y and Z are unimodal random variables and 0 < α < 1 is a mixture coefficient.
Mixtures with two distinct components need not be bimodal and two component mixtures of unimodal component densities can have more than two modes. There is no immediate connection between the number of components in a mixture and the number of modes of the resulting density.
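The two-component mixture described above is easy to simulate. The sketch below draws from a mixture using NumPy; the component distributions N(−2, 1) and N(3, 1) and the coefficient α = 0.4 are illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) parameters: Y ~ N(-2, 1), Z ~ N(3, 1), alpha = 0.4.
alpha, n = 0.4, 100_000
pick_y = rng.random(n) < alpha      # choose component Y with probability alpha
y = rng.normal(-2.0, 1.0, n)        # samples from the first unimodal component
z = rng.normal(3.0, 1.0, n)         # samples from the second unimodal component
x = np.where(pick_y, y, z)          # X = Y with probability alpha, else X = Z

# The mixture mean is alpha*mu_Y + (1 - alpha)*mu_Z = 0.4*(-2) + 0.6*3 = 1.0
print(x.mean())                     # close to 1.0
```

A histogram of x would show two clear peaks here because the component means are far apart relative to the component standard deviations; as noted below, that separation is essential, not automatic.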
Bimodal distributions, despite their frequent occurrence in data sets, have only rarely been studied.[citation needed] This may be because of the difficulties in estimating their parameters with either frequentist or Bayesian methods. Among those that have been studied are
Bimodality also naturally arises in the cusp catastrophe distribution.
In biology five factors are known to contribute to bimodal distributions of population sizes:[citation needed]
The bimodal distribution of sizes of weaver ant workers arises due to the existence of two distinct classes of workers, namely major workers and minor workers. [10]
The distribution of fitness effects of mutations for both whole genomes [11] [12] and individual genes [13] is also frequently found to be bimodal, with most mutations being either neutral or lethal and relatively few having an intermediate effect.
A mixture of two unimodal distributions with differing means is not necessarily bimodal. The combined distribution of heights of men and women is sometimes used as an example of a bimodal distribution, but in fact the difference in mean heights of men and women is too small relative to their standard deviations to produce bimodality when the two distribution curves are combined. [14]
Bimodal distributions have the peculiar property that – unlike the unimodal distributions – the mean may be a more robust sample estimator than the median. [15] This is clearly the case when the distribution is U-shaped like the arcsine distribution. It may not be true when the distribution has one or more long tails.
Let

f(x) = p g1(x) + (1 − p) g2(x)

where gi is a probability distribution and p is the mixing parameter.
The moments of f(x) are [16]

μ = pμ1 + (1 − p)μ2
ν2 = p[σ1² + δ1²] + (1 − p)[σ2² + δ2²]
ν3 = p[S1σ1³ + 3δ1σ1² + δ1³] + (1 − p)[S2σ2³ + 3δ2σ2² + δ2³]
ν4 = p[K1σ1⁴ + 4S1δ1σ1³ + 6δ1²σ1² + δ1⁴] + (1 − p)[K2σ2⁴ + 4S2δ2σ2³ + 6δ2²σ2² + δ2⁴]

where

δi = μi − μ

μi and σi are the mean and standard deviation of the ith distribution, and Si and Ki are the skewness and kurtosis of the ith distribution.
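These moment identities can be checked numerically. The sketch below computes the mixture mean and second central moment from the standard closed forms (μ = pμ1 + (1 − p)μ2 and ν2 = p[σ1² + δ1²] + (1 − p)[σ2² + δ2²], with δi = μi − μ) and compares them against a Monte Carlo sample; the component parameters are illustrative assumptions.

```python
import numpy as np

# Hypothetical components: g1 = N(0, 1), g2 = N(4, 2), mixing parameter p = 0.3.
p = 0.3
mu1, s1 = 0.0, 1.0
mu2, s2 = 4.0, 2.0

mu = p*mu1 + (1 - p)*mu2                            # mixture mean
d1, d2 = mu1 - mu, mu2 - mu                         # delta_i = mu_i - mu
nu2 = p*(s1**2 + d1**2) + (1 - p)*(s2**2 + d2**2)   # second central moment

# Monte Carlo check of the closed forms
rng = np.random.default_rng(1)
n = 200_000
comp1 = rng.random(n) < p
x = np.where(comp1, rng.normal(mu1, s1, n), rng.normal(mu2, s2, n))
print(mu, nu2)             # closed forms: 2.8 and 6.46
print(x.mean(), x.var())   # simulation agrees to roughly two decimals
```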
It is not uncommon to encounter situations where an investigator believes that the data come from a mixture of two normal distributions. This mixture has therefore been studied in some detail. [17]
A mixture of two normal distributions has five parameters to estimate: the two means, the two variances and the mixing parameter. A mixture of two normal distributions with equal standard deviations is bimodal only if their means differ by at least twice the common standard deviation. [14] Estimation of the parameters is simplified if the variances can be assumed to be equal (the homoscedastic case).
If the means of the two normal distributions are equal, then the combined distribution is unimodal. Conditions for unimodality of the combined distribution were derived by Eisenberger. [18] Necessary and sufficient conditions for a mixture of normal distributions to be bimodal have been identified by Ray and Lindsay. [19]
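The separation condition above can be illustrated numerically by counting the local maxima of an equal-weight, equal-variance mixture density on a fine grid. This is only a grid-based sketch, not one of the analytic conditions cited in the text; the separations 1σ and 4σ are illustrative choices on either side of the 2σ threshold.

```python
import numpy as np

def n_modes(delta, sigma=1.0):
    """Count local maxima of an equal-weight mixture of N(0, sigma) and
    N(delta, sigma), evaluated on a dense grid (normalising constant omitted,
    as it does not affect mode locations)."""
    x = np.linspace(-6*sigma, delta + 6*sigma, 20001)
    pdf = np.exp(-0.5*(x/sigma)**2) + np.exp(-0.5*((x - delta)/sigma)**2)
    interior = (pdf[1:-1] > pdf[:-2]) & (pdf[1:-1] > pdf[2:])
    return int(interior.sum())

print(n_modes(1.0))  # means 1 sd apart: below the 2 sd threshold -> 1 mode
print(n_modes(4.0))  # means 4 sd apart: well separated -> 2 modes
```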
A mixture of two approximately equal mass normal distributions has a negative kurtosis since the two modes on either side of the center of mass effectively reduce the tails of the distribution.
A mixture of two normal distributions with highly unequal mass has a positive kurtosis since the smaller distribution lengthens the tail of the more dominant normal distribution.
Mixtures of other distributions require additional parameters to be estimated.
Bimodal distributions are a commonly used example of how summary statistics such as the mean, median, and standard deviation can be deceptive when used on an arbitrary distribution. For example, in the distribution in Figure 1, the mean and median would be about zero, even though zero is not a typical value. The standard deviation is also larger than the standard deviation of either of the two component normal distributions.
Although several have been suggested, there is at present no generally agreed summary statistic (or set of statistics) to quantify the parameters of a general bimodal distribution. For a mixture of two normal distributions the means and standard deviations along with the mixing parameter (the weight for the combination) are usually used – a total of five parameters.
A statistic that may be useful is Ashman's D: [22]

D = √2 |μ1 − μ2| / √(σ1² + σ2²)

where μ1, μ2 are the means and σ1, σ2 are the standard deviations.
For a mixture of two normal distributions D > 2 is required for a clean separation of the distributions.
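Ashman's D, in its usual form √2 |μ1 − μ2| / √(σ1² + σ2²), is straightforward to compute. The sketch below uses two unit-variance components 4 standard deviations apart as an illustrative example, which gives D = 4, comfortably above the clean-separation criterion D > 2.

```python
import math

def ashman_d(mu1, sigma1, mu2, sigma2):
    """Ashman's D: sqrt(2) * |mu1 - mu2| / sqrt(sigma1^2 + sigma2^2)."""
    return math.sqrt(2.0) * abs(mu1 - mu2) / math.sqrt(sigma1**2 + sigma2**2)

# Two unit-variance normals with means 4 apart: D = sqrt(2)*4/sqrt(2) = 4 > 2,
# so the components count as cleanly separated.
print(ashman_d(0.0, 1.0, 4.0, 1.0))  # -> 4.0
```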
This measure is a weighted average of the degree of agreement in the frequency distribution. [23] A ranges from −1 (perfect bimodality) to +1 (perfect unimodality). It is defined as

A = U (1 − (S − 1) / (K − 1))

where U is the unimodality of the distribution, S the number of categories that have nonzero frequencies and K the total number of categories.
The value of U is 1 if the distribution has any of the three following characteristics:
With distributions other than these the data must be divided into 'layers'. Within a layer the responses are either equal or zero. The categories do not have to be contiguous. A value of A for each layer (Ai) is calculated and a weighted average for the distribution is determined. The weights (wi) for each layer are the number of responses in that layer. In symbols:

A = Σ wi Ai / Σ wi
A uniform distribution has A = 0; when all the responses fall into one category, A = +1.
One theoretical problem with this index is that it assumes that the intervals are equally spaced. This may limit its applicability.
This index assumes that the distribution is a mixture of two normal distributions with means (μ1 and μ2) and standard deviations (σ1 and σ2): [24]

S = (μ1 − μ2) / (2(σ1 + σ2))
Sarle's bimodality coefficient b is [25]

β = (γ² + 1) / κ

where γ is the skewness and κ is the kurtosis. The kurtosis is here defined to be the standardised fourth moment around the mean. The value of b lies between 0 and 1. [26] The logic behind this coefficient is that a bimodal distribution with light tails will have very low kurtosis, an asymmetric character, or both – all of which increase this coefficient.
The formula for a finite sample is [27]

b = (g² + 1) / (k + 3(n − 1)² / ((n − 2)(n − 3)))

where n is the number of items in the sample, g is the sample skewness and k is the sample excess kurtosis.
The value of b for the uniform distribution is 5/9. This is also its value for the exponential distribution. Values greater than 5/9 may indicate a bimodal or multimodal distribution, though corresponding values can also result for heavily skewed unimodal distributions. [28] The maximum value (1.0) is reached only by a Bernoulli distribution with only two distinct values or the sum of two different Dirac delta functions (a bi-delta distribution).
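The population form of the coefficient, (skewness² + 1)/kurtosis, can be checked against the uniform distribution, whose skewness is 0 and whose kurtosis (standardised fourth moment) is 9/5, giving b = 5/9. The sketch below evaluates it on a dense, evenly spaced grid standing in for a continuous uniform variable.

```python
import numpy as np

def sarle_b(x):
    """Population version of Sarle's bimodality coefficient: (skew^2 + 1)/kurtosis,
    with kurtosis the standardised fourth moment about the mean (normal -> 3)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    m2 = np.mean(z**2)
    skew = np.mean(z**3) / m2**1.5
    kurt = np.mean(z**4) / m2**2
    return (skew**2 + 1.0) / kurt

# Dense grid on [0, 1] approximating a continuous uniform distribution:
# skewness 0, kurtosis 9/5, so b = 5/9.
grid = np.linspace(0.0, 1.0, 100_001)
print(sarle_b(grid))   # approximately 0.5556 (= 5/9)
```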
The distribution of this statistic is unknown. It is related to a statistic proposed earlier by Pearson – the difference between the kurtosis and the square of the skewness (vide infra).
This is defined as [24]

AB = (A1 − Aan) / A1

where A1 is the amplitude of the smaller peak and Aan is the amplitude of the antimode.
AB is always < 1. Larger values indicate more distinct peaks.
This is the ratio of the left and right peaks. [24] Mathematically

R = Ar / Al

where Al and Ar are the amplitudes of the left and right peaks respectively.
This parameter (B) is due to Wilcock. [29]

B = √(Al / Ar) Σ Pi

where Al and Ar are the amplitudes of the left and right peaks respectively and Pi is the logarithm taken to the base 2 of the proportion of the distribution in the ith interval. The maximal value of ΣPi is 1 but the value of B may be greater than this.
To use this index, the logs of the values are taken. The data are then divided into intervals of width Φ whose value is log 2. The widths of the peaks are taken to be four times 1/4Φ centered on their maximum values.
The bimodality index proposed by Wang et al. assumes that the distribution is a sum of two normal distributions with equal variances but differing means. [30] It is defined as follows:

BI = δ √(p(1 − p))

where

δ = |μ1 − μ2| / σ

μ1, μ2 are the means, σ is the common standard deviation and p is the mixing parameter.
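Assuming the usual form of this index, BI = δ √(p(1 − p)) with δ = |μ1 − μ2|/σ, a direct computation is trivial; the parameter values below are illustrative.

```python
import math

def wang_bi(mu1, mu2, sigma, p):
    """Bimodality index for a two-component equal-variance normal mixture,
    assuming the form BI = delta * sqrt(p*(1-p)), delta = |mu1 - mu2|/sigma."""
    delta = abs(mu1 - mu2) / sigma
    return delta * math.sqrt(p * (1.0 - p))

# Equal-weight mixture, means 3 common-sd apart: delta = 3, sqrt(0.25) = 0.5.
print(wang_bi(0.0, 3.0, 1.0, 0.5))  # -> 1.5
```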
A different bimodality index has been proposed by Sturrock. [31]
This index (B) is defined as
When m = 2 and γ is uniformly distributed, B is exponentially distributed. [32]
This statistic is a form of periodogram. It suffers from the usual problems of estimation and spectral leakage common to this form of statistic.
Another bimodality index has been proposed by de Michele and Accatino. [33] Their index (B) is
where μ is the arithmetic mean of the sample and
where mi is number of data points in the ith bin, xi is the center of the ith bin and L is the number of bins.
The authors suggested a cut-off value of 0.1 for B to distinguish between a bimodal (B > 0.1) and a unimodal (B < 0.1) distribution. No statistical justification was offered for this value.
A further index (B) has been proposed by Sambrook Smith et al. [34]

where p1 and p2 are the proportions contained in the primary (that with the greater amplitude) and secondary (that with the lesser amplitude) modes and φ1 and φ2 are the φ-sizes of the primary and secondary modes. The φ-size is defined as minus one times the logarithm of the data size taken to the base 2. This transformation is commonly used in the study of sediments.
The authors recommended a cut off value of 1.5 with B being greater than 1.5 for a bimodal distribution and less than 1.5 for a unimodal distribution. No statistical justification for this value was given.
Otsu's method for finding a threshold for separation between two modes relies on minimizing the quantity

(n1σ1² + n2σ2²) / (mσ²)

where ni is the number of data points in the ith subpopulation, σi² is the variance of the ith subpopulation, m is the total size of the sample and σ² is the sample variance. Some researchers (particularly in the field of digital image processing) have applied this quantity more broadly as an index for detecting bimodality, with a small value indicating a more bimodal distribution. [35]
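A brute-force version of this minimisation is easy to sketch for one-dimensional data: try every split point and keep the one minimising the weighted intra-class variance. This is an illustrative implementation, not the histogram-based form usually used in image processing, and the test data are an assumed two-normal mixture.

```python
import numpy as np

def otsu_threshold(x):
    """Exhaustively pick the split minimising the Otsu quantity
    (n1*var1 + n2*var2) / (m*var) over all splits of the sorted data."""
    x = np.sort(np.asarray(x, dtype=float))
    m, total_var = len(x), x.var()
    best, best_t = np.inf, x[0]
    for i in range(1, m):                       # split between x[i-1] and x[i]
        left, right = x[:i], x[i:]
        score = (len(left)*left.var() + len(right)*right.var()) / (m*total_var)
        if score < best:
            best, best_t = score, (x[i-1] + x[i]) / 2.0
    return best_t

# Illustrative bimodal sample: the threshold lands between the two modes.
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0, 1, 500), rng.normal(6, 1, 500)])
print(otsu_threshold(data))   # near the antimode around 3
```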
A number of tests are available to determine if a data set is distributed in a bimodal (or multimodal) fashion.
In the study of sediments, particle size is frequently bimodal. Empirically, it has been found useful to plot the frequency against the log(size) of the particles. [36] [37] This usually gives a clear separation of the particles into a bimodal distribution. In geological applications the logarithm is normally taken to the base 2. The log transformed values are referred to as phi (Φ) units. This system is known as the Krumbein (or phi) scale.
An alternative method is to plot the log of the particle size against the cumulative frequency. This graph will usually consist of two reasonably straight lines with a connecting line corresponding to the antimode.
Approximate values for several statistics can be derived from the graphic plots. [36]

Mean = (φ16 + φ50 + φ84) / 3
StdDev = (φ84 − φ16)/4 + (φ95 − φ5)/6.6
Skew = (φ16 + φ84 − 2φ50) / (2(φ84 − φ16)) + (φ5 + φ95 − 2φ50) / (2(φ95 − φ5))
Kurt = (φ95 − φ5) / (2.44(φ75 − φ25))

where Mean is the mean, StdDev is the standard deviation, Skew is the skewness, Kurt is the kurtosis and φx is the value of the variate φ at the xth percentage of the distribution.
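Assuming the standard Folk and Ward (1957) graphical measures (e.g. Mean = (φ16 + φ50 + φ84)/3 and Kurt = (φ95 − φ5)/(2.44(φ75 − φ25))), these statistics can be computed from a handful of percentiles. A sanity check: for an exactly normal φ-distribution the graphical skewness is 0 and the graphical kurtosis is very close to 1, since the 2.44 constant was chosen to normalise the normal case.

```python
from statistics import NormalDist

def fw_stats(phi):
    """Folk & Ward graphical statistics; phi is a callable returning the
    phi value at a given percentage x (0-100) of the distribution."""
    p = {x: phi(x) for x in (5, 16, 25, 50, 75, 84, 95)}
    mean = (p[16] + p[50] + p[84]) / 3
    std = (p[84] - p[16])/4 + (p[95] - p[5])/6.6
    skew = ((p[16] + p[84] - 2*p[50]) / (2*(p[84] - p[16]))
            + (p[5] + p[95] - 2*p[50]) / (2*(p[95] - p[5])))
    kurt = (p[95] - p[5]) / (2.44*(p[75] - p[25]))
    return mean, std, skew, kurt

# Exact percentiles of a hypothetical normal phi-distribution N(2, 0.5).
nd = NormalDist(mu=2.0, sigma=0.5)
mean, std, skew, kurt = fw_stats(lambda x: nd.inv_cdf(x/100))
print(mean, std, skew, kurt)   # mean 2, std near 0.5, skew 0, kurt near 1
```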
Pearson in 1894 was the first to devise a procedure to test whether a distribution could be resolved into two normal distributions. [38] This method required the solution of a ninth-order polynomial. In a subsequent paper Pearson reported that for any distribution skewness² + 1 < kurtosis. [26] Later Pearson showed that [39]

b2 − b1 ≥ 1

where b2 is the kurtosis and b1 is the square of the skewness. Equality holds only for the two-point Bernoulli distribution or the sum of two different Dirac delta functions. These are the most extreme cases of bimodality possible. The kurtosis in both these cases is 1. Since they are both symmetrical their skewness is 0 and the difference is 1.
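Pearson's inequality (kurtosis at least one more than squared skewness) holds for any data set when the population moment definitions are used, so it can be verified empirically. The sample families below are illustrative choices.

```python
import numpy as np

def skew_kurt_sq(x):
    """Return (b1, b2): squared skewness and kurtosis, using population
    (biased) moment definitions, for which b2 - b1 >= 1 always holds."""
    z = x - x.mean()
    m2 = np.mean(z**2)
    b1 = (np.mean(z**3) / m2**1.5)**2    # squared skewness
    b2 = np.mean(z**4) / m2**2           # kurtosis (normal -> 3)
    return b1, b2

rng = np.random.default_rng(3)
results = []
for sample in (rng.normal(size=1000),
               rng.exponential(size=1000),
               np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])):
    b1, b2 = skew_kurt_sq(sample)
    results.append(b2 - b1)

print(all(d >= 1.0 for d in results))   # the inequality holds in every case
```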
Baker proposed a transformation to convert a bimodal to a unimodal distribution. [40]
Several tests of unimodality versus bimodality have been proposed: Haldane suggested one based on second central differences. [41] Larkin later introduced a test based on the F test; [42] Benett created one based on Fisher's G test. [43] Tokeshi has proposed a fourth test. [44] [45] A test based on a likelihood ratio has been proposed by Holzmann and Vollmer. [20]
A method based on the score and Wald tests has been proposed. [46] This method can distinguish between unimodal and bimodal distributions when the underlying distributions are known.
Statistical tests for the antimode are known. [47]
Otsu's method is commonly employed in computer graphics to determine the optimal separation between two distributions.
To test if a distribution is other than unimodal, several additional tests have been devised: the bandwidth test, [48] the dip test, [49] the excess mass test, [50] the MAP test, [51] the mode existence test, [52] the runt test, [53] [54] the span test, [55] and the saddle test.
An implementation of the dip test is available for the R programming language. [56] The p-values for the dip statistic values range between 0 and 1. P-values less than 0.05 indicate significant multimodality and p-values greater than 0.05 but less than 0.10 suggest multimodality with marginal significance. [57]
Silverman introduced a bootstrap method for the number of modes. [48] The test uses a fixed bandwidth, which reduces the power of the test and its interpretability. Under-smoothed densities may have an excessive number of modes whose count during bootstrapping is unstable.
Bajgier and Aggarwal have proposed a test based on the kurtosis of the distribution. [58]
Additional tests are available for a number of special cases:
A study of a mixture density of two normal distributions found that separation into the two normal distributions was difficult unless the means were separated by 4–6 standard deviations. [59]
In astronomy the Kernel Mean Matching algorithm is used to decide if a data set belongs to a single normal distribution or to a mixture of two normal distributions.
This distribution is bimodal for certain values of its parameters. A test for these values has been described. [60]
Assuming that the distribution is known to be bimodal or has been shown to be bimodal by one or more of the tests above, it is frequently desirable to fit a curve to the data. This may be difficult.
Bayesian methods may be useful in difficult cases.
A package for R is available for testing for bimodality. [61] This package assumes that the data are distributed as a sum of two normal distributions. If this assumption is not correct the results may not be reliable. It also includes functions for fitting a sum of two normal distributions to the data.
Assuming that the distribution is a mixture of two normal distributions, the expectation-maximization algorithm may be used to determine the parameters. Several programmes are available for this, including Cluster [62] and the R package nor1mix. [63]
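The expectation-maximization iteration for a two-component normal mixture can be sketched in a few lines of NumPy. This is a minimal illustration of the algorithm, not a substitute for the packages mentioned; the initialisation from data quartiles and the test mixture are assumptions of the sketch.

```python
import numpy as np

def em_two_normals(x, iters=200):
    """Basic EM for a univariate mixture of two normals, estimating the five
    parameters: two means, two standard deviations and the mixing weight."""
    x = np.asarray(x, dtype=float)
    # Crude initialisation: seed the means at the data quartiles.
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    sd = np.array([x.std(), x.std()])
    p = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        d1 = p * np.exp(-0.5*((x - mu[0])/sd[0])**2) / sd[0]
        d2 = (1 - p) * np.exp(-0.5*((x - mu[1])/sd[1])**2) / sd[1]
        r = d1 / (d1 + d2)
        # M-step: responsibility-weighted parameter updates.
        p = r.mean()
        mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
        sd = np.array([np.sqrt(np.average((x - mu[0])**2, weights=r)),
                       np.sqrt(np.average((x - mu[1])**2, weights=1 - r))])
    return p, mu, sd

# Illustrative data: 70% from N(-2, 1), 30% from N(3, 1).
rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(-2, 1, 700), rng.normal(3, 1, 300)])
p, mu, sd = em_two_normals(data)
print(p, mu, sd)   # p near 0.7, means near -2 and 3
```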
The mixtools package available for R can test for and estimate the parameters of a number of different distributions. [64] A package for a mixture of two right-tailed gamma distributions is available. [65]
Several other packages for R are available to fit mixture models; these include flexmix, [66] mcclust, [67] agrmt, [68] and mixdist. [69]
The statistical programming language SAS can also fit a variety of mixed distributions with the PROC FREQ procedure.
In Python, the package scikit-learn contains a tool for mixture modeling. [70]
The CumFreqA [71] program for the fitting of composite probability distributions to a data set (X) can divide the set into two parts with a different distribution. The figure shows an example of a double generalized mirrored Gumbel distribution as in distribution fitting with cumulative distribution function (CDF) equations:
X < 8.10 : CDF = 1 − exp[−exp{−(0.092X^0.01 + 935)}]
X > 8.10 : CDF = 1 − exp[−exp{−(−0.0039X^2.79 + 1.05)}]
In probability theory and statistics, kurtosis is a measure of the "tailedness" of the probability distribution of a real-valued random variable. Like skewness, kurtosis describes a particular aspect of a probability distribution. There are different ways to quantify kurtosis for a theoretical distribution, and there are corresponding ways of estimating it using a sample from a population. Different measures of kurtosis may have different interpretations.
In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is
In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. The skewness value can be positive, zero, negative, or undefined.
In probability theory, a log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable X is log-normally distributed, then Y = ln(X) has a normal distribution. Equivalently, if Y has a normal distribution, then the exponential function of Y, X = exp(Y), has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values. It is a convenient and useful model for measurements in exact and engineering sciences, as well as medicine, economics and other topics (e.g., energies, concentrations, lengths, prices of financial instruments, and other metrics).
In probability and statistics, a mixture distribution is the probability distribution of a random variable that is derived from a collection of other random variables as follows: first, a random variable is selected by chance from the collection according to given probabilities of selection, and then the value of the selected random variable is realized. The underlying random variables may be random real numbers, or they may be random vectors, in which case the mixture distribution is a multivariate distribution.
In statistics, the mode is the value that appears most often in a set of data values. If X is a discrete random variable, the mode is the value x at which the probability mass function takes its maximum value. In other words, it is the value that is most likely to be sampled.
In probability theory and statistics, the generalized extreme value (GEV) distribution is a family of continuous probability distributions developed within extreme value theory to combine the Gumbel, Fréchet and Weibull families also known as type I, II and III extreme value distributions. By the extreme value theorem the GEV distribution is the only possible limit distribution of properly normalized maxima of a sequence of independent and identically distributed random variables. Note that a limit distribution needs to exist, which requires regularity conditions on the tail of the distribution. Despite this, the GEV distribution is often used as an approximation to model the maxima of long (finite) sequences of random variables.
In probability and statistics, a circular distribution or polar distribution is a probability distribution of a random variable whose values are angles, usually taken to be in the range [0, 2π). A circular distribution is often a continuous probability distribution, and hence has a probability density, but such distributions can also be discrete, in which case they are called circular lattice distributions. Circular distributions can be used even when the variables concerned are not explicitly angles: the main consideration is that there is not usually any real distinction between events occurring at the opposite ends of the range, and the division of the range could notionally be made at any point.
In mathematics, unimodality means possessing a unique mode. More generally, unimodality means there is only a single highest value, somehow defined, of some mathematical object.
The folded normal distribution is a probability distribution related to the normal distribution. Given a normally distributed random variable X with mean μ and variance σ2, the random variable Y = |X| has a folded normal distribution. Such a case may be encountered if only the magnitude of some variable is recorded, but not its sign. The distribution is called "folded" because probability mass to the left of x = 0 is folded over by taking the absolute value. In the physics of heat conduction, the folded normal distribution is a fundamental solution of the heat equation on the half space; it corresponds to having a perfect insulator on a hyperplane through the origin.
In statistics, the 68–95–99.7 rule, also known as the empirical rule, is a shorthand used to remember the percentage of values that lie within an interval estimate in a normal distribution: 68%, 95%, and 99.7% of the values lie within one, two, and three standard deviations of the mean, respectively.
In probability and statistics, the truncated normal distribution is the probability distribution derived from a multivariate normally distributed random variable conditioned to taking values in a box, i.e.: the values of each component of the random variable are conditioned to being bounded from either below or above. The truncated normal distribution has wide applications in statistics and econometrics.
The Birnbaum–Saunders distribution, also known as the fatigue life distribution, is a probability distribution used extensively in reliability applications to model failure times. There are several alternative formulations of this distribution in the literature. It is named after Z. W. Birnbaum and S. C. Saunders.
In probability theory and statistics, the skew normal distribution is a continuous probability distribution that generalises the normal distribution to allow for non-zero skewness.
In statistics, the Jarque–Bera test is a goodness-of-fit test of whether sample data have the skewness and kurtosis matching a normal distribution. The test is named after Carlos Jarque and Anil K. Bera. The test statistic is always nonnegative. If it is far from zero, it signals the data do not have a normal distribution.
In probability theory, the rectified Gaussian distribution is a modification of the Gaussian distribution when its negative elements are reset to 0. It is essentially a mixture of a discrete distribution and a continuous distribution as a result of censoring.
In probability theory, an exponentially modified Gaussian distribution describes the sum of independent normal and exponential random variables. An exGaussian random variable Z may be expressed as Z = X + Y, where X and Y are independent, X is Gaussian with mean μ and variance σ2, and Y is exponential of rate λ. It has a characteristic positive skew from the exponential component.
In statistics and probability theory, the nonparametric skew is a statistic occasionally used with random variables that take real values. It is a measure of the skewness of a random variable's distribution—that is, the distribution's tendency to "lean" to one side or the other of the mean. Its calculation does not require any knowledge of the form of the underlying distribution—hence the name nonparametric. It has some desirable properties: it is zero for any symmetric distribution; it is unaffected by a scale shift; and it reveals either left- or right-skewness equally well. In some statistical samples it has been shown to be less powerful than the usual measures of skewness in detecting departures of the population from normality.
In probability and statistics, the skewed generalized "t" distribution is a family of continuous probability distributions. The distribution was first introduced by Panayiotis Theodossiou in 1998. The distribution has since been used in different applications. There are different parameterizations for the skewed generalized t distribution.
In probability theory and statistics, the asymmetric Laplace distribution (ALD) is a continuous probability distribution which is a generalization of the Laplace distribution. Just as the Laplace distribution consists of two exponential distributions of equal scale back-to-back about x = m, the asymmetric Laplace consists of two exponential distributions of unequal scale back to back about x = m, adjusted to assure continuity and normalization. The difference of two variates exponentially distributed with different means and rate parameters will be distributed according to the ALD. When the two rate parameters are equal, the difference will be distributed according to the Laplace distribution.