Qualitative variation

An index of qualitative variation (IQV) is a measure of statistical dispersion in nominal distributions. Examples include the variation ratio or the information entropy.

Properties

There are several types of indices used for the analysis of nominal data. Several are standard statistics used elsewhere: range, standard deviation, variance, mean deviation, coefficient of variation, median absolute deviation, interquartile range and quartile deviation.

In addition to these, several statistics have been developed with nominal data in mind. A number have been summarized and devised by Wilcox ( Wilcox 1967 ; Wilcox 1973 ), who requires the following standardization properties to be satisfied:

In particular, the value of these standardized indices does not depend on the number of categories or number of samples.

For any index, the closer to uniform the distribution, the larger the variance, and the larger the differences in frequencies across categories, the smaller the variance.

Indices of qualitative variation are then analogous to information entropy, which is minimized when all cases belong to a single category and maximized in a uniform distribution. Indeed, information entropy can be used as an index of qualitative variation.

One characterization of a particular index of qualitative variation (IQV) is as a ratio of observed differences to maximum differences.

Wilcox's indexes

Wilcox gives a number of formulae for various indices of QV ( Wilcox 1973 ). The first, which he designates DM for "Deviation from the Mode", is a standardized form of the variation ratio and is analogous to variance as deviation from the mean.

ModVR

The formula for the variation around the mode (ModVR) is derived as follows:

where fm is the modal frequency, K is the number of categories and fi is the frequency of the ith group.

This can be simplified to

where N is the total size of the sample.

Freeman's index (or variation ratio) is [2]

This is related to M as follows:

The ModVR is defined as

where v is Freeman's index.

Low values of ModVR correspond to small amounts of variation and high values to larger amounts of variation.

When K is large, ModVR is approximately equal to Freeman's index v.
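These relations are easy to check numerically. A minimal Python sketch, assuming the simplified forms v = 1 − fm/N and ModVR = K(N − fm)/(N(K − 1)) implied by the definitions above:

```python
def variation_ratio(freqs):
    """Freeman's index v = 1 - f_m / N, where f_m is the modal frequency."""
    N = sum(freqs)
    return 1 - max(freqs) / N

def mod_vr(freqs):
    """ModVR = K (N - f_m) / (N (K - 1)): 0 when all cases fall in one
    category, 1 for a uniform distribution over the K categories."""
    K = len(freqs)
    N = sum(freqs)
    return K * (N - max(freqs)) / (N * (K - 1))
```

For K = 3, ModVR is 1.5 times v, and the ratio K/(K − 1) tends to 1 as K grows, consistent with ModVR approaching v for large K.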

RanVR

This is based on the range around the mode. It is defined to be

where fm is the modal frequency and fl is the lowest frequency.

AvDev

This is an analog of the mean deviation. It is defined as the arithmetic mean of the absolute differences of each value from the mean.

MNDif

This is an analog of the mean difference - the average of the differences of all the possible pairs of variate values, taken regardless of sign. The mean difference differs from the mean and standard deviation because it is dependent on the spread of the variate values among themselves and not on the deviations from some central value. [3]

where fi and fj are the ith and jth frequencies respectively.

The MNDif is the Gini coefficient applied to qualitative data.

VarNC

This is an analog of the variance.

It is the same index as Mueller and Schussler's Index of Qualitative Variation [4] and Gibbs' M2 index.

It is distributed as a chi square variable with K − 1 degrees of freedom. [5]

StDev

Wilson has suggested two versions of this statistic.

The first is based on AvDev.

The second is based on MNDif

HRel

This index was originally developed by Claude Shannon for use in specifying the properties of communication channels.

where pi = fi / N.

This is equivalent to information entropy divided by its maximum possible value, log K, and is useful for comparing relative variation between frequency tables of different sizes.
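A sketch of HRel, assuming the standard normalization of the entropy by its maximum log2 K (base 2 is chosen here for illustration; any base gives the same ratio):

```python
from math import log2

def h_rel(freqs):
    """HRel: Shannon entropy of the category proportions divided by its
    maximum possible value log2(K), giving a value in [0, 1]."""
    N = sum(freqs)
    K = len(freqs)
    h = -sum((f / N) * log2(f / N) for f in freqs if f > 0)
    return h / log2(K)
```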

B index

Wilcox adapted a proposal of Kaiser [6] based on the geometric mean and created the B index. It is defined as

R packages

Several of these indices have been implemented in the R language. [7]

Gibbs & Poston Jr (1975) proposed six indexes. [8]

M1

The unstandardized index (M1) ( Gibbs & Poston Jr 1975 , p. 471) is

where K is the number of categories and pi is the proportion of observations that fall in a given category i.

M1 can be interpreted as one minus the likelihood that a random pair of samples will belong to the same category, [9] so this formula for IQV is a standardized likelihood of a random pair falling in the same category. This index has also been referred to as the index of differentiation, the index of sustenance differentiation and the geographical differentiation index, depending on the context in which it has been used.
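A sketch of M1, assuming the form 1 − Σpi² described above (one minus the probability that a random pair falls in the same category):

```python
def m1(freqs):
    """Unstandardized index M1 = 1 - sum(p_i^2): the probability that a
    random pair of observations falls in different categories."""
    N = sum(freqs)
    return 1 - sum((f / N) ** 2 for f in freqs)
```

Its maximum is (K − 1)/K, attained at the uniform distribution, which is what the K/(K − 1) standardization in M2 corrects for.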

M2

A second index, M2 [10] ( Gibbs & Poston Jr 1975 , p. 472), is:

where K is the number of categories and pi is the proportion of observations that fall in a given category i. The factor of K/(K − 1) is for standardization.

M1 and M2 can be interpreted in terms of variance of a multinomial distribution ( Swanson 1976 ) (there called an "expanded binomial model"). M1 is the variance of the multinomial distribution and M2 is the ratio of the variance of the multinomial distribution to the variance of a binomial distribution.
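A sketch of M2, assuming the standardized form (K/(K − 1))(1 − Σpi²); per the text this is the same quantity as VarNC and Gibbs' M2:

```python
def m2(freqs):
    """M2 = (K / (K - 1)) * (1 - sum(p_i^2)); the factor K/(K - 1)
    rescales M1 so a uniform distribution scores exactly 1."""
    K = len(freqs)
    N = sum(freqs)
    return (K / (K - 1)) * (1 - sum((f / N) ** 2 for f in freqs))
```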

M4

The M4 index is

where m is the mean.

M6

The formula for M6 is

where K is the number of categories, Xi is the number of data points in the ith category, N is the total number of data points, || is the absolute value (modulus) and

This formula can be simplified

where pi is the proportion of the sample in the ith category.

In practice M1 and M6 tend to be highly correlated which militates against their combined use.

The sum

has also found application. This is known as the Simpson index in ecology and as the Herfindahl index or the Herfindahl-Hirschman index (HHI) in economics. A variant of this is known as the Hunter–Gaston index in microbiology [11]

In linguistics and cryptanalysis this sum is known as the repeat rate. The index of coincidence (IC) is an unbiased estimator of this statistic [12]

where fi is the count of the ith grapheme in the text and n is the total number of graphemes in the text.
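The unbiased estimator can be computed directly from grapheme counts. A sketch, assuming the usual form IC = Σ fi(fi − 1) / (n(n − 1)):

```python
from collections import Counter

def index_of_coincidence(text):
    """Unbiased estimator of the repeat rate sum(p_i^2):
    IC = sum f_i (f_i - 1) / (n (n - 1)) over grapheme counts f_i."""
    counts = Counter(text)
    n = sum(counts.values())
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))
```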

M1

The M1 statistic defined above has been proposed several times in a number of different settings under a variety of names. These include Gini's index of mutability, [13] Simpson's measure of diversity, [14] Bachi's index of linguistic homogeneity, [15] Mueller and Schuessler's index of qualitative variation, [16] Gibbs and Martin's index of industry diversification, [17] Lieberson's index, [18] and Blau's index in sociology, psychology and management studies. [19] The formulations of all these indices are identical.

Simpson's D is defined as

where n is the total sample size and ni is the number of items in the ith category.

For large n we have
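A sketch of Simpson's D, assuming the finite-sample form 1 − Σ ni(ni − 1)/(n(n − 1)); for large n this approaches 1 − Σ pi², i.e. the M1 index:

```python
def simpsons_d(counts):
    """Simpson's D = 1 - sum n_i (n_i - 1) / (n (n - 1)): the chance
    that two items drawn without replacement differ in category."""
    n = sum(counts)
    return 1 - sum(c * (c - 1) for c in counts) / (n * (n - 1))
```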

Another statistic that has been proposed is the coefficient of unalikeability which ranges between 0 and 1. [20]

where n is the sample size and c(x,y) = 1 if x and y are unalike and 0 otherwise.

For large n we have

where K is the number of categories.

Another related statistic is the quadratic entropy

which is itself related to the Gini index.

M2

Greenberg's monolingual non weighted index of linguistic diversity [21] is the M2 statistic defined above.

M7

Another index – the M7 – was created based on the M4 index of Gibbs & Poston Jr (1975) [22]

where

and

where K is the number of categories, L is the number of subtypes, Oij and Eij are the number observed and expected respectively of subtype j in the ith category, ni is the number in the ith category and pj is the proportion of subtype j in the complete sample.

Note: This index was designed to measure women's participation in the work place: the two subtypes it was developed for were male and female.

Other single sample indices

These indices are summary statistics of the variation within the sample.

Berger–Parker index

The Berger–Parker index, named after Wolfgang H. Berger and Frances Lawrence Parker, equals the maximum value in the dataset, i.e. the proportional abundance of the most abundant type. [23] This corresponds to the weighted generalized mean of the values when q approaches infinity, and hence equals the inverse of true diversity of order infinity (1/D).

Brillouin index of diversity

This index is strictly applicable only to entire populations rather than to finite samples. It is defined as

where N is total number of individuals in the population, ni is the number of individuals in the ith category and N! is the factorial of N. Brillouin's index of evenness is defined as

where IB(max) is the maximum value of IB.
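A sketch of the Brillouin index, assuming the form IB = (ln N! − Σ ln ni!)/N, with log-factorials computed via lgamma to avoid overflow. IB(max) is taken here as the value for the most even split of N individuals over the same K categories, which is an assumption about how the maximum is realized:

```python
from math import lgamma, log

def log_factorial(n):
    """ln(n!) via the log-gamma function, safe for large n."""
    return lgamma(n + 1)

def brillouin(counts):
    """Brillouin index I_B = (ln N! - sum ln n_i!) / N."""
    N = sum(counts)
    return (log_factorial(N) - sum(log_factorial(c) for c in counts)) / N

def brillouin_evenness(counts):
    """I_B / I_B(max), with I_B(max) computed from the most even
    partition of the N individuals over the same K categories."""
    N, K = sum(counts), len(counts)
    q, r = divmod(N, K)
    most_even = [q + 1] * r + [q] * (K - r)
    return brillouin(counts) / brillouin(most_even)
```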

Hill's diversity numbers

Hill suggested a family of diversity numbers [24]

For given values of a, several of the other indices can be computed

Hill also suggested a family of evenness measures

where a > b.

Hill's E4 is

Hill's E5 is
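A sketch of the Hill family, assuming the standard parametrization Na = (Σ pi^a)^(1/(1−a)) with the a → 1 limit exp(H); the evenness measures are then ratios Na/Nb for a > b:

```python
from math import exp, log

def hill_number(props, a):
    """Hill's diversity number N_a = (sum p_i^a)^(1/(1-a)).
    a = 0 gives richness S; the a -> 1 limit gives exp(H) for the
    Shannon entropy H; a = 2 gives the inverse Simpson concentration."""
    if a == 1:
        return exp(-sum(p * log(p) for p in props if p > 0))
    return sum(p ** a for p in props if p > 0) ** (1 / (1 - a))
```

For a uniform distribution over K categories every Hill number equals K, so each evenness ratio Na/Nb equals 1.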

Margalef's index

where S is the number of data types in the sample and N is the total size of the sample. [25]

Menhinick's index

where S is the number of data types in the sample and N is the total size of the sample. [26]

In linguistics this index is identical to the Kuraszkiewicz index (Guiard index), where S is the number of distinct words (types) and N is the total number of words (tokens) in the text being examined. [27] [28] This index can be derived as a special case of the Generalised Torquist function. [29]
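Both richness indices are one-liners. A sketch, assuming Margalef's (S − 1)/ln N and Menhinick's S/√N:

```python
from math import log, sqrt

def margalef(S, N):
    """Margalef's index: (S - 1) / ln N."""
    return (S - 1) / log(N)

def menhinick(S, N):
    """Menhinick's index: S / sqrt(N)."""
    return S / sqrt(N)
```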

Q statistic

This is a statistic invented by Kempton and Taylor [30] and involves the quartiles of the sample. It is defined as

where R1 and R2 are the 25% and 75% quartiles respectively on the cumulative species curve, nj is the number of species in the jth category, nRi is the number of species in the class where Ri falls (i = 1 or 2).

Shannon–Wiener index

This is taken from information theory

where N is the total number in the sample and pi is the proportion in the ith category.

In ecology where this index is commonly used, H usually lies between 1.5 and 3.5 and only rarely exceeds 4.0.

An approximate formula for the standard deviation (SD) of H is

where pi is the proportion made up by the ith category and N is the total in the sample.

A more accurate approximate value of the variance of H(var(H)) is given by [31]

where N is the sample size and K is the number of categories.
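A sketch of H and its approximate standard deviation, assuming the leading term of the usual variance approximation, var(H) ≈ (Σ pi ln² pi − H²)/N:

```python
from math import log, sqrt

def shannon_h(props):
    """Shannon-Wiener index H = -sum p_i ln p_i."""
    return -sum(p * log(p) for p in props if p > 0)

def shannon_h_sd(props, N):
    """Approximate SD of H, using the first-order variance term
    (sum p_i ln^2 p_i - H^2) / N; clamped at 0 for safety."""
    h = shannon_h(props)
    var = (sum(p * log(p) ** 2 for p in props if p > 0) - h ** 2) / N
    return sqrt(max(var, 0.0))
```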

A related index is the Pielou J defined as

One difficulty with this index is that S is unknown for a finite sample. In practice S is usually set to the maximum present in any category in the sample.
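A sketch of Pielou's J = H/ln S, with S taken here as the number of nonempty categories, one practical choice among those mentioned above:

```python
from math import log

def pielou_j(props):
    """Pielou's J = H / ln S, where S is taken as the number of
    categories actually observed (an assumption; S is strictly
    unknown for a finite sample)."""
    observed = [p for p in props if p > 0]
    h = -sum(p * log(p) for p in observed)
    return h / log(len(observed))
```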

Rényi entropy

The Rényi entropy is a generalization of the Shannon entropy to other values of q than unity. It can be expressed:

which equals

This means that taking the logarithm of true diversity based on any value of q gives the Rényi entropy corresponding to the same value of q.

The exponential of the Rényi entropy is also known as the Hill number. [24]
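A sketch of the Rényi entropy, assuming Hq = ln(Σ pi^q)/(1 − q), with the Shannon entropy recovered as the q → 1 limit:

```python
from math import log

def renyi_entropy(props, q):
    """Rényi entropy H_q = ln(sum p_i^q) / (1 - q); q -> 1 recovers
    the Shannon entropy, and exp(H_q) is the Hill number of order q."""
    if q == 1:
        return -sum(p * log(p) for p in props if p > 0)
    return log(sum(p ** q for p in props if p > 0)) / (1 - q)
```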

McIntosh's D and E

McIntosh proposed a measure of diversity: [32]

where ni is the number in the ith category and K is the number of categories.

He also proposed several normalized versions of this index. First is D:

where N is the total sample size.

This index has the advantage of expressing the observed diversity as a proportion of the absolute maximum diversity at a given N.

Another proposed normalization is E, the ratio of observed diversity to the maximum possible diversity for a given N and K (i.e., if all species are equal in number of individuals):
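A sketch of McIntosh's measures, assuming the usual forms U = √(Σ ni²), D = (N − U)/(N − √N) and E = (N − U)/(N − N/√K):

```python
from math import sqrt

def mcintosh_u(counts):
    """McIntosh's diversity measure U = sqrt(sum n_i^2)."""
    return sqrt(sum(c * c for c in counts))

def mcintosh_d(counts):
    """Normalized version D = (N - U) / (N - sqrt(N))."""
    N = sum(counts)
    return (N - mcintosh_u(counts)) / (N - sqrt(N))

def mcintosh_e(counts):
    """Evenness version E = (N - U) / (N - N / sqrt(K)), which is 1
    when all K categories hold equal counts."""
    N, K = sum(counts), len(counts)
    return (N - mcintosh_u(counts)) / (N - N / sqrt(K))
```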

Fisher's alpha

This was the first index to be derived for diversity. [33]

where K is the number of categories and N is the number of data points in the sample. Fisher's α has to be estimated numerically from the data.

The expected number of individuals in the rth category where the categories have been placed in increasing size is

where X is an empirical parameter lying between 0 and 1. While X is best estimated numerically an approximate value can be obtained by solving the following two equations

where K is the number of categories and N is the total sample size.

The variance of α is approximately [34]
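Fisher's α must be found numerically from the standard relation S = α ln(1 + N/α) between richness S, sample size N and α; a bisection sketch (the bracket endpoints are arbitrary assumptions, and 0 < S < N is required):

```python
from math import log

def fishers_alpha(S, N, lo=1e-6, hi=1e6):
    """Solve S = alpha * ln(1 + N / alpha) for alpha by bisection.
    The left-hand side increases monotonically with alpha, so a sign
    change on [lo, hi] brackets the unique root."""
    f = lambda a: a * log(1 + N / a) - S
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```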

Strong's index

This index (Dw) is the distance between the Lorenz curve of species distribution and the 45 degree line. It is closely related to the Gini coefficient. [35]

In symbols it is

where max() is the maximum value taken over the N data points, K is the number of categories (or species) in the data set and ci is the cumulative total up and including the ith category.

Simpson's E

This is related to Simpson's D and is defined as

where D is Simpson's D and K is the number of categories in the sample.

Smith & Wilson's indices

Smith and Wilson suggested a number of indices based on Simpson's D.

where D is Simpson's D and K is the number of categories.

Heip's index

where H is the Shannon entropy and K is the number of categories.

This index is closely related to Sheldon's index which is

where H is the Shannon entropy and K is the number of categories.

Camargo's index

This index was created by Camargo in 1993. [36]

where K is the number of categories and pi is the proportion in the ith category.

Smith and Wilson's B

This index was proposed by Smith and Wilson in 1996. [37]

where θ is the slope of the log(abundance)-rank curve.

Nee, Harvey, and Cotgreave's index

This is the slope of the log(abundance)-rank curve.

Bulla's E

There are two versions of this index - one for continuous distributions (Ec) and the other for discrete (Ed). [38]

where

is the Schoener–Czekanowski index, K is the number of categories and N is the sample size.

Horn's information theory index

This index (Rik) is based on Shannon's entropy. [39] It is defined as

where

In these equations xij and xkj are the number of times the jth data type appears in the ith or kth sample respectively.

Rarefaction index

In a rarefied sample a random subsample of n items is chosen from the total of N items. In this subsample some groups may necessarily be absent. Let f(n) be the number of groups still present in the subsample of n items. f(n) is less than K, the number of categories, whenever at least one group is missing from this subsample.

The rarefaction curve f(n) is defined as:

Note that 0 ≤ f(n) ≤ K.

Furthermore,

Despite being defined at discrete values of n, these curves are most frequently displayed as continuous functions. [40]

This index is discussed further in Rarefaction (ecology).

Caswell's V

This is a z type statistic based on Shannon's entropy. [41]

where H is the Shannon entropy, E(H) is the expected Shannon entropy for a neutral model of distribution and SD(H) is the standard deviation of the entropy. The standard deviation is estimated from the formula derived by Pielou

where pi is the proportion made up by the ith category and N is the total in the sample.

Lloyd & Ghelardi's index

This is

where K is the number of categories and K' is the number of categories according to MacArthur's broken stick model yielding the observed diversity.

Average taxonomic distinctness index

This index is used to compare the relationship between hosts and their parasites. [42] It incorporates information about the phylogenetic relationship amongst the host species.

where s is the number of host species used by a parasite and ωij is the taxonomic distinctness between host species i and j.

Index of qualitative variation

Several indices with this name have been proposed.

One of these is

where K is the number of categories and pi is the proportion of the sample that lies in the ith category.

Theil's H

This index is also known as the multigroup entropy index or the information theory index. It was proposed by Theil in 1972. [43] The index is a weighted average of the samples' entropies.

Let

and

where pi is the proportion of type i in the ath sample, r is the total number of samples, ni is the size of the ith sample, N is the size of the population from which the samples were obtained and E is the entropy of the population.

Indices for comparison of two or more data types within a single sample

Several of these indexes have been developed to document the degree to which different data types of interest may coexist within a geographic area.

Index of dissimilarity

Let A and B be two types of data item. Then the index of dissimilarity is

where

Ai is the number of data type A at sample site i, Bi is the number of data type B at sample site i, K is the number of sites sampled and || is the absolute value.

This index is probably better known as the index of dissimilarity (D). [44] It is closely related to the Gini index.

This index is biased as its expectation under a uniform distribution is > 0.

A modification of this index has been proposed by Gorard and Taylor. [45] Their index (GT) is

Index of segregation

The index of segregation (IS) [46] is

where

and K is the number of units, and Ai and ti are the number of data type A in unit i and the total number of all data types in unit i.

Hutchen's square root index

This index (H) is defined as [47]

where pi is the proportion of the sample composed of the ith variate.

Lieberson's isolation index

This index ( Lxy ) was invented by Lieberson in 1981. [48]

where Xi and Yi are the variables of interest at the ith site, K is the number of sites examined and Xtot is the total number of variates of type X in the study.

Bell's index

This index is defined as [49]

where px is the proportion of the sample made up of variates of type X and

where Nx is the total number of variates of type X in the study, K is the number of samples in the study and xi and pi are the number of variates and the proportion of variates of type X respectively in the ith sample.

Index of isolation

The index of isolation is

where K is the number of units in the study, and Ai and ti are the number of units of type A and the number of all units in the ith sample.

A modified index of isolation has also been proposed

The MII lies between 0 and 1.

Gorard's index of segregation

This index (GS) is defined as

where

and Ai and ti are the number of data items of type A and the total number of items in the ith sample.

Index of exposure

This index is defined as

where

and Ai and Bi are the number of types A and B in the ith category and ti is the total number of data points in the ith category.

Ochiai index

This is a binary form of the cosine index. [50] It is used to compare presence/absence data of two data types (here A and B). It is defined as

where a is the number of sample units where both A and B are found, b is number of sample units where A but not B occurs and c is the number of sample units where type B is present but not type A.

Kulczyński's coefficient

This coefficient was invented by Stanisław Kulczyński in 1927 [51] and is an index of association between two types (here A and B). It varies in value between 0 and 1. It is defined as

where a is the number of sample units where type A and type B are present, b is the number of sample units where type A but not type B is present and c is the number of sample units where type B is present but not type A.

Yule's Q

This index was invented by Yule in 1900. [52] It concerns the association of two different types (here A and B). It is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present. Q varies in value between -1 and +1. In the ordinal case Q is known as the Goodman-Kruskal γ.

Because the denominator potentially may be zero, Leinhert and Sporer have recommended adding +1 to a, b, c and d. [53]
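A sketch of Yule's Q, assuming the usual 2×2 form (ad − bc)/(ad + bc); the +1 smoothing mentioned above can be applied to the cell counts before calling it:

```python
def yules_q(a, b, c, d):
    """Yule's Q = (ad - bc) / (ad + bc) for a 2x2 presence/absence
    table; ranges from -1 (perfect negative association) to +1.
    Raises ZeroDivisionError when ad + bc = 0 (hence the suggested
    +1 smoothing of a, b, c and d)."""
    return (a * d - b * c) / (a * d + b * c)
```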

Yule's Y

This index is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present.

Baroni–Urbani–Buser coefficient

This index was invented by Baroni-Urbani and Buser in 1976. [54] It varies between 0 and 1 in value. It is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present. N is the sample size.

When d = 0, this index is identical to the Jaccard index.

Hamman coefficient

This coefficient is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present. N is the sample size.

Rogers–Tanimoto coefficient

This coefficient is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present. N is the sample size

Sokal–Sneath coefficient

This coefficient is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present. N is the sample size.

Sokal's binary distance

This coefficient is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present. N is the sample size.

Russel–Rao coefficient

This coefficient is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present. N is the sample size.

Phi coefficient

This coefficient is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present.

Soergel's coefficient

This coefficient is defined as

where b is the number of samples where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present. N is the sample size.

Simpson's coefficient

This coefficient is defined as

where b is the number of samples where type A is present but not type B, c is the number of samples where type B is present but not type A.

Dennis' coefficient

This coefficient is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present. N is the sample size.

Forbes' coefficient

This coefficient was proposed by Stephen Alfred Forbes in 1907. [55] It is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present. N is the sample size (N = a + b + c + d).

A modification of this coefficient which does not require the knowledge of d has been proposed by Alroy [56]

where n = a + b + c.

Simple match coefficient

This coefficient is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present. N is the sample size.

Fossum's coefficient

This coefficient is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present. N is the sample size.

Stile's coefficient

This coefficient is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A, d is the sample count where neither type A nor type B are present, n equals a + b + c + d and || is the modulus (absolute value) of the difference.

Michael's coefficient

This coefficient is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present.

Peirce's coefficient

In 1884 Charles Peirce suggested [57] the following coefficient

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present.

Hawkin–Dotson coefficient

In 1975 Hawkin and Dotson proposed the following coefficient

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present. N is the sample size.

Benini coefficient

In 1901 Benini proposed the following coefficient

where a is the number of samples where types A and B are both present, b is where type A is present but not type B and c is the number of samples where type B is present but not type A. Min(b, c) is the minimum of b and c.

Gilbert coefficient

Gilbert proposed the following coefficient

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the sample count where neither type A nor type B are present. N is the sample size.

Gini index

The Gini index is

where a is the number of samples where types A and B are both present, b is where type A is present but not type B and c is the number of samples where type B is present but not type A.

Modified Gini index

The modified Gini index is

where a is the number of samples where types A and B are both present, b is where type A is present but not type B and c is the number of samples where type B is present but not type A.

Kuhn's index

Kuhn proposed the following coefficient in 1965

where a is the number of samples where types A and B are both present, b is where type A is present but not type B and c is the number of samples where type B is present but not type A. K is a normalizing parameter. N is the sample size.

This index is also known as the coefficient of arithmetic means.

Eyraud index

Eyraud proposed the following coefficient in 1936

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the number of samples where both A and B are not present.

Soergel distance

This is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the number of samples where both A and B are not present. N is the sample size.

Tanimoto index

This is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A and d is the number of samples where both A and B are not present. N is the sample size.

Piatetsky–Shapiro's index

This is defined as

where a is the number of samples where types A and B are both present, b is where type A is present but not type B, c is the number of samples where type B is present but not type A.

Indices for comparison between two or more samples

Czekanowski's quantitative index

This is also known as the Bray–Curtis index, Schoener's index, least common percentage index, index of affinity or proportional similarity. It is related to the Sørensen similarity index.

where xi and xj are the number of species in sites i and j respectively and the minimum is taken over the number of species in common between the two sites.

Canberra metric

The Canberra distance is a weighted version of the L1 metric. It was introduced in 1966 [58] and refined in 1967 [59] by G. N. Lance and W. T. Williams. It is used to define a distance between two vectors – here two sites with K categories within each site.

The Canberra distance d between vectors p and q in a K-dimensional real vector space is

where pi and qi are the values of the ith category of the two vectors.

Sorensen's coefficient of community

This is used to measure similarities between communities.

where s1 and s2 are the number of species in community 1 and 2 respectively and c is the number of species common to both areas.

Jaccard's index

This is a measure of the similarity between two samples:

where A is the number of data points shared between the two samples and B and C are the data points found only in the first and second samples respectively.

This index was invented in 1902 by the Swiss botanist Paul Jaccard. [60]

Under a random distribution the expected value of J is [61]

The standard error of this index with the assumption of a random distribution is

where N is the total size of the sample.
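A sketch of the Jaccard index over two samples treated as sets, assuming J = A/(A + B + C):

```python
def jaccard(x, y):
    """Jaccard index J = A / (A + B + C): shared items over the union
    of the two samples."""
    x, y = set(x), set(y)
    return len(x & y) / len(x | y)
```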

Dice's index

This is a measure of the similarity between two samples:

where A is the number of data points shared between the two samples and B and C are the data points found only in the first and second samples respectively.
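A sketch of Dice's index, assuming 2A/(2A + B + C); note that 2A + B + C is just |X| + |Y|, and the index relates to Jaccard's J by D = 2J/(1 + J):

```python
def dice(x, y):
    """Dice's index = 2A / (2A + B + C) over two samples as sets."""
    x, y = set(x), set(y)
    return 2 * len(x & y) / (len(x) + len(y))
```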

Match coefficient

This is a measure of the similarity between two samples:

where N is the number of data points in the two samples and B and C are the data points found only in the first and second samples respectively.

Morisita's index

Masaaki Morisita's index of dispersion ( Im ) is the scaled probability that two points chosen at random from the whole population are in the same sample. [62] Higher values indicate a more clumped distribution.

An alternative formulation is

where n is the total sample size, m is the sample mean and x are the individual values with the sum taken over the whole sample. It is also equal to

where IMC is Lloyd's index of crowding. [63]

This index is relatively independent of the population density but is affected by the sample size.

Morisita showed that the statistic [62]

is distributed as a chi-squared variable with n − 1 degrees of freedom.

An alternative significance test for this index has been developed for large samples. [64]

where m is the overall sample mean, n is the number of sample units and z is the normal distribution abscissa. Significance is tested by comparing the value of z against the values of the normal distribution.

Morisita's overlap index

Morisita's overlap index is used to compare overlap among samples. [65] The index is based on the assumption that increasing the size of the samples will increase the diversity because it will include different habitats.

xi is the number of times species i is represented in the total X from one sample.
yi is the number of times species i is represented in the total Y from another sample.
Dx and Dy are the Simpson's index values for the x and y samples respectively.
S is the number of unique species

CD = 0 if the two samples do not overlap in terms of species, and CD = 1 if the species occur in the same proportions in both samples.

Horn introduced a modification of the index [66]

Standardised Morisita's index

Smith-Gill developed a statistic based on Morisita's index which is independent of both sample size and population density and bounded by −1 and +1. This statistic is calculated as follows [67]

First determine Morisita's index ( Id ) in the usual fashion. Then let k be the number of units the population was sampled from. Calculate the two critical values

where χ2 is the chi square value for n − 1 degrees of freedom at the 97.5% and 2.5% levels of confidence.

The standardised index ( Ip ) is then calculated from one of the formulae below

When Id ≥ Mc > 1

When Mc > Id ≥ 1

When 1 > Id ≥ Mu

When 1 > Mu > Id

Ip ranges between +1 and −1 with 95% confidence intervals of ±0.5. Ip has the value of 0 if the pattern is random; if the pattern is uniform, Ip < 0 and if the pattern shows aggregation, Ip > 0.

Peet's evenness indices

These indices are a measure of evenness between samples. [68]

where I is an index of diversity, Imax and Imin are the maximum and minimum values of I between the samples being compared.

Loevinger's coefficient

Loevinger has suggested a coefficient H defined as follows:

where pmax and pmin are the maximum and minimum proportions in the sample.

Tversky index

The Tversky index [69] is an asymmetric measure that lies between 0 and 1.

For samples A and B the Tversky index (S) is

The values of α and β are arbitrary. Setting both α and β to 0.5 gives Dice's coefficient. Setting both to 1 gives Tanimoto's coefficient.
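A minimal sketch of the index over finite sets, with the two special cases noted above checked against the Dice and Jaccard (Tanimoto) forms:

```python
def tversky(a, b, alpha=0.5, beta=0.5):
    """Tversky index S(A, B) = |A∩B| / (|A∩B| + α|A−B| + β|B−A|)."""
    a, b = set(a), set(b)
    inter = len(a & b)
    return inter / (inter + alpha * len(a - b) + beta * len(b - a))

# alpha = beta = 0.5 recovers Dice's coefficient 2|A∩B| / (|A| + |B|);
# alpha = beta = 1 recovers Tanimoto's (Jaccard's) coefficient |A∩B| / |A∪B|.
```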

A symmetrical variant of this index has also been proposed. [70]

where

Several similar indices have been proposed.

Monostori et al. proposed the SymmetricSimilarity index [71]

where d(X) is some measure derived from X.

Bernstein and Zobel have proposed the S2 and S3 indexes [72]

S3 is simply twice the SymmetricSimilarity index. Both are related to Dice's coefficient.

Metrics used

A number of metrics (distances between samples) have been proposed.

Euclidean distance

While this is usually used in quantitative work it may also be used in qualitative work. This is defined as

where djk is the distance between xij and xik.

Gower's distance

This is defined as

where di is the distance between the ith samples and wi is the weight given to the ith distance.

Manhattan distance

While this is more commonly used in quantitative work it may also be used in qualitative work. This is defined as

where djk is the distance between xij and xik and |·| denotes the absolute value of the difference between xij and xik.

A modified version of the Manhattan distance can be used to find a zero (root) of a polynomial of any degree using Lill's method.
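The three distances above can be sketched in Python; the formulas used here are the standard ones (Euclidean root-sum-of-squares, Gower as a weighted average of per-variable distances, Manhattan as the sum of absolute differences):

```python
import math

def euclidean(x, y):
    """Root of the sum of squared coordinate differences."""
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def manhattan(x, y):
    """Sum of absolute coordinate differences."""
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

def gower(distances, weights):
    """Weighted average of per-variable distances di with weights wi."""
    return sum(w * d for w, d in zip(weights, distances)) / sum(weights)
```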

Prevosti's distance

This is related to the Manhattan distance. It was described by Prevosti et al. and was used to compare differences between chromosomes. [73] Let P and Q be two collections of r finite probability distributions. Let these distributions have values that are divided into k categories. Then the distance DPQ is

where r is the number of discrete probability distributions in each population, kj is the number of categories in distributions Pj and Qj and pji (respectively qji) is the theoretical probability of category i in distribution Pj (Qj) in population P(Q).

Its statistical properties were examined by Sanchez et al. [74] who recommended a bootstrap procedure to estimate confidence intervals when testing for differences between samples.
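A sketch of the distance in Python, assuming the standard form DPQ = (1 / 2r) Σj Σi |pji − qji|:

```python
def prevosti(P, Q):
    """Prevosti's distance between two collections of r distributions.

    P and Q are lists of r probability distributions, each a list of
    category probabilities. Returns 0 for identical collections and 1
    for completely disjoint ones.
    """
    r = len(P)
    return sum(
        abs(p - q)
        for pj, qj in zip(P, Q)
        for p, q in zip(pj, qj)
    ) / (2 * r)
```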

Other metrics

Let

where min(x,y) is the lesser value of the pair x and y.

Then

is the Manhattan distance,

is the Bray−Curtis distance,

is the Jaccard (or Ruzicka) distance and

is the Kulczynski distance.
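A sketch of these four metrics in Python, assuming the standard forms built from the shared-abundance sum s = Σ min(xi, yi):

```python
def pair_metrics(x, y):
    """Manhattan, Bray-Curtis, Jaccard (Ruzicka) and Kulczynski distances.

    All are built from s = sum of min(xi, yi), the shared abundance.
    """
    s = sum(min(xi, yi) for xi, yi in zip(x, y))
    sx, sy = sum(x), sum(y)
    manhattan = sx + sy - 2 * s          # equals sum of |xi - yi|
    bray_curtis = manhattan / (sx + sy)  # normalised to [0, 1]
    jaccard = 1 - s / (sx + sy - s)      # Ruzicka form for abundances
    kulczynski = 1 - (s / sx + s / sy) / 2
    return manhattan, bray_curtis, jaccard, kulczynski
```

Identical samples give all four distances as 0; completely disjoint samples give Bray-Curtis, Jaccard and Kulczynski distances of 1.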

Similarities between texts

HaCohen-Kerner et al. have proposed a variety of metrics for comparing two or more texts. [75]

Ordinal data

If the categories are at least ordinal then a number of other indices may be computed.

Leik's D

Leik's measure of dispersion (D) is one such index. [76] Let there be K categories and let pi be fi/N where fi is the number in the ith category and let the categories be arranged in ascending order. Let

where a = 1, 2, ..., K. Let da = ca if ca ≤ 0.5 and 1 − ca otherwise. Then
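The construction can be sketched in Python, assuming the standard normalisation D = 2 Σ da / (K − 1), with the da built from the cumulative proportions ca as described above:

```python
def leik_D(freqs):
    """Leik's measure of ordinal dispersion.

    freqs are the category frequencies in ascending category order.
    Assumes the normalisation D = 2 * sum(da) / (K - 1), so that D = 0
    when all cases fall in one category and D = 1 for a polarised split
    between the two extreme categories.
    """
    N = sum(freqs)
    K = len(freqs)
    c, D = 0.0, 0.0
    for f in freqs[:-1]:          # cK = 1 always contributes d = 0
        c += f / N                # cumulative proportion c_a
        D += c if c <= 0.5 else 1 - c
    return 2 * D / (K - 1)
```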

Normalised Herfindahl measure

This is the square of the coefficient of variation divided by N − 1 where N is the sample size.

where m is the mean and s is the standard deviation.

Potential-for-conflict Index

The potential-for-conflict Index (PCI) describes the ratio of scoring on either side of a rating scale's centre point. [77] This index requires at least ordinal data. This ratio is often displayed as a bubble graph.

The PCI uses an ordinal scale with an odd number of rating points (−n to +n) centred at 0. It is calculated as follows

where Z = 2n, |·| is the absolute value (modulus), r+ is the number of responses in the positive side of the scale, r− is the number of responses in the negative side of the scale, X+ are the responses on the positive side of the scale, X− are the responses on the negative side of the scale and

Theoretical difficulties are known to exist with the PCI. The PCI can be computed only for scales with a neutral center point and an equal number of response options on either side of it. Also a uniform distribution of responses does not always yield the midpoint of the PCI statistic but rather varies with the number of possible responses or values in the scale. For example, five-, seven- and nine-point scales with a uniform distribution of responses give PCIs of 0.60, 0.57 and 0.50 respectively.

The first of these problems is relatively minor as most ordinal scales with an even number of responses can be extended (or reduced) by a single value to give an odd number of possible responses. Scales can usually be recentred if this is required. The second problem is more difficult to resolve and may limit the PCI's applicability.

The PCI has been extended [78]

where K is the number of categories, ki is the number in the ith category, dij is the distance between the ith and jth categories, and δ is the maximum distance on the scale multiplied by the number of times it can occur in the sample. For a sample with an even number of data points

and for a sample with an odd number of data points

where N is the number of data points in the sample and dmax is the maximum distance between points on the scale.

Vaske et al. suggest a number of possible distance measures for use with this index. [78]

if the signs (+ or −) of ri and rj differ. If the signs are the same dij = 0.

where p is an arbitrary real number > 0.

if sign(ri) ≠ sign(rj) and p is a real number > 0. If the signs are the same then dij = 0. m is D1, D2 or D3.

The difference between D1 and D2 is that the first does not include neutrals in the distance while the latter does. For example, respondents scoring −2 and +1 would have a distance of 2 under D1 and 3 under D2.

The use of a power (p) in the distances allows for the rescaling of extreme responses. These differences can be highlighted with p > 1 or diminished with p < 1.
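The two basic distances can be sketched directly from the worked example above (respondents scoring −2 and +1 are at distance 2 under D1 and 3 under D2), with an optional power p applied to rescale extremes:

```python
def d1(ri, rj, p=1):
    """Distance excluding the neutral point; 0 when signs agree."""
    return (abs(ri - rj) - 1) ** p if ri * rj < 0 else 0

def d2(ri, rj, p=1):
    """Distance including the neutral point; 0 when signs agree."""
    return abs(ri - rj) ** p if ri * rj < 0 else 0
```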

In simulations with variates drawn from a uniform distribution the PCI2 has a symmetric unimodal distribution. [78] The tails of its distribution are larger than those of a normal distribution.

Vaske et al. suggest the use of a t test to compare the values of the PCI between samples if the PCIs are approximately normally distributed.

van der Eijk's A

This measure is a weighted average of the degree of agreement in the frequency distribution. [79] A ranges from −1 (perfect bimodality) to +1 (perfect unimodality). It is defined as

where U is the unimodality of the distribution, S the number of categories that have nonzero frequencies and K the total number of categories.

The value of U is 1 if the distribution has any of the three following characteristics:

With distributions other than these the data must be divided into 'layers'. Within a layer the responses are either equal or zero. The categories do not have to be contiguous. A value for A for each layer (Ai) is calculated and a weighted average for the distribution is determined. The weights (wi) for each layer are the number of responses in that layer. In symbols

A uniform distribution has A = 0; when all the responses fall into one category, A = +1.

One theoretical problem with this index is that it assumes that the intervals are equally spaced. This may limit its applicability.

Birthday problem

If there are n units in the sample and they are randomly distributed into k categories (n ≤ k), this can be considered a variant of the birthday problem. [80] The probability (p) of all the categories having only one unit is

If k is large and n is small compared with k^(2/3) then to a good approximation

This approximation follows from the exact formula as follows:
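As a numerical sketch, the exact probability can be computed as the product Π (1 − i/k) over i = 0, ..., n − 1, with exp(−n(n − 1)/(2k)) as the usual approximation:

```python
import math

def p_all_distinct(n, k):
    """Exact probability that n units fall into n distinct categories of k."""
    p = 1.0
    for i in range(n):
        p *= (k - i) / k
    return p

def p_approx(n, k):
    """Large-k approximation exp(-n(n-1) / (2k))."""
    return math.exp(-n * (n - 1) / (2 * k))
```

For the classical case n = 23 and k = 365 the exact probability is about 0.493 and the approximation gives about 0.500.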

Sample size estimates

For p = 0.5 and p = 0.05 respectively the following estimates of n may be useful

This analysis can be extended to multiple categories. For p = 0.5 and p = 0.05 we have respectively

where ci is the size of the ith category. This analysis assumes that the categories are independent.

If the data are ordered in some fashion, then for at least one event to occur in two categories lying within j categories of each other, a probability of 0.5 or 0.05 requires a sample size (n) respectively of [81]

where k is the number of categories.

Birthday-death day problem

Whether or not there is a relation between birthdays and death days has been investigated with the statistic [82]

where d is the number of days in the year between the birthday and the death day.

Rand index

The Rand index is used to test whether two or more classification systems agree on a data set. [83]

Given a set S of n elements and two partitions of S to compare, X, a partition of S into r subsets, and Y, a partition of S into s subsets, define the following:

a, the number of pairs of elements in S that are in the same subset in X and in the same subset in Y;
b, the number of pairs of elements that are in different subsets in X and in different subsets in Y;
c, the number of pairs of elements that are in the same subset in X and in different subsets in Y;
d, the number of pairs of elements that are in different subsets in X and in the same subset in Y.

The Rand index, R, is defined as

R = (a + b) / (a + b + c + d)

Intuitively, a + b can be considered as the number of agreements between X and Y and c + d as the number of disagreements between X and Y.

Adjusted Rand index

The adjusted Rand index is the corrected-for-chance version of the Rand index. [83] [84] [85] Though the Rand Index may only yield a value between 0 and +1, the adjusted Rand index can yield negative values if the index is less than the expected index. [86]

The contingency table

Given a set S of n elements, and two groupings or partitions (e.g. clusterings) of these points, namely X = {X1, X2, ..., Xr} and Y = {Y1, Y2, ..., Ys}, the overlap between X and Y can be summarized in a contingency table [nij] where each entry nij denotes the number of objects in common between Xi and Yj: nij = |Xi ∩ Yj|.

The table has one row for each Xi and one column for each Yj; the entry in row i and column j is nij, the row sums are ai and the column sums are bj.

Definition

The adjusted form of the Rand Index, the Adjusted Rand Index, is

AdjustedIndex = (Index − ExpectedIndex) / (MaxIndex − ExpectedIndex)

more specifically

ARI = [Σij C(nij, 2) − Σi C(ai, 2) Σj C(bj, 2) / C(n, 2)] / [(1/2)(Σi C(ai, 2) + Σj C(bj, 2)) − Σi C(ai, 2) Σj C(bj, 2) / C(n, 2)]

where the nij, ai and bj are values from the contingency table and C(·, 2) denotes the number of unordered pairs.

Since the denominator is the total number of pairs, the Rand index represents the frequency of occurrence of agreements over the total pairs, or the probability that X and Y will agree on a randomly chosen pair.
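A minimal Python sketch of both indices, counting pair agreements directly for the Rand index and using the contingency-table formula for the adjusted version:

```python
from itertools import combinations
from math import comb
from collections import Counter

def rand_index(X, Y):
    """Fraction of element pairs on which labelings X and Y agree."""
    pairs = list(combinations(range(len(X)), 2))
    agree = sum((X[i] == X[j]) == (Y[i] == Y[j]) for i, j in pairs)
    return agree / len(pairs)

def adjusted_rand_index(X, Y):
    """Chance-corrected Rand index from the contingency table of X vs Y."""
    n = len(X)
    nij = Counter(zip(X, Y))          # contingency-table cells
    ai, bj = Counter(X), Counter(Y)   # row and column sums
    sum_ij = sum(comb(v, 2) for v in nij.values())
    sum_a = sum(comb(v, 2) for v in ai.values())
    sum_b = sum(comb(v, 2) for v in bj.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

Identical partitions give 1 for both indices, while the adjusted index, unlike the plain Rand index, can go negative when agreement is below its chance expectation.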

Evaluation of indices

Different indices give different values of variation, and may be used for different purposes: several are used and critiqued in the sociology literature especially.

If one wishes to simply make ordinal comparisons between samples (is one sample more or less varied than another), the choice of IQV is relatively less important, as they will often give the same ordering.

Where the data is ordinal a method that may be of use in comparing samples is ORDANOVA.

In some cases it is useful not to standardize an index to run from 0 to 1, regardless of the number of categories or samples ( Wilcox 1973 , p. 338), though indices are generally standardized in this way.

See also

Notes

  1. This can only happen if the number of cases is a multiple of the number of categories.
  2. Freeman LC (1965) Elementary applied statistics. New York: John Wiley and Sons pp. 40–43
  3. Kendal MC, Stuart A (1958) The advanced theory of statistics. Hafner Publishing Company p. 46
  4. Mueller JE, Schuessler KP (1961) Statistical reasoning in sociology. Boston: Houghton Mifflin Company. pp. 177–179
  5. Wilcox (1967), p. [ page needed ].
  6. Kaiser HF (1968) "A measure of the population quality of legislative apportionment." The American Political Science Review 62 (1) 208
  7. Joel Gombin (August 18, 2015). "qualvar: Initial release (Version v0.1)". Zenodo. doi:10.5281/zenodo.28341.
  8. Gibbs & Poston Jr (1975).
  9. Lieberson (1969), p. 851.
  10. IQV at xycoon
  11. Hunter, PR; Gaston, MA (1988). "Numerical index of the discriminatory ability of typing systems: an application of Simpson's index of diversity". J Clin Microbiol. 26 (11): 2465–2466. doi:10.1128/jcm.26.11.2465-2466.1988. PMC   266921 . PMID   3069867.
  12. Friedman WF (1925) The incidence of coincidence and its applications in cryptanalysis. Technical Paper. Office of the Chief Signal Officer. United States Government Printing Office.
  13. Gini CW (1912) Variability and mutability, contribution to the study of statistical distributions and relations. Studi Economico-Giuricici della R. Universita de Cagliari
  14. Simpson, EH (1949). "Measurement of diversity". Nature. 163 (4148): 688. Bibcode:1949Natur.163..688S. doi: 10.1038/163688a0 .
  15. Bachi R (1956) A statistical analysis of the revival of Hebrew in Israel. In: Bachi R (ed) Scripta Hierosolymitana, Vol III, Jerusalem: Magnus press pp 179–247
  16. Mueller JH, Schuessler KF (1961) Statistical reasoning in sociology. Boston: Houghton Mifflin
  17. Gibbs, JP; Martin, WT (1962). "Urbanization, technology and division of labor: International patterns". American Sociological Review. 27 (5): 667–677. doi:10.2307/2089624. JSTOR   2089624.
  18. Lieberson (1969), p. [ page needed ].
  19. Blau P (1977) Inequality and Heterogeneity. Free Press, New York
  20. Perry M, Kader G (2005) Variation as unalikeability. Teaching Stats 27 (2) 58–60
  21. Greenberg, JH (1956). "The measurement of linguistic diversity". Language. 32 (1): 109–115. doi:10.2307/410659. JSTOR   410659.
  22. Lautard EH (1978) PhD thesis.[ full citation needed ]
  23. Berger, WH; Parker, FL (1970). "Diversity of planktonic Foramenifera in deep sea sediments". Science. 168 (3937): 1345–1347. Bibcode:1970Sci...168.1345B. doi:10.1126/science.168.3937.1345. PMID   17731043. S2CID   29553922.
  24. 1 2 Hill, M O (1973). "Diversity and evenness: a unifying notation and its consequences". Ecology. 54 (2): 427–431. Bibcode:1973Ecol...54..427H. doi:10.2307/1934352. JSTOR   1934352.
  25. Margalef R (1958) Temporal succession and spatial heterogeneity in phytoplankton. In: Perspectives in marine biology. Buzzati-Traverso (ed) Univ Calif Press, Berkeley pp 323–347
  26. Menhinick, EF (1964). "A comparison of some species-individuals diversity indices applied to samples of field insects". Ecology. 45 (4): 859–861. Bibcode:1964Ecol...45..859M. doi:10.2307/1934933. JSTOR   1934933.
  27. Kuraszkiewicz W (1951) Nakladen Wroclawskiego Towarzystwa Naukowego
  28. Guiraud P (1954) Les caractères statistiques du vocabulaire. Presses Universitaires de France, Paris
  29. Panas E (2001) The Generalized Torquist: Specification and estimation of a new vocabulary-text size function. J Quant Ling 8(3) 233–252
  30. Kempton, RA; Taylor, LR (1976). "Models and statistics for species diversity". Nature. 262 (5571): 818–820. Bibcode:1976Natur.262..818K. doi:10.1038/262818a0. PMID   958461. S2CID   4168222.
  31. Hutcheson K (1970) A test for comparing diversities based on the Shannon formula. J Theo Biol 29: 151–154
  32. McIntosh RP (1967). An Index of Diversity and the Relation of Certain Concepts to Diversity. Ecology, 48(3), 392–404
  33. Fisher RA, Corbet A, Williams CB (1943) The relation between the number of species and the number of individuals in a random sample of an animal population. Animal Ecol 12: 42–58
  34. Anscombe (1950) Sampling theory of the negative binomial and logarithmic series distributions. Biometrika 37: 358–382
  35. Strong, WL (2002). "Assessing species abundance uneveness within and between plant communities" (PDF). Community Ecology. 3 (2): 237–246. doi:10.1556/comec.3.2002.2.9.
  36. Camargo JA (1993) Must dominance increase with the number of subordinate species in competitive interactions? J. Theor Biol 161 537–542
  37. Smith, Wilson (1996)[ full citation needed ]
  38. Bulla, L (1994). "An index of evenness and its associated diversity measure". Oikos. 70 (1): 167–171. Bibcode:1994Oikos..70..167B. doi:10.2307/3545713. JSTOR   3545713.
  39. Horn, HS (1966). "Measurement of 'overlap' in comparative ecological studies". Am Nat. 100 (914): 419–423. doi:10.1086/282436. S2CID   84469180.
  40. Siegel, Andrew F (2006) "Rarefaction curves." Encyclopedia of Statistical Sciences 10.1002/0471667196.ess2195.pub2.
  41. Caswell H (1976) Community structure: a neutral model analysis. Ecol Monogr 46: 327–354
  42. Poulin, R; Mouillot, D (2003). "Parasite specialization from a phylogenetic perspective: a new index of host specificity". Parasitology. 126 (5): 473–480. CiteSeerX   10.1.1.574.7432 . doi:10.1017/s0031182003002993. PMID   12793652. S2CID   9440341.
  43. Theil H (1972) Statistical decomposition analysis. Amsterdam: North-Holland Publishing Company>
  44. Duncan OD, Duncan B (1955) A methodological analysis of segregation indexes. Am Sociol Review, 20: 210–217
  45. Gorard S, Taylor C (2002b) What is segregation? A comparison of measures in terms of 'strong' and 'weak' compositional invariance. Sociology, 36(4), 875–895
  46. Massey, DS; Denton, NA (1988). "The dimensions of residential segregation". Social Forces. 67 (2): 281–315. doi: 10.1093/sf/67.2.281 .
  47. Hutchens RM (2004) One measure of segregation. International Economic Review 45: 555–578
  48. Lieberson S (1981). "An asymmetrical approach to segregation". In Peach C, Robinson V, Smith S (eds.). Ethnic segregation in cities. London: Croom Helm. pp. 61–82.
  49. Bell, W (1954). "A probability model for the measurement of ecological segregation". Social Forces. 32 (4): 357–364. doi:10.2307/2574118. JSTOR   2574118.
  50. Ochiai A (1957) Zoogeographic studies on the soleoid fishes found in Japan and its neighbouring regions. Bull Jpn Soc Sci Fish 22: 526–530
  51. Kulczynski S (1927) Die Pflanzenassoziationen der Pieninen. Bulletin International de l'Académie Polonaise des Sciences et des Lettres, Classe des Sciences
  52. Yule GU (1900) On the association of attributes in statistics. Philos Trans Roy Soc
  53. Lienert GA and Sporer SL (1982) Interkorrelationen seltner Symptome mittels Nullfeldkorrigierter YuleKoeffizienten. Psychologische Beitrage 24: 411–418
  54. Baroni-Urbani, C; Buser, MW (1976). "similarity of binary Data". Systematic Biology. 25 (3): 251–259. doi:10.2307/2412493. JSTOR   2412493.
  55. Forbes SA (1907) On the local distribution of certain Illinois fishes: an essay in statistical ecology. Bulletin of the Illinois State Laboratory of Natural History 7:272–303
  56. Alroy J (2015) A new twist on a very old binary similarity coefficient. Ecology 96 (2) 575-586
  57. Carl R. Hausman and Douglas R. Anderson (2012). Conversations on Peirce: Reals and Ideals. Fordham University Press. p. 221. ISBN   9780823234677.
  58. Lance, G. N.; Williams, W. T. (1966). "Computer programs for hierarchical polythetic classification ("similarity analysis")". Computer Journal. 9 (1): 60–64. doi: 10.1093/comjnl/9.1.60 .
  59. Lance, G. N.; Williams, W. T. (1967). "Mixed-data classificatory programs I.) Agglomerative Systems". Australian Computer Journal: 15–20.
  60. Jaccard P (1902) Lois de distribution florale. Bulletin de la Socíeté Vaudoise des Sciences Naturelles 38:67-130
  61. Archer AW and Maples CG (1989) Response of selected binomial coefficients to varying degrees of matrix sparseness and to matrices with known data interrelationships. Mathematical Geology 21: 741–753
  62. 1 2 Morisita M (1959) Measuring the dispersion and the analysis of distribution patterns. Memoirs of the Faculty of Science, Kyushu University Series E. Biol 2:215–235
  63. Lloyd M (1967) Mean crowding. J Anim Ecol 36: 1–30
  64. Pedigo LP & Buntin GD (1994) Handbook of sampling methods for arthropods in agriculture. CRC Boca Raton FL
  65. Morisita M (1959) Measuring of the dispersion and analysis of distribution patterns. Memoirs of the Faculty of Science, Kyushu University, Series E Biology. 2: 215–235
  66. Horn, HS (1966). "Measurement of "Overlap" in comparative ecological studies". The American Naturalist. 100 (914): 419–424. doi:10.1086/282436. S2CID   84469180.
  67. Smith-Gill SJ (1975). "Cytophysiological basis of disruptive pigmentary patterns in the leopard frog Rana pipiens. II. Wild type and mutant cell specific patterns". J Morphol. 146 (1): 35–54. doi:10.1002/jmor.1051460103. PMID   1080207. S2CID   23780609.
  68. Peet (1974) The measurements of species diversity. Annu Rev Ecol Syst 5: 285–307
  69. Tversky, Amos (1977). "Features of Similarity" (PDF). Psychological Review. 84 (4): 327–352. doi:10.1037/0033-295x.84.4.327.
  70. Jimenez S, Becerra C, Gelbukh A SOFTCARDINALITY-CORE: Improving text overlap with distributional measures for semantic textual similarity. Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the main conference and the shared task: semantic textual similarity, p194-201. June 7–8, 2013, Atlanta, Georgia, USA
  71. Monostori K, Finkel R, Zaslavsky A, Hodasz G and Patke M (2002) Comparison of overlap detection techniques. In: Proceedings of the 2002 International Conference on Computational Science. Lecture Notes in Computer Science 2329: 51-60
  72. Bernstein Y and Zobel J (2004) A scalable system for identifying co-derivative documents. In: Proceedings of 11th International Conference on String Processing and Information Retrieval (SPIRE) 3246: 55-67
  73. Prevosti, A; Ribo, G; Serra, L; Aguade, M; Balanya, J; Monclus, M; Mestres, F (1988). "Colonization of America by Drosophila subobscura: experiment in natural populations that supports the adaptive role of chromosomal inversion polymorphism". Proc Natl Acad Sci USA. 85 (15): 5597–5600. Bibcode:1988PNAS...85.5597P. doi: 10.1073/pnas.85.15.5597 . PMC   281806 . PMID   16593967.
  74. Sanchez, A; Ocana, J; Utzetb, F; Serrac, L (2003). "Comparison of Prevosti genetic distances". Journal of Statistical Planning and Inference. 109 (1–2): 43–65. doi:10.1016/s0378-3758(02)00297-5.
  75. HaCohen-Kerner Y, Tayeb A and Ben-Dror N (2010) Detection of simple plagiarism in computer science papers. In: Proceedings of the 23rd International Conference on Computational Linguistics pp 421-429
  76. Leik R (1966) A measure of ordinal consensus. Pacific sociological review 9 (2): 85–90
  77. Manfredo M, Vaske, JJ, Teel TL (2003) The potential for conflict index: A graphic approach to practical significance of human dimensions research. Human Dimensions of Wildlife 8: 219–228
  78. 1 2 3 Vaske JJ, Beaman J, Barreto H, Shelby LB (2010) An extension and further validation of the potential for conflict index. Leisure Sciences 32: 240–254
  79. Van der Eijk C (2001) Measuring agreement in ordered rating scales. Quality and quantity 35(3): 325–341
  80. Von Mises R (1939) Uber Aufteilungs-und Besetzungs-Wahrcheinlichkeiten. Revue de la Facultd des Sciences de de I'Universite d'lstanbul NS 4: 145−163
  81. Sevast'yanov BA (1972) Poisson limit law for a scheme of sums of dependent random variables. (trans. S. M. Rudolfer) Theory of probability and its applications, 17: 695−699
  82. Hoaglin DC, Mosteller, F and Tukey, JW (1985) Exploring data tables, trends, and shapes, New York: John Wiley
  83. 1 2 W. M. Rand (1971). "Objective criteria for the evaluation of clustering methods". Journal of the American Statistical Association . 66 (336): 846–850. arXiv: 1704.01036 . doi:10.2307/2284239. JSTOR   2284239.
  84. Lawrence Hubert and Phipps Arabie (1985). "Comparing partitions". Journal of Classification. 2 (1): 193–218. doi:10.1007/BF01908075. S2CID   189915041.
  85. Nguyen Xuan Vinh, Julien Epps and James Bailey (2009). "Information Theoretic Measures for Clustering Comparison: Is a Correction for Chance Necessary?" (PDF). ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning. ACM. pp. 1073–1080. Archived from the original (PDF) on 25 March 2012. PDF.
  86. Wagner, Silke; Wagner, Dorothea (12 January 2007). "Comparing Clusterings - An Overview" (PDF). Retrieved 14 February 2018.


References