G-test

In statistics, G-tests are likelihood-ratio or maximum likelihood statistical significance tests that are increasingly being used in situations where chi-squared tests were previously recommended. [1]

The general formula for G is

G = 2 \sum_i O_i \ln\left(\frac{O_i}{E_i}\right),

where O_i is the observed count in a cell, E_i is the expected count under the null hypothesis, \ln denotes the natural logarithm, and the sum is taken over all non-empty cells. Furthermore, the total observed count should be equal to the total expected count:

\sum_i O_i = \sum_i E_i = N,

where N is the total number of observations.
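
As a concrete illustration, the statistic can be computed directly from observed counts and null-hypothesis expectations. The sketch below uses NumPy, with counts chosen purely for demonstration.

```python
import numpy as np

# Hypothetical observed counts and null-hypothesis proportions (illustration only)
observed = np.array([43, 52, 54, 40])
null_probs = np.array([0.25, 0.25, 0.25, 0.25])
expected = observed.sum() * null_probs   # total expected equals total observed

# G = 2 * sum over non-empty cells of O_i * ln(O_i / E_i)
G = 2.0 * np.sum(observed * np.log(observed / expected))
print(G)
```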

G-tests have been recommended at least since the 1981 edition of Biometry, a statistics textbook by Robert R. Sokal and F. James Rohlf. [2]

Derivation

We can derive the value of the G-test from the log-likelihood ratio test where the underlying model is a multinomial model.

Suppose we had a sample x = (x_1, \ldots, x_m) where each x_i is the number of times that an object of type i was observed. Furthermore, let n = \sum_{i=1}^m x_i be the total number of objects observed. If we assume that the underlying model is multinomial, then the test statistic is based on the log-likelihood ratio

\ln\left(\frac{L(\tilde{\theta} \mid x)}{L(\hat{\theta} \mid x)}\right) = \ln\left(\prod_{i=1}^m \left(\frac{\tilde{\theta}_i}{\hat{\theta}_i}\right)^{x_i}\right),

where \tilde{\theta} is the null hypothesis and \hat{\theta} is the maximum likelihood estimate (MLE) of the parameters given the data. Recall that for the multinomial model, the MLE \hat{\theta}_i given some data is defined by

\hat{\theta}_i = \frac{x_i}{n}.

Furthermore, we may represent each null hypothesis parameter \tilde{\theta}_i as

\tilde{\theta}_i = \frac{e_i}{n},

where e_i is the expected count for type i under the null hypothesis. Thus, by substituting the representations of \tilde{\theta} and \hat{\theta} in the log-likelihood ratio, the equation simplifies to

\ln\left(\frac{L(\tilde{\theta} \mid x)}{L(\hat{\theta} \mid x)}\right) = \ln \prod_{i=1}^m \left(\frac{e_i}{x_i}\right)^{x_i} = \sum_{i=1}^m x_i \ln\left(\frac{e_i}{x_i}\right).

Relabel the variables e_i with E_i and x_i with O_i. Finally, multiply by a factor of -2 (used to make the G-test formula asymptotically equivalent to the Pearson's chi-squared test formula) to achieve the form

G = -2 \sum_{i=1}^m O_i \ln\left(\frac{E_i}{O_i}\right) = 2 \sum_{i=1}^m O_i \ln\left(\frac{O_i}{E_i}\right).

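A quick numerical check of this derivation (a sketch, with made-up counts) is to compare -2 times the multinomial log-likelihood ratio with the relabelled formula for G:

```python
import numpy as np
from scipy.stats import multinomial

x = np.array([43, 52, 54, 40])                    # observed counts x_i (hypothetical)
n = x.sum()
theta_null = np.array([0.25, 0.25, 0.25, 0.25])   # null hypothesis parameters
theta_mle = x / n                                 # multinomial MLE

# -2 * log-likelihood ratio, computed from the multinomial likelihoods
G_llr = -2.0 * (multinomial.logpmf(x, n, theta_null) - multinomial.logpmf(x, n, theta_mle))

# Same statistic from the relabelled formula G = 2 * sum O_i ln(O_i / E_i)
E = n * theta_null
G_formula = 2.0 * np.sum(x * np.log(x / E))

print(G_llr, G_formula)   # the two values agree
```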
Distribution and usage

Given the null hypothesis that the observed frequencies result from random sampling from a distribution with the given expected frequencies, the distribution of G is approximately a chi-squared distribution, with the same number of degrees of freedom as in the corresponding chi-squared test.
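For example, in a goodness-of-fit test over k cells with fully specified expected frequencies, G is referred to a chi-squared distribution with k - 1 degrees of freedom. A minimal sketch, with illustrative counts:

```python
import numpy as np
from scipy.stats import chi2

observed = np.array([43, 52, 54, 40])       # hypothetical counts
expected = np.full(4, observed.sum() / 4)   # uniform null hypothesis

G = 2.0 * np.sum(observed * np.log(observed / expected))
df = len(observed) - 1                      # same degrees of freedom as the chi-squared test
p_value = chi2.sf(G, df)                    # upper-tail probability
print(G, p_value)
```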

For very small samples the multinomial test for goodness of fit, Fisher's exact test for contingency tables, or even Bayesian hypothesis selection are preferable to the G-test. [3] McDonald recommends always using an exact test (the exact test of goodness-of-fit, Fisher's exact test) when the total sample size is less than 1000.

There is nothing magical about a sample size of 1000, it's just a nice round number that is well within the range where an exact test, chi-square test and G–test will give almost identical P values. Spreadsheets, web-page calculators, and SAS shouldn't have any problem doing an exact test on a sample size of 1000.

John H. McDonald, Handbook of Biological Statistics

Relation to the chi-squared test

The commonly used chi-squared tests for goodness of fit to a distribution and for independence in contingency tables are in fact approximations of the log-likelihood ratio on which the G-tests are based. [4]

The general formula for Pearson's chi-squared test statistic is

\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}.
The approximation of G by chi-squared is obtained by a second-order Taylor expansion of the natural logarithm around 1. To see this, consider

G = 2\sum_i O_i \ln\left(\frac{O_i}{E_i}\right),

and let O_i = E_i + \delta_i with \sum_i \delta_i = 0, so that the total number of counts remains the same. Upon substitution we find

G = 2\sum_i (E_i + \delta_i) \ln\left(1 + \frac{\delta_i}{E_i}\right).

A Taylor expansion of the logarithm around 1 can be performed using \ln(1 + x) = x - \tfrac{1}{2}x^2 + \mathcal{O}(x^3). The result is

G = 2\sum_i (E_i + \delta_i) \left(\frac{\delta_i}{E_i} - \frac{1}{2}\frac{\delta_i^2}{E_i^2} + \mathcal{O}\!\left(\frac{\delta_i^3}{E_i^3}\right)\right),

and distributing terms we find

G = 2\sum_i \left(\delta_i + \frac{1}{2}\frac{\delta_i^2}{E_i}\right) + \mathcal{O}(\delta_i^3).

Now, using the fact that \sum_i \delta_i = 0 and \delta_i = O_i - E_i, we can write the result

G \approx \sum_i \frac{(O_i - E_i)^2}{E_i}.

This shows that G \approx \chi^2 when the observed counts O_i are close to the expected counts E_i. When this difference is large, however, the approximation begins to break down. Here, the effects of outliers in the data are more pronounced, and this explains why chi-squared tests fail in situations with little data.
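The quality of this approximation is easy to check numerically. In the sketch below (with arbitrary illustrative counts) the two statistics nearly coincide when the observed counts sit close to the expected ones, and diverge when they do not.

```python
import numpy as np

def g_stat(O, E):
    return 2.0 * np.sum(O * np.log(O / E))

def chi2_stat(O, E):
    return np.sum((O - E) ** 2 / E)

E = np.array([50.0, 50.0, 50.0, 50.0])

close = np.array([48.0, 53.0, 51.0, 48.0])   # observed close to expected
far   = np.array([90.0, 20.0, 60.0, 30.0])   # observed far from expected

print(g_stat(close, E), chi2_stat(close, E))  # nearly equal
print(g_stat(far, E),   chi2_stat(far, E))    # noticeably different
```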

For samples of a reasonable size, the G-test and the chi-squared test will lead to the same conclusions. However, the approximation to the theoretical chi-squared distribution for the G-test is better than for Pearson's chi-squared test. [5] In cases where O_i > 2 E_i for some cell, the G-test is always better than the chi-squared test.[ citation needed ]

For testing goodness-of-fit the G-test is infinitely more efficient than the chi squared test in the sense of Bahadur, but the two tests are equally efficient in the sense of Pitman or in the sense of Hodges and Lehmann. [6] [7]

Relation to Kullback–Leibler divergence

The G-test statistic is proportional to the Kullback–Leibler divergence of the theoretical distribution from the empirical distribution:

G = 2N \sum_i o_i \ln\left(\frac{o_i}{e_i}\right) = 2N\, D_{\mathrm{KL}}(o \| e),

where N is the total number of observations and o_i and e_i are the empirical and theoretical frequencies, respectively.
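This identity can be verified with scipy.stats.entropy, which returns the Kullback–Leibler divergence in nats when given two distributions; the counts here are purely illustrative.

```python
import numpy as np
from scipy.stats import entropy

observed = np.array([43, 52, 54, 40])        # hypothetical observed counts
N = observed.sum()
o = observed / N                             # empirical frequencies o_i
e = np.array([0.25, 0.25, 0.25, 0.25])       # theoretical frequencies e_i

G_direct = 2.0 * np.sum(observed * np.log(observed / (N * e)))
G_from_kl = 2.0 * N * entropy(o, e)          # 2 * N * D_KL(o || e)

print(G_direct, G_from_kl)                   # identical up to rounding
```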

Relation to mutual information

For analysis of contingency tables the value of G can also be expressed in terms of mutual information.

Let

N = \sum_{ij} O_{ij}, \quad \pi_{ij} = \frac{O_{ij}}{N}, \quad \pi_{i.} = \frac{\sum_j O_{ij}}{N}, \quad \text{and} \quad \pi_{.j} = \frac{\sum_i O_{ij}}{N}.

Then G can be expressed in several alternative forms:

G = 2N \sum_{ij} \pi_{ij} \left(\ln \pi_{ij} - \ln \pi_{i.} - \ln \pi_{.j}\right),

G = 2N \left[H(r) + H(c) - H(r, c)\right],

G = 2N\, \mathrm{MI}(r, c),

where the entropy of a discrete random variable X is defined as

H(X) = -\sum_{x \in \mathrm{Supp}(X)} p(x) \ln p(x),

and where

\mathrm{MI}(r, c) = H(r) + H(c) - H(r, c)

is the mutual information between the row vector r and the column vector c of the contingency table.
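For a two-way contingency table these identities can be checked directly from the cell proportions. The table below is hypothetical and the entropies are computed in nats.

```python
import numpy as np

# Hypothetical 2x2 contingency table of observed counts O_ij
O = np.array([[30.0, 10.0],
              [15.0, 45.0]])
N = O.sum()
pi = O / N                       # joint proportions pi_ij
pi_row = pi.sum(axis=1)          # row marginals pi_i.
pi_col = pi.sum(axis=0)          # column marginals pi_.j

def H(p):
    """Entropy (in nats) of a discrete distribution, ignoring empty cells."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# G = 2N * MI(r, c) = 2N * [H(r) + H(c) - H(r, c)]
MI = H(pi_row) + H(pi_col) - H(pi.ravel())
G_from_mi = 2.0 * N * MI

# Direct form: G = 2N * sum_ij pi_ij * (ln pi_ij - ln pi_i. - ln pi_.j)
G_direct = 2.0 * N * np.sum(
    pi * (np.log(pi) - np.log(pi_row)[:, None] - np.log(pi_col)[None, :])
)

print(G_from_mi, G_direct)       # equal up to rounding
```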

It can also be shown[ citation needed ] that the inverse document frequency weighting commonly used for text retrieval is an approximation of G applicable when the row sum for the query is much smaller than the row sum for the remainder of the corpus. Similarly, the result of Bayesian inference applied to a choice of single multinomial distribution for all rows of the contingency table taken together versus the more general alternative of a separate multinomial per row produces results very similar to the G statistic.[ citation needed ]

Application

Statistical software
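
The reference list points to two widely used implementations: scipy.stats.power_divergence in SciPy [12] and org.apache.commons.math3.stat.inference.GTest in Apache Commons Math [11]. A minimal SciPy sketch, using hypothetical counts; passing lambda_="log-likelihood" selects the G statistic rather than Pearson's chi-squared.

```python
import numpy as np
from scipy.stats import power_divergence, chi2_contingency

# Goodness-of-fit G-test: lambda_="log-likelihood" selects the G statistic
observed = np.array([43, 52, 54, 40])
G, p = power_divergence(observed, lambda_="log-likelihood")   # uniform expected by default
print(G, p)

# G-test of independence for a contingency table (no Yates continuity correction)
table = np.array([[30, 10],
                  [15, 45]])
G, p, dof, expected = chi2_contingency(table, correction=False, lambda_="log-likelihood")
print(G, p, dof)
```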


References

  1. McDonald, J. H. (2014). "G–test of goodness-of-fit". Handbook of Biological Statistics (3rd ed.). Baltimore, Maryland: Sparky House Publishing. pp. 53–58.
  2. Sokal, R. R.; Rohlf, F. J. (1981). Biometry: The Principles and Practice of Statistics in Biological Research (2nd ed.). New York: Freeman. ISBN 978-0-7167-2411-7.
  3. McDonald, J. H. (2014). "Small numbers in chi-square and G–tests". Handbook of Biological Statistics (3rd ed.). Baltimore, Maryland: Sparky House Publishing. pp. 86–89.
  4. Hoey, J. (2012). "The Two-Way Likelihood Ratio (G) Test and Comparison to Two-Way Chi-Squared Test". arXiv:1206.4881 [stat.ME].
  5. Harremoës, P.; Tusnády, G. (2012). "Information divergence is more chi squared distributed than the chi squared statistic". Proceedings ISIT 2012. pp. 538–543. arXiv:1202.1125. Bibcode:2012arXiv1202.1125H.
  6. Quine, M. P.; Robinson, J. (1985). "Efficiencies of chi-square and likelihood ratio goodness-of-fit tests". Annals of Statistics. 13 (2): 727–742. doi:10.1214/aos/1176349550.
  7. Harremoës, P.; Vajda, I. (2008). "On the Bahadur-efficient testing of uniformity by means of the entropy". IEEE Transactions on Information Theory. 54: 321–331. CiteSeerX 10.1.1.226.8051. doi:10.1109/tit.2007.911155.
  8. Dunning, Ted (1993). "Accurate Methods for the Statistics of Surprise and Coincidence". Computational Linguistics. 19 (1), March 1993.
  9. Fisher, R. A. (1929). "Tests of significance in harmonic analysis". Proceedings of the Royal Society of London A. 125 (796): 54–59. Bibcode:1929RSPSA.125...54F. doi:10.1098/rspa.1929.0151.
  10. "G-test of independence" and "G-test for goodness-of-fit", in McDonald, J. H. (2009). Handbook of Biological Statistics (2nd ed.). Baltimore, Maryland: Sparky House Publishing. pp. 46–51, 64–69. University of Delaware.
  11. Apache Commons Math: org.apache.commons.math3.stat.inference.GTest.
  12. SciPy documentation: scipy.stats.power_divergence. https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.power_divergence.html