Bayes factor


In statistics, the use of Bayes factors is a Bayesian alternative to classical hypothesis testing. [1] Bayesian model comparison is a method of model selection based on Bayes factors. The models under consideration are statistical models. [2] The aim of the Bayes factor is to quantify the support for one model over another, regardless of whether these models are correct. [3] The technical definition of "support" in the context of Bayesian inference is described below.


Definition

The Bayes factor is a ratio of the marginal likelihoods of two competing hypotheses, usually a null and an alternative. [4]

The posterior probability of a model M given data D is given by Bayes' theorem:

$$\Pr(M \mid D) = \frac{\Pr(D \mid M)\,\Pr(M)}{\Pr(D)}.$$

The key data-dependent term $\Pr(D \mid M)$ represents the probability that some data are produced under the assumption of the model M; evaluating it correctly is the key to Bayesian model comparison.

Given a model selection problem in which we have to choose between two models on the basis of observed data D, the plausibility of the two different models M1 and M2, parametrised by model parameter vectors $\theta_1$ and $\theta_2$, is assessed by the Bayes factor K given by

$$K = \frac{\Pr(D \mid M_1)}{\Pr(D \mid M_2)} = \frac{\int \Pr(\theta_1 \mid M_1)\,\Pr(D \mid \theta_1, M_1)\,d\theta_1}{\int \Pr(\theta_2 \mid M_2)\,\Pr(D \mid \theta_2, M_2)\,d\theta_2} = \frac{\Pr(M_1 \mid D)}{\Pr(M_2 \mid D)}\,\frac{\Pr(M_2)}{\Pr(M_1)}.$$

When the two models have equal prior probability, so that $\Pr(M_1) = \Pr(M_2)$, the Bayes factor is equal to the ratio of the posterior probabilities of M1 and M2. If instead of the Bayes factor integral, the likelihood corresponding to the maximum likelihood estimate of the parameter for each statistical model is used, then the test becomes a classical likelihood-ratio test. Unlike a likelihood-ratio test, this Bayesian model comparison does not depend on any single set of parameters, as it integrates over all parameters in each model (with respect to the respective priors). However, an advantage of the use of Bayes factors is that it automatically, and quite naturally, includes a penalty for including too much model structure. [5] It thus guards against overfitting. For models where an explicit version of the likelihood is not available or too costly to evaluate numerically, approximate Bayesian computation can be used for model selection in a Bayesian framework, [6] with the caveat that approximate-Bayesian estimates of Bayes factors are often biased. [7]
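To make the integrals above concrete, here is a minimal Python sketch (not from the original article; the Gaussian toy models, the single observation x = 1.5, and all names are illustrative assumptions) that estimates each marginal likelihood by averaging the likelihood over draws from its prior:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
x = 1.5  # a single observed data point (assumed for illustration)

# M1: x ~ N(0, 1) with no free parameters, so Pr(D | M1) is just the density at x.
m1_evidence = norm.pdf(x, loc=0, scale=1)

# M2: x ~ N(mu, 1) with prior mu ~ N(0, 1). Approximate the integral
# Pr(D | M2) = ∫ Pr(mu | M2) Pr(D | mu, M2) dmu by Monte Carlo over the prior.
mu_draws = rng.normal(0, 1, size=200_000)
m2_evidence = norm.pdf(x, loc=mu_draws, scale=1).mean()

K = m1_evidence / m2_evidence
print(f"Pr(D|M1) = {m1_evidence:.4f}, Pr(D|M2) = {m2_evidence:.4f}, K = {K:.2f}")
# Sanity check: under M2 the marginal distribution of x is N(0, 2),
# so Pr(D | M2) = norm.pdf(1.5, 0, sqrt(2)) ≈ 0.161, giving K ≈ 0.81.
```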

Other approaches are:

Interpretation

A value of K > 1 means that M1 is more strongly supported by the data under consideration than M2. Note that classical hypothesis testing gives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidence against it. Harold Jeffreys gave a scale for interpretation of K: [8]

style="text-align: center; margin-left: auto; margin-right: auto; border: none;"

KdHartbitsStrength of evidence
< 100< 0< 0Negative (supports M2)
100 to 101/20 to 50 to 1.6Barely worth mentioning
101/2 to 1015 to 101.6 to 3.3Substantial
101 to 103/210 to 153.3 to 5.0Strong
103/2 to 10215 to 205.0 to 6.6Very strong
> 102> 20> 6.6Decisive

The second column gives the corresponding weights of evidence in decihartleys (also known as decibans); bits are added in the third column for clarity. According to I. J. Good a change in a weight of evidence of 1 deciban or 1/3 of a bit (i.e. a change in an odds ratio from evens to about 5:4) is about as finely as humans can reasonably perceive their degree of belief in a hypothesis in everyday use. [9]
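In code, these weights of evidence are simple logarithms of K; a one-line conversion (a sketch, with an assumed function name) reproduces Good's 5:4 figure:

```python
import math

def weight_of_evidence(K):
    """Convert a Bayes factor K to decihartleys (decibans) and bits."""
    return 10 * math.log10(K), math.log2(K)

dhart, bits = weight_of_evidence(5 / 4)        # odds of about 5:4
print(f"{dhart:.2f} dHart = {bits:.2f} bits")  # ~0.97 dHart, ~0.32 bits
```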

An alternative table, widely cited, is provided by Kass and Raftery (1995): [5]

style="text-align: center; margin-left: auto; margin-right: auto; border: none;"

log10KKStrength of evidence
0 to 1/21 to 3.2Not worth more than a bare mention
1/2 to 13.2 to 10Substantial
1 to 210 to 100Strong
> 2> 100Decisive

Example

Suppose we have a random variable that produces either a success or a failure. We want to compare a model M1 where the probability of success is q = ½, and another model M2 where q is unknown and we take a prior distribution for q that is uniform on [0,1]. We take a sample of 200, and find 115 successes and 85 failures. The likelihood can be calculated according to the binomial distribution:

$$\Pr(X = 115 \mid q) = \binom{200}{115} q^{115} (1 - q)^{85}.$$

Thus we have for M1

$$\Pr(X = 115 \mid M_1) = \binom{200}{115} \left(\tfrac{1}{2}\right)^{200} \approx 0.005956,$$

whereas for M2 we have

$$\Pr(X = 115 \mid M_2) = \int_0^1 \binom{200}{115} q^{115} (1 - q)^{85} \, dq = \frac{1}{201} \approx 0.004975.$$
The ratio is then 1.2, which is "barely worth mentioning" even if it points very slightly towards M1.
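The numbers in this example are easy to reproduce; the sketch below (variable names are our own) computes Pr(D | M1) with the binomial PMF and Pr(D | M2) by numerically integrating the likelihood against the uniform prior:

```python
from scipy.stats import binom
from scipy.integrate import quad

n, k = 200, 115

# Pr(D | M1): the success probability q is fixed at 1/2.
m1 = binom.pmf(k, n, 0.5)

# Pr(D | M2): integrate the binomial likelihood over the uniform prior on q.
m2, _ = quad(lambda q: binom.pmf(k, n, q), 0, 1)  # exact value is 1/201

print(f"Pr(D|M1) = {m1:.6f}")  # 0.005956
print(f"Pr(D|M2) = {m2:.6f}")  # 0.004975
print(f"K = {m1 / m2:.3f}")    # ~1.197: 'barely worth mentioning'
```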

A frequentist hypothesis test of M1 (here considered as a null hypothesis) would have produced a very different result. Such a test says that M1 should be rejected at the 5% significance level, since the probability of getting 115 or more successes from a sample of 200 if q = ½ is 0.02, and the two-tailed probability of getting a figure as extreme as or more extreme than 115 is 0.04. Note that 115 is more than two standard deviations away from 100. Thus, whereas a frequentist hypothesis test would yield significant results at the 5% significance level, the Bayes factor hardly considers this to be an extreme result. Note, however, that a non-uniform prior (for example, one that reflects the fact that you expect the numbers of successes and failures to be of the same order of magnitude) could result in a Bayes factor that is more in agreement with the frequentist hypothesis test.
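The tail probabilities quoted above can be checked directly (a sketch; binom.sf gives the upper-tail probability P(X > k)):

```python
from scipy.stats import binom

# One-sided: probability of 115 or more successes in 200 trials when q = 1/2.
p_one = binom.sf(114, 200, 0.5)          # P(X >= 115) ≈ 0.020

# Two-sided, by the symmetry of Binomial(200, 1/2) around 100.
p_two = p_one + binom.cdf(85, 200, 0.5)  # P(X <= 85 or X >= 115) ≈ 0.040
print(f"one-sided p = {p_one:.3f}, two-sided p = {p_two:.3f}")
```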

A classical likelihood-ratio test would have found the maximum likelihood estimate for q, namely $\hat{q} = 115/200 = 0.575$, whence

$$\Pr(X = 115 \mid q = 0.575) = \binom{200}{115} (0.575)^{115} (0.425)^{85} \approx 0.056991$$

(rather than averaging over all possible q). That gives a likelihood ratio of about 0.1 and points towards M2.

M2 is a more complex model than M1 because it has a free parameter which allows it to model the data more closely. The ability of Bayes factors to take this into account is a reason why Bayesian inference has been put forward as a theoretical justification for and generalisation of Occam's razor, reducing Type I errors. [10]

On the other hand, the modern method of relative likelihood takes into account the number of free parameters in the models, unlike the classical likelihood ratio. The relative likelihood method could be applied as follows. Model M1 has 0 parameters, and so its AIC value is 2·0 − 2·ln(0.005956) = 10.2467. Model M2 has 1 parameter, and so its AIC value is 2·1 − 2·ln(0.056991) = 7.7297. Hence M1 is about exp((7.7297 − 10.2467)/2) = 0.284 times as probable as M2 to minimize the information loss. Thus M2 is slightly preferred, but M1 cannot be excluded.
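A quick sketch of this arithmetic, plugging in the two maximized likelihoods quoted above:

```python
import math

L1 = 0.005956  # M1: likelihood with q fixed at 1/2 (0 free parameters)
L2 = 0.056991  # M2: likelihood at the MLE q = 0.575 (1 free parameter)

aic1 = 2 * 0 - 2 * math.log(L1)  # AIC = 2k - 2 ln(L) -> 10.2467
aic2 = 2 * 1 - 2 * math.log(L2)  # -> 7.7297

# Relative likelihood of M1 versus the AIC-minimizing model M2.
print(math.exp((aic2 - aic1) / 2))  # ~0.284
```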


References

  1. Ly, Alexander; et al. (2020). "The Bayesian Methodology of Sir Harold Jeffreys as a Practical Alternative to the P Value Hypothesis Test". Computational Brain & Behavior. 3: 153–161. doi: 10.1007/s42113-019-00070-x .
  2. Morey, Richard D.; Romeijn, Jan-Willem; Rouder, Jeffrey N. (2016). "The philosophy of Bayes factors and the quantification of statistical evidence". Journal of Mathematical Psychology. 72: 6–18. doi: 10.1016/j.jmp.2015.11.001 .
  3. Ly, Alexander; Verhagen, Josine; Wagenmakers, Eric-Jan (2016). "Harold Jeffreys's default Bayes factor hypothesis tests: Explanation, extension, and application in psychology" (PDF). Journal of Mathematical Psychology. 72: 19–32. doi:10.1016/j.jmp.2015.06.004.
  4. Good, Phillip; Hardin, James (July 23, 2012). Common errors in statistics (and how to avoid them) (4th ed.). Hoboken, New Jersey: John Wiley & Sons, Inc. pp. 129–131. ISBN   978-1118294390.
  5. 1 2 Robert E. Kass & Adrian E. Raftery (1995). "Bayes Factors" (PDF). Journal of the American Statistical Association. 90 (430): 791. doi:10.2307/2291091. JSTOR   2291091.
  6. Toni, T.; Stumpf, M.P.H. (2009). "Simulation-based model selection for dynamical systems in systems and population biology". Bioinformatics. 26 (1): 104–10. arXiv: 0911.1705 . doi:10.1093/bioinformatics/btp619. PMC   2796821 . PMID   19880371.
  7. Robert, C.P.; J. Cornuet; J. Marin & N.S. Pillai (2011). "Lack of confidence in approximate Bayesian computation model choice". Proceedings of the National Academy of Sciences. 108 (37): 15112–15117. Bibcode:2011PNAS..10815112R. doi: 10.1073/pnas.1102900108 . PMC   3174657 . PMID   21876135.
  8. Jeffreys, Harold (1998) [1961]. The Theory of Probability (3rd ed.). Oxford, England. p. 432. ISBN   9780191589676.
  9. Good, I.J. (1979). "Studies in the History of Probability and Statistics. XXXVII A. M. Turing's statistical work in World War II". Biometrika . 66 (2): 393–396. doi:10.1093/biomet/66.2.393. MR   0548210.
  10. Jefferys, William H.; Berger, James O. (1991). "Sharpening Ockham's Razor on a Bayesian Strop" (technical report). Purdue University.
