Bonferroni correction

In statistics, the Bonferroni correction is a method to counteract the multiple comparisons problem.

Background

The method is named for its use of the Bonferroni inequalities.[1] Application of the method to confidence intervals was described by Olive Jean Dunn.[2]

Statistical hypothesis testing is based on rejecting the null hypothesis when the likelihood of the observed data would be low if the null hypothesis were true. If multiple hypotheses are tested, the probability of observing a rare event increases, and therefore, the likelihood of incorrectly rejecting a null hypothesis (i.e., making a Type I error) increases.[3]

The Bonferroni correction compensates for that increase by testing each individual hypothesis at a significance level of $\alpha/m$, where $\alpha$ is the desired overall alpha level and $m$ is the number of hypotheses.[4] For example, if a trial is testing $m = 20$ hypotheses with a desired overall $\alpha = 0.05$, then the Bonferroni correction would test each individual hypothesis at $\alpha/m = 0.05/20 = 0.0025$. Similarly, when constructing $m$ confidence intervals for parameters, each individual confidence interval can be computed at the $1 - \alpha/m$ confidence level to achieve an overall confidence level of $1 - \alpha$.

The Bonferroni correction can also be applied as a p-value adjustment: instead of adjusting the alpha level, each p-value is multiplied by the number of tests (with adjusted p-values that exceed 1 then being set to 1), and the alpha level is left unchanged. The significance decisions under this approach are identical to those under the alpha-level adjustment approach.
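
To make the equivalence concrete, the following sketch (plain Python, with made-up p-values rather than data from any real study) applies both forms of the correction and checks that they reach the same decisions:

```python
# Bonferroni correction, two equivalent forms.
# The p-values below are illustrative only.

p_values = [0.001, 0.012, 0.049, 0.2]
alpha = 0.05
m = len(p_values)

# Form 1: compare each raw p-value against the adjusted level alpha/m.
reject_adjusted_alpha = [p <= alpha / m for p in p_values]

# Form 2: multiply each p-value by m (capping at 1) and compare
# against the unadjusted alpha.
adjusted_p = [min(p * m, 1.0) for p in p_values]
reject_adjusted_p = [p <= alpha for p in adjusted_p]

assert reject_adjusted_alpha == reject_adjusted_p
print(reject_adjusted_alpha)  # [True, True, False, False]
```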

Definition

Let $H_1, \ldots, H_m$ be a family of null hypotheses and let $p_1, \ldots, p_m$ be their corresponding p-values. Let $m$ be the total number of null hypotheses, and let $m_0$ be the number of true null hypotheses (which is presumably unknown to the researcher). The family-wise error rate (FWER) is the probability of rejecting at least one true $H_i$, that is, of making at least one type I error. The Bonferroni correction rejects the null hypothesis for each $p_i \leq \alpha/m$, thereby controlling the FWER at $\leq \alpha$. Proof of this control follows from Boole's inequality, as follows:

$$\mathrm{FWER} = P\left\{\bigcup_{i=1}^{m_0}\left(p_i \leq \frac{\alpha}{m}\right)\right\} \leq \sum_{i=1}^{m_0} P\left(p_i \leq \frac{\alpha}{m}\right) \leq m_0\,\frac{\alpha}{m} \leq m\,\frac{\alpha}{m} = \alpha.$$

This control does not require any assumptions about dependence among the p-values or about how many of the null hypotheses are true.[5]
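
The bound can also be checked empirically. The following Monte Carlo sketch (pure Python; the choices of $m$, $\alpha$, and the number of trials are arbitrary) simulates $m$ true null hypotheses with independent uniform p-values, so that every rejection is a type I error:

```python
import random

random.seed(0)
m, alpha, n_trials = 20, 0.05, 100_000

uncorrected_errors = 0
bonferroni_errors = 0
for _ in range(n_trials):
    # Under a true null hypothesis, the p-value is uniform on [0, 1].
    p = [random.random() for _ in range(m)]
    uncorrected_errors += any(pi <= alpha for pi in p)
    bonferroni_errors += any(pi <= alpha / m for pi in p)

# Without correction the FWER is roughly 1 - 0.95**20, about 0.64;
# with the Bonferroni correction it stays at or below alpha.
print(uncorrected_errors / n_trials)
print(bonferroni_errors / n_trials)
```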

Extensions

Generalization

Rather than testing each hypothesis at the $\alpha/m$ level, the hypotheses may be tested at any other combination of levels that add up to $\alpha$, provided that the level of each test is decided before looking at the data.[6] For example, for two hypothesis tests, an overall $\alpha$ of 0.05 could be maintained by conducting one test at 0.04 and the other at 0.01.
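
As an illustrative sketch of that allocation (hypothetical p-values), note that the decision for each test depends on which pre-assigned level it receives:

```python
alpha = 0.05
levels = [0.04, 0.01]    # fixed before seeing the data; sums to alpha
p_values = [0.03, 0.02]  # illustrative p-values, one per test

rejections = [p <= level for p, level in zip(p_values, levels)]
print(rejections)  # [True, False]: 0.02 fails its stricter 0.01 level
```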

Confidence intervals

The procedure proposed by Dunn[2] can be used to adjust confidence intervals. If one establishes $m$ confidence intervals, and wishes to have an overall confidence level of $1 - \alpha$, each individual confidence interval can be adjusted to the level of $1 - \alpha/m$.[2]
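
A minimal sketch of that adjustment, assuming normal-theory intervals (a z rather than t critical value) and made-up data for $m = 3$ parameters:

```python
from math import sqrt
from statistics import mean, stdev

from scipy.stats import norm  # SciPy assumed available

samples = [
    [5.1, 4.9, 5.3, 5.0],
    [7.2, 7.0, 7.4, 7.1],
    [3.3, 3.6, 3.1, 3.4],
]  # illustrative data, one sample per parameter
alpha = 0.05
m = len(samples)

# Each interval uses level 1 - alpha/m, so the overall level is >= 1 - alpha.
z = norm.ppf(1 - alpha / (2 * m))  # two-sided critical value

for x in samples:
    half_width = z * stdev(x) / sqrt(len(x))
    print(mean(x) - half_width, mean(x) + half_width)
```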

Continuous problems

When searching for a signal in a continuous parameter space there can also be a problem of multiple comparisons, or look-elsewhere effect. For example, a physicist might be looking to discover a particle of unknown mass by considering a large range of masses; this was the case during the Nobel Prize-winning detection of the Higgs boson. In such cases, one can apply a continuous generalization of the Bonferroni correction by employing Bayesian logic to relate the effective number of trials, $m$, to the prior-to-posterior volume ratio.[7]

Alternatives

There are alternative ways to control the family-wise error rate. For example, the Holm–Bonferroni method and the Šidák correction are uniformly more powerful procedures than the Bonferroni correction, meaning that they are always at least as powerful. But unlike the Bonferroni procedure, these methods do not control the expected number of Type I errors per family (the per-family Type I error rate).[8]
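
For comparison, a minimal sketch of the Holm–Bonferroni step-down procedure with illustrative p-values; with these values the plain Bonferroni correction would reject only the first hypothesis, while Holm's method also rejects the second:

```python
alpha = 0.05
p_values = [0.010, 0.013, 0.044, 0.14]  # illustrative only
m = len(p_values)

# Step down: compare the k-th smallest p-value against alpha / (m - k).
order = sorted(range(m), key=lambda i: p_values[i])
reject = [False] * m
for k, i in enumerate(order):
    if p_values[i] <= alpha / (m - k):
        reject[i] = True
    else:
        break  # first failure: retain this and all larger p-values

print(reject)  # [True, True, False, False]
```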

Criticism

With respect to FWER control, the Bonferroni correction can be conservative if there are a large number of tests and/or the test statistics are positively correlated.[9]

The correction comes at the cost of increasing the probability of producing false negatives, i.e., reducing statistical power.[10][9] There is not a definitive consensus on how to define a family in all cases, and adjusted test results may vary depending on the number of tests included in the family of hypotheses.[citation needed] Such criticisms apply to FWER control in general, and are not specific to the Bonferroni correction.

References

  1. Bonferroni, C. E. (1936). Teoria statistica delle classi e calcolo delle probabilità. Pubblicazioni del R Istituto Superiore di Scienze Economiche e Commerciali di Firenze.
  2. Dunn, Olive Jean (1961). "Multiple Comparisons Among Means" (PDF). Journal of the American Statistical Association. 56 (293): 52–64. CiteSeerX 10.1.1.309.1277. doi:10.1080/01621459.1961.10482090.
  3. Mittelhammer, Ron C.; Judge, George G.; Miller, Douglas J. (2000). Econometric Foundations. Cambridge University Press. pp. 73–74. ISBN 978-0-521-62394-0.
  4. Miller, Rupert G. (1966). Simultaneous Statistical Inference. Springer. ISBN 9781461381228.
  5. Goeman, Jelle J.; Solari, Aldo (2014). "Multiple Hypothesis Testing in Genomics". Statistics in Medicine. 33 (11): 1946–1978. doi:10.1002/sim.6082. PMID 24399688. S2CID 22086583.
  6. Neuwald, A. F.; Green, P. (1994). "Detecting patterns in protein sequences". Journal of Molecular Biology. 239 (5): 698–712. doi:10.1006/jmbi.1994.1407. PMID 8014990.
  7. Bayer, Adrian E.; Seljak, Uroš (2020). "The look-elsewhere effect from a unified Bayesian and frequentist perspective". Journal of Cosmology and Astroparticle Physics. 2020 (10): 009. arXiv:2007.13821. doi:10.1088/1475-7516/2020/10/009. S2CID 220830693.
  8. Frane, Andrew (2015). "Are per-family Type I error rates relevant in social and behavioral science?". Journal of Modern Applied Statistical Methods. 14 (1): 12–23. doi:10.22237/jmasm/1430453040.
  9. Moran, Matthew (2003). "Arguments for rejecting the sequential Bonferroni in ecological studies". Oikos. 100 (2): 403–405. doi:10.1034/j.1600-0706.2003.12010.x.
  10. Nakagawa, Shinichi (2004). "A farewell to Bonferroni: the problems of low statistical power and publication bias". Behavioral Ecology. 15 (6): 1044–1045. doi:10.1093/beheco/arh107.