Look-elsewhere effect


The look-elsewhere effect is a phenomenon in the statistical analysis of scientific experiments where an apparently statistically significant observation may have actually arisen by chance because of the sheer size of the parameter space to be searched.[1][2][3][4][5]


Once the possibility of look-elsewhere error in an analysis is acknowledged, it can be compensated for by careful application of standard mathematical techniques.[6][7][8]

The problem is more generally known in statistics as the multiple comparisons problem; the term "look-elsewhere effect" gained some media attention in 2011, in the context of the search for the Higgs boson at the Large Hadron Collider.[9]

Use

Many statistical tests deliver a p-value: the probability that a result at least as extreme as the one observed could be obtained by chance, assuming that the hypothesis one seeks to prove is in fact false (that is, assuming the null hypothesis is true). When asking "does X affect Y?", it is common to vary X and see whether there is significant variation in Y as a result. If this p-value is less than some predetermined statistical significance threshold α, one considers the result "significant".

However, if one is performing multiple tests ("looking elsewhere" if the first test fails), then a result with a p-value of 1/n is expected to occur by chance about once per n tests. For example, even when there is no real effect, an event with p < 0.05 will still occur, on average, once for every 20 tests performed. To compensate for this, one can divide the threshold α by the number of tests n, so that a result is declared significant only when p < α/n; or, equivalently, one can multiply the observed p-value by the number of tests (significant when np < α). This is the Bonferroni correction.

This is a simplified case; in general, n is the number of degrees of freedom of the search, that is, the number of effectively independent tests. If the tests are not fully independent of one another, this effective number may be lower than the number of tests actually performed.
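
As a rough numerical illustration of the correction described above, the following Python sketch (assuming NumPy and SciPy are available; the number of tests, sample sizes, and random seed are arbitrary) runs many tests on data that contain no real effect and compares the raw threshold α with the divided threshold α/n.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_tests = 100   # number of independent places we "look"
    alpha = 0.05    # per-test significance threshold

    # Every test compares two samples drawn from the same distribution, so the null
    # hypothesis is true everywhere and any "significant" result is a false positive.
    p_values = np.array([
        stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
        for _ in range(n_tests)
    ])

    print("hits with p < alpha:        ", np.sum(p_values < alpha))            # about 5 expected
    print("hits with p < alpha/n_tests:", np.sum(p_values < alpha / n_tests))  # about 0 expected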

The look-elsewhere effect is a frequent cause of "significance inflation" when the number of independent tests n is underestimated because failed tests are not published. One paper may fail to mention all of the alternative hypotheses that were considered, or a paper producing a null result may simply not be published at all, leading to journals dominated by statistical outliers.

Related Research Articles

Analysis of variance (ANOVA) is a collection of statistical models and their associated estimation procedures used to analyze the differences among means. ANOVA was developed by the statistician Ronald Fisher. ANOVA is based on the law of total variance, where the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether two or more population means are equal, and therefore generalizes the t-test beyond two means. In other words, ANOVA is used to test for differences among two or more means.
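
A minimal one-way ANOVA sketch in Python, assuming SciPy is available; the three groups of measurements are made up for illustration.

    from scipy import stats

    # Hypothetical measurements for three groups; the F-test asks whether the
    # group means differ by more than the within-group variation would explain.
    group_a = [5.1, 4.9, 5.3, 5.0]
    group_b = [5.6, 5.8, 5.4, 5.7]
    group_c = [5.0, 5.2, 4.8, 5.1]

    f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")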

Statistical hypothesis test

A statistical hypothesis test is a method of statistical inference used to decide whether the data sufficiently support a particular hypothesis. A statistical hypothesis test typically involves a calculation of a test statistic. Then a decision is made, either by comparing the test statistic to a critical value or equivalently by evaluating a p-value computed from the test statistic. Roughly 100 specialized statistical tests have been defined.
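
A small sketch of the two equivalent decision rules described above, using a two-sided z-test as a stand-in for whatever test is at hand (SciPy assumed available; the observed statistic is made up).

    from scipy.stats import norm

    z = 2.1        # hypothetical observed test statistic
    alpha = 0.05   # chosen significance level

    # Decision by comparing the statistic to a critical value...
    z_crit = norm.ppf(1 - alpha / 2)
    reject_by_critical_value = abs(z) > z_crit

    # ...or, equivalently, by evaluating a p-value.
    p_value = 2 * norm.sf(abs(z))
    reject_by_p_value = p_value < alpha

    print(reject_by_critical_value, reject_by_p_value)  # the two decisions agree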

Fine-structure constant

In physics, the fine-structure constant, also known as the Sommerfeld constant, commonly denoted by α, is a fundamental physical constant which quantifies the strength of the electromagnetic interaction between elementary charged particles.

In statistical hypothesis testing, a result has statistical significance when a result at least as "extreme" would be very infrequent if the null hypothesis were true. More precisely, a study's defined significance level, denoted by α, is the probability of the study rejecting the null hypothesis, given that the null hypothesis is true; and the p-value of a result, p, is the probability of obtaining a result at least as extreme, given that the null hypothesis is true. The result is statistically significant, by the standards of the study, when p ≤ α. The significance level for a study is chosen before data collection, and is typically set to 5% or much lower, depending on the field of study.

In statistics, the power of a binary hypothesis test is the probability that the test correctly rejects the null hypothesis when a specific alternative hypothesis is true. It is commonly denoted by 1 − β, and represents the chances of a true positive detection conditional on the actual existence of an effect to detect. Statistical power ranges from 0 to 1, and as the power of a test increases, the probability of making a type II error by wrongly failing to reject the null hypothesis decreases.
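
As a worked example, the power of a one-sided z-test with known standard deviation can be computed directly; this is a sketch assuming SciPy is available, with an arbitrary effect size and sample size.

    from scipy.stats import norm

    alpha = 0.05   # significance level
    effect = 0.5   # true mean shift under the alternative, in units of the standard deviation
    n = 30         # sample size

    # Power = Phi(effect * sqrt(n) - z_{1 - alpha}): probability of correctly rejecting H0.
    z_crit = norm.ppf(1 - alpha)
    power = norm.cdf(effect * n ** 0.5 - z_crit)
    print(f"power = {power:.2f}")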

The Bible code, also known as the Torah code, is a purported set of encoded words within a Hebrew text of the Torah that, according to proponents, has predicted significant historical events. The statistical likelihood of the Bible code arising by chance has been thoroughly researched, and it is now widely considered to be statistically insignificant, as similar phenomena can be observed in any sufficiently lengthy text. Although Bible codes have been postulated and studied for centuries, the subject has been popularized in modern times by Michael Drosnin's book The Bible Code (1997) and the movie The Omega Code (1999).

The Mann–Whitney U test is a nonparametric test of the null hypothesis that, for randomly selected values X and Y from two populations, the probability of X being greater than Y is equal to the probability of Y being greater than X.
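
A minimal sketch using SciPy's implementation of the test, with two made-up samples.

    from scipy.stats import mannwhitneyu

    # The test asks whether values from one sample tend to exceed values from the other.
    x = [1.8, 2.1, 2.5, 3.0, 2.2]
    y = [2.9, 3.4, 3.1, 3.8, 3.3]

    u_stat, p_value = mannwhitneyu(x, y, alternative="two-sided")
    print(f"U = {u_stat}, p = {p_value:.4f}")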

In statistics, an effect size is a value measuring the strength of the relationship between two variables in a population, or a sample-based estimate of that quantity. It can refer to the value of a statistic calculated from a sample of data, the value of a parameter for a hypothetical population, or to the equation that operationalizes how statistics or parameters lead to the effect size value. Examples of effect sizes include the correlation between two variables, the regression coefficient in a regression, the mean difference, or the risk of a particular event happening. Effect sizes complement statistical hypothesis testing, and play an important role in power analyses, sample size planning, and in meta-analyses. The cluster of data-analysis methods concerning effect sizes is referred to as estimation statistics.
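
One common effect size for a mean difference is Cohen's d, the difference between two means divided by the pooled standard deviation; a sketch using only the Python standard library, with made-up samples.

    import statistics

    group_1 = [23.1, 24.5, 22.8, 25.0, 23.9]
    group_2 = [21.0, 22.2, 20.8, 21.5, 22.0]

    # Cohen's d: standardized mean difference using the pooled sample standard deviation.
    n1, n2 = len(group_1), len(group_2)
    s1, s2 = statistics.stdev(group_1), statistics.stdev(group_2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    d = (statistics.mean(group_1) - statistics.mean(group_2)) / pooled_sd
    print(f"Cohen's d = {d:.2f}")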

Student's t-test is a statistical test used to test whether the difference between the response of two groups is statistically significant or not. It is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis. It is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known. When the scaling term is estimated based on the data, the test statistic—under certain conditions—follows a Student's t distribution. The t-test's most common application is to test whether the means of two populations are significantly different. In many cases, a Z-test will yield very similar results to a t-test since the latter converges to the former as the size of the dataset increases.
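
A two-sample t-test sketch, assuming SciPy is available and using made-up group responses.

    from scipy.stats import ttest_ind

    treatment = [23.1, 24.5, 22.8, 25.0, 23.9]
    control = [21.0, 22.2, 20.8, 21.5, 22.0]

    # Tests whether the two group means differ by more than sampling noise would explain.
    t_stat, p_value = ttest_ind(treatment, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")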

In null-hypothesis significance testing, the p-value is the probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct. A very small p-value means that such an extreme observed outcome would be very unlikely under the null hypothesis. Even though reporting p-values of statistical tests is common practice in academic publications of many quantitative fields, misinterpretation and misuse of p-values is widespread and has been a major topic in mathematics and metascience. In 2016, the American Statistical Association (ASA) made a formal statement that "p-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone" and that "a p-value, or statistical significance, does not measure the size of an effect or the importance of a result" or "evidence regarding a model or hypothesis." That said, a 2019 ASA task force issued a statement on statistical significance and replicability, concluding: "p-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data."
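
The definition can be illustrated by Monte Carlo simulation: draw the test statistic many times under the null hypothesis and count how often a value at least as extreme as the observed one occurs. This sketch assumes NumPy and a standard normal null distribution; the observed statistic is made up.

    import numpy as np

    rng = np.random.default_rng(0)
    observed = 2.1  # hypothetical observed test statistic

    # Fraction of null-hypothesis draws at least as extreme (two-sided) as the observation.
    null_draws = rng.standard_normal(1_000_000)
    p_value = np.mean(np.abs(null_draws) >= abs(observed))
    print(f"p is approximately {p_value:.4f}")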

Data dredging

Data dredging is the misuse of data analysis to find patterns in data that can be presented as statistically significant, thus dramatically increasing the risk of false positives while understating it. This is done by performing many statistical tests on the data and only reporting those that come back with significant results.

A permutation test is an exact statistical hypothesis test that makes use of proof by contradiction. A permutation test involves two or more samples. The null hypothesis is that all samples come from the same distribution. Under the null hypothesis, the distribution of the test statistic is obtained by calculating all possible values of the test statistic under all possible rearrangements of the observed data. Permutation tests are, therefore, a form of resampling.
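
A sketch of a two-sample permutation test on the difference of means; for speed it samples random rearrangements rather than enumerating all of them, so the resulting p-value is approximate. Only the Python standard library is used, and the input samples are made up.

    import random

    def permutation_test(x, y, n_permutations=10000, seed=0):
        """Approximate permutation test of equal distributions, using the mean difference."""
        rng = random.Random(seed)
        observed = abs(sum(x) / len(x) - sum(y) / len(y))
        pooled = list(x) + list(y)
        count = 0
        for _ in range(n_permutations):
            rng.shuffle(pooled)                                  # random rearrangement
            new_x, new_y = pooled[:len(x)], pooled[len(x):]
            diff = abs(sum(new_x) / len(new_x) - sum(new_y) / len(new_y))
            if diff >= observed:                                 # at least as extreme
                count += 1
        return count / n_permutations

    print(permutation_test([1.2, 1.5, 1.1, 1.7], [2.0, 2.3, 1.9, 2.4]))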

In statistics, the false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. FDR-controlling procedures are designed to control the FDR, which is the expected proportion of "discoveries" that are false. Equivalently, the FDR is the expected ratio of the number of false positive classifications to the total number of positive classifications. The total number of rejections of the null includes both the number of false positives (FP) and true positives (TP). Simply put, FDR = FP / (FP + TP). FDR-controlling procedures provide less stringent control of Type I errors compared to family-wise error rate (FWER) controlling procedures, which control the probability of at least one Type I error. Thus, FDR-controlling procedures have greater power, at the cost of increased numbers of Type I errors.
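
A sketch of the Benjamini–Hochberg step-up procedure, a standard FDR-controlling method; the list of p-values is made up for illustration.

    def benjamini_hochberg(p_values, q=0.05):
        """Return the indices of hypotheses rejected at FDR level q."""
        m = len(p_values)
        order = sorted(range(m), key=lambda i: p_values[i])  # indices by ascending p-value
        k_max = 0
        for rank, i in enumerate(order, start=1):
            if p_values[i] <= rank * q / m:                   # BH step-up condition
                k_max = rank
        return sorted(order[:k_max])                          # reject the k_max smallest p-values

    print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.60]))  # -> [0, 1]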

Omnibus tests are a kind of statistical test. They test whether the explained variance in a set of data is significantly greater than the unexplained variance, overall. One example is the F-test in the analysis of variance. There can be legitimate significant effects within a model even if the omnibus test is not significant. For instance, in a model with two independent variables, if only one variable exerts a significant effect on the dependent variable and the other does not, then the omnibus test may be non-significant. This fact does not affect the conclusions that may be drawn from the one significant variable. In order to test effects within an omnibus test, researchers often use contrasts.

In statistics, the Bonferroni correction is a method to counteract the multiple comparisons problem.

Multiple comparisons problem

In statistics, the multiple comparisons, multiplicity or multiple testing problem occurs when one considers a set of statistical inferences simultaneously or estimates a subset of parameters selected based on the observed values.
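
The core arithmetic of the problem: for n independent tests each run at level α under a true null hypothesis, the chance of at least one false positive is 1 − (1 − α)^n, which grows quickly with n. A short Python sketch:

    # Family-wise false-positive probability for n independent tests at per-test level alpha,
    # assuming the null hypothesis is true in every test.
    alpha = 0.05
    for n in (1, 5, 20, 100):
        fwer = 1 - (1 - alpha) ** n
        print(f"n = {n:3d}: P(at least one false positive) = {fwer:.2f}")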

Tukey's range test, also known as Tukey's test, Tukey method, Tukey's honest significance test, or Tukey's HSD test, is a single-step multiple comparison procedure and statistical test. It can be used to correctly interpret the statistical significance of the difference between means that have been selected for comparison because of their extreme values.

In statistics, the Šidák correction, or Dunn–Šidák correction, is a method used to counteract the problem of multiple comparisons. It is a simple method to control the family-wise error rate. When all null hypotheses are true, the method provides familywise error control that is exact for tests that are stochastically independent, conservative for tests that are positively dependent, and liberal for tests that are negatively dependent. It is credited to a 1967 paper by the statistician and probabilist Zbyněk Šidák. The Šidák method can be used to determine statistical significance and to compute adjusted p-values and confidence intervals.
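
A sketch of the Šidák-adjusted per-test threshold, compared with the simpler Bonferroni value α/n; the family-wise level and number of tests are arbitrary.

    # Sidak-adjusted per-test threshold for n independent tests at family-wise level alpha.
    alpha = 0.05
    n = 20
    alpha_sidak = 1 - (1 - alpha) ** (1 / n)
    alpha_bonferroni = alpha / n
    print(f"Sidak: {alpha_sidak:.5f}   Bonferroni: {alpha_bonferroni:.5f}")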

A false positive is an error in binary classification in which a test result incorrectly indicates the presence of a condition, while a false negative is the opposite error, where the test result incorrectly indicates the absence of a condition when it is actually present. These are the two kinds of errors in a binary test, in contrast to the two kinds of correct result. They are also known in medicine as a false positive diagnosis, and in statistical classification as a false positive error.

Misuse of p-values is common in scientific research and scientific education. p-values are often used or interpreted incorrectly; the American Statistical Association states that p-values can indicate how incompatible the data are with a specified statistical model. From a Neyman–Pearson hypothesis testing approach to statistical inferences, the data obtained by comparing the p-value to a significance level will yield one of two results: either the null hypothesis is rejected, or the null hypothesis cannot be rejected at that significance level. From a Fisherian statistical testing approach to statistical inferences, a low p-value means either that the null hypothesis is true and a highly improbable event has occurred or that the null hypothesis is false.

References

  1. Lyons, L. (2008). "Open statistical issues in Particle Physics". The Annals of Applied Statistics. 2 (3): 887. arXiv:0811.1663. doi:10.1214/08-AOAS163.
  2. "Synopsis: Controlling for the "look-elsewhere effect"". American Physical Society. 2011.
  3. White, Lori Ann (August 12, 2011). "Word of the Week: Look Elsewhere Effect". Stanford National Accelerator Laboratory. Archived from the original on April 19, 2012.
  4. Dorigo, Tommaso (2009-10-16). "Supernatural Coincidences And The Look-Elsewhere Effect". Retrieved 2012-10-17.
  5. Dorigo, Tommaso (2011-08-19). "Should you get excited by your data? Let the Look-Elsewhere Effect decide". CMS Collaboration.
  6. Gross, E.; Vitells, O. (2010). "Trial factors for the look elsewhere effect in high energy physics". The European Physical Journal C. 70: 525. arXiv:1005.1891. Bibcode:2010EPJC...70..525G. doi:10.1140/epjc/s10052-010-1470-8.
  7. Bayer, Adrian E.; Seljak, Uroš (2020). "The look-elsewhere effect from a unified Bayesian and frequentist perspective". Journal of Cosmology and Astroparticle Physics. 2020 (10): 009–009. arXiv:2007.13821. doi:10.1088/1475-7516/2020/10/009.
  8. Bayer, Adrian E.; Seljak, Uroš; Robnik, Jakob (2021). "Self-Calibrating the Look-Elsewhere Effect: Fast Evaluation of the Statistical Significance Using Peak Heights". Monthly Notices of the Royal Astronomical Society. 508 (1): 1346–1357. arXiv:2108.06333. doi:10.1093/mnras/stab2331.
  9. Chivers, Tom (2011-12-13). "An unconfirmed sighting of the elusive Higgs boson". Daily Telegraph. Archived from the original on 2011-12-17.
  10. Palfreman, Jon (1995-06-13). "Currents of fear". Frontline. PBS. Retrieved 2012-07-01.
  11. Thomas, Dave (1997-11-01). "Hidden Messages and The Bible Code". Skeptical Inquirer. CSICOP. Retrieved 2015-04-19.