Estimation statistics

Estimation statistics, or simply estimation, is a data analysis framework that uses a combination of effect sizes, confidence intervals, precision planning, and meta-analysis to plan experiments, analyze data, and interpret results. [1] It complements hypothesis testing approaches such as null hypothesis significance testing (NHST) by going beyond the question of whether an effect is present or not and providing information about how large the effect is. [2] [3] Estimation statistics is sometimes referred to as the new statistics. [3] [4] [5]

The primary aim of estimation methods is to report an effect size (a point estimate) along with its confidence interval, the latter of which is related to the precision of the estimate. [6] The confidence interval summarizes a range of likely values of the underlying population effect. Proponents of estimation see reporting a p-value as an unhelpful distraction from the important business of reporting an effect size with its confidence interval, [7] and believe that estimation should replace significance testing for data analysis. [8] [9]

History

Starting in 1929, physicist Raymond Thayer Birge published review papers [10] in which he used weighted-averages methods to calculate estimates of physical constants, a procedure that can be seen as the precursor to modern meta-analysis. [11]

In the 1960s, estimation statistics was adopted by the non-physical sciences with the development of the standardized effect size by Jacob Cohen.

In the 1970s, modern research synthesis was pioneered by Gene V. Glass with the first systematic review and meta-analysis for psychotherapy. [12] This pioneering work subsequently influenced the adoption of meta-analyses for medical treatments more generally.

In the 1980s and 1990s, estimation methods were extended and refined by biostatisticians including Larry Hedges, Michael Borenstein, Doug Altman, Martin Gardner, and many others, with the development of the modern (medical) meta-analysis.

Starting in the 1980s, the systematic review, used in conjunction with meta-analysis, became a technique widely used in medical research. There are over 200,000 citations to "meta-analysis" in PubMed.

In the 1990s, editor Kenneth Rothman banned the use of p-values from the journal Epidemiology; compliance was high among authors but this did not substantially change their analytical thinking. [13]

In the 2010s, Geoff Cumming published a textbook dedicated to estimation statistics, along with software in Excel designed to teach effect-size thinking, primarily to psychologists. [14] Also in the 2010s, estimation methods were increasingly adopted in neuroscience. [15] [16]

In 2013, the Publication Manual of the American Psychological Association recommended the use of estimation in addition to hypothesis testing. [17] Also in 2013, the Uniform Requirements for Manuscripts Submitted to Biomedical Journals document made a similar recommendation: "Avoid relying solely on statistical hypothesis testing, such as P values, which fail to convey important information about effect size." [18]

In 2019, over 800 scientists signed an open comment calling for the entire concept of statistical significance to be abandoned. [19]

In 2019, the Society for Neuroscience journal eNeuro instituted a policy recommending the use of estimation graphics as the preferred method for data presentation. [20] And in 2022, the International Society of Physiotherapy Journal Editors recommended the use of estimation methods instead of null hypothesis statistical tests. [21]

Despite the widespread adoption of meta-analysis for clinical research, and recommendations by several major publishing institutions, the estimation framework is not routinely used in primary biomedical research. [22]

Methodology

Many significance tests have an estimation counterpart; [23] in almost every case, the test result (or its p-value) can simply be replaced with an effect size and a precision estimate. For example, instead of using Student's t-test, the analyst can compare two independent groups by calculating the mean difference and its 95% confidence interval. Corresponding methods can be used for a paired t-test and for multiple comparisons. Similarly, for a regression analysis, an analyst would report the coefficient of determination (R²) and the model equation instead of the model's p-value.
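To make the substitution concrete, the following sketch (illustrative data and variable names, not from the source) computes both the t-test p-value and the estimation-oriented summary, a mean difference with its 95% confidence interval, for two hypothetical independent groups:

```python
# Illustrative only: hypothetical data; standard NumPy/SciPy calls.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(10, 2, size=30)  # hypothetical control measurements
group_b = rng.normal(12, 2, size=30)  # hypothetical treatment measurements

# NHST route: Student's t-test yields a p-value.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Estimation route: effect size (mean difference) plus its precision (95% CI),
# using the pooled standard deviation to match the equal-variance t-test.
n_a, n_b = group_a.size, group_b.size
diff = group_b.mean() - group_a.mean()
pooled_var = ((n_a - 1) * group_a.var(ddof=1)
              + (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
se = np.sqrt(pooled_var * (1 / n_a + 1 / n_b))
half_width = stats.t.ppf(0.975, n_a + n_b - 2) * se

print(f"p = {p_value:.4f}")
print(f"mean difference = {diff:.2f}, "
      f"95% CI [{diff - half_width:.2f}, {diff + half_width:.2f}]")
```

The p-value and the confidence interval are computed from the same quantities; the estimation report simply keeps the effect size and its precision visible instead of collapsing them into a single probability.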

However, proponents of estimation statistics warn against reporting only a few numbers. Rather, it is advised to analyze and present data using data visualization. [2] [5] [6] Examples of appropriate visualizations include the scatter plot for regression, and Gardner–Altman plots for two independent groups. [24] While conventional plots of grouped data (bar charts, box plots, and violin plots) do not display the comparison itself, estimation plots add a second axis to explicitly visualize the effect size. [25]

The Gardner–Altman plot. Left: A conventional bar chart, using asterisks to show that the difference is 'statistically significant.' Right: A Gardner–Altman plot that shows all data points, along with the mean difference and its confidence intervals.

Gardner–Altman plot

The Gardner–Altman mean difference plot was first described by Martin Gardner and Doug Altman in 1986; [24] it is a statistical graph designed to display data from two independent groups. [5] There is also a version suitable for paired data. The key instructions to make this chart are as follows: (1) display all observed values for both groups side by side; (2) place a second axis on the right, shifted to show the mean-difference scale; and (3) plot the mean difference with its confidence interval as a marker with error bars. [3] Gardner–Altman plots can be generated with DABEST-Python or dabestr; alternatively, the analyst can use GUI software like the Estimation Stats app.
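As a rough illustration of steps (1)–(3), the following matplotlib sketch builds a bare-bones Gardner–Altman plot from hypothetical data; the DABEST packages named above produce far more polished versions, and all names and numbers here are invented:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(10, 2, size=30)  # hypothetical group 1
test = rng.normal(12, 2, size=30)     # hypothetical group 2

fig, ax = plt.subplots()

# (1) All observed values for both groups, side by side (with jitter).
for i, y in enumerate([control, test]):
    ax.plot(i + rng.uniform(-0.08, 0.08, size=y.size), y, "o", alpha=0.6)
ax.set_xticks([0, 1])
ax.set_xticklabels(["Control", "Test"])
ax.set_ylabel("Measurement")

# Mean difference and its 95% CI (Welch approximation).
v_c = control.var(ddof=1) / control.size
v_t = test.var(ddof=1) / test.size
diff = test.mean() - control.mean()
se = np.sqrt(v_c + v_t)
dof = (v_c + v_t) ** 2 / (v_c**2 / (control.size - 1) + v_t**2 / (test.size - 1))
half_width = stats.t.ppf(0.975, dof) * se

# (2) A second axis on the right, shifted so its zero sits at the control mean.
lo, hi = ax.get_ylim()
ax2 = ax.twinx()
ax2.set_ylim(lo - control.mean(), hi - control.mean())
ax2.set_ylabel("Mean difference")

# (3) The mean difference and its CI drawn as a marker with error bars.
ax2.errorbar(1.5, diff, yerr=half_width, fmt="o", color="black")
ax.set_xlim(-0.5, 2.0)
plt.show()
```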

The Cumming plot. A Cumming plot as rendered by the EstimationStats web application. In the top panel, all observed values are shown. The effect sizes, sampling distribution, and 95% confidence intervals are plotted on a separate axis beneath the raw data. For each group, summary measurements (mean ± standard deviation) are drawn as gapped lines.

Cumming plot

For multiple groups, Geoff Cumming introduced the use of a secondary panel to plot two or more mean differences and their confidence intervals, placed below the observed values panel; [3] this arrangement enables easy comparison of mean differences ('deltas') over several data groupings. Cumming plots can be generated with the ESCI package, DABEST, or the Estimation Stats app.
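A minimal sketch of a shared-control (multi-group) analysis with DABEST-Python, as named above; the load / .mean_diff.plot() usage follows the package's documented pattern, but the data and group names here are invented for illustration:

```python
# Hypothetical shared-control example; dabest.load and .mean_diff.plot()
# are assumed from the DABEST-Python documentation.
import numpy as np
import pandas as pd
import dabest

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "Control": rng.normal(10, 2, size=40),
    "Drug A": rng.normal(12, 2, size=40),
    "Drug B": rng.normal(9, 2, size=40),
})

# Each test group is compared against "Control"; the two mean differences
# ('deltas') and their 95% CIs appear together in the lower panel.
analysis = dabest.load(df, idx=("Control", "Drug A", "Drug B"))
analysis.mean_diff.plot()
```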

Other methodologies

In addition to the mean difference, there are numerous other effect size types, each with its own benefits. Major types include effect sizes in the Cohen's d class of standardized metrics, and the coefficient of determination (R²) for regression analysis. For non-normal distributions, there are a number of more robust effect sizes, including Cliff's delta and the Kolmogorov–Smirnov statistic.
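The following sketch (illustrative, not from the source) computes three of the effect sizes just mentioned for two hypothetical samples: Cohen's d from its pooled-standard-deviation definition, Cliff's delta from pairwise comparisons, and the Kolmogorov–Smirnov statistic via SciPy:

```python
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Standardized mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1)
                  + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

def cliffs_delta(x, y):
    """Proportion of (x, y) pairs with x > y minus proportion with x < y."""
    diffs = np.subtract.outer(x, y)
    return (np.sum(diffs > 0) - np.sum(diffs < 0)) / diffs.size

rng = np.random.default_rng(3)
x, y = rng.normal(12, 2, size=40), rng.normal(10, 2, size=40)  # hypothetical
print(cohens_d(x, y))
print(cliffs_delta(x, y))
print(stats.ks_2samp(x, y).statistic)  # Kolmogorov–Smirnov statistic
```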

Flaws in hypothesis testing

In hypothesis testing, the primary objective of statistical calculations is to obtain a p-value: the probability of seeing a result at least as extreme as the one obtained, assuming that the null hypothesis is true. If the p-value is low (usually < 0.05), the statistical practitioner is then encouraged to reject the null hypothesis. Proponents of estimation reject the validity of hypothesis testing [3] [6] for the following reasons, among others:

Benefits of estimation statistics

Quantification

While p-values focus on yes/no answers, estimation directs the analyst's attention to quantification.

Advantages of confidence intervals

Confidence intervals behave in a predictable way. By definition, 95% confidence intervals have a 95% chance of covering the underlying population mean (μ). This feature remains constant with increasing sample size; what changes is that the interval becomes smaller. In addition, 95% confidence intervals are also 83% prediction intervals: one (pre-experimental) confidence interval has an 83% chance of covering any future experiment's mean. [3] As such, knowing a single experiment's 95% confidence interval gives the analyst a reasonable range for the population mean. Nevertheless, confidence distributions and posterior distributions provide considerably more information than a single point estimate or interval, [30] which can exacerbate dichotomous thinking according to whether the interval covers or does not cover a "null" value of interest (i.e., the inductive behavior of Neyman as opposed to that of Fisher [31]).
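The coverage property described above can be checked with a short simulation; in this sketch (parameters are arbitrary), many experiments are drawn from a known population and the fraction of 95% confidence intervals that cover μ is counted:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, trials = 50.0, 10.0, 30, 10_000  # arbitrary planning values

covered = 0
for _ in range(trials):
    sample = rng.normal(mu, sigma, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    half_width = stats.t.ppf(0.975, df=n - 1) * se
    m = sample.mean()
    if m - half_width <= mu <= m + half_width:
        covered += 1

print(f"Empirical coverage: {covered / trials:.3f}")  # close to 0.95
```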

Evidence-based statistics

Psychological studies of the perception of statistics reveal that reporting interval estimates leaves a more accurate perception of the data than reporting p-values. [32]

Precision planning

The precision of an estimate is formally defined as 1/variance, and, like power, it increases (improves) with increasing sample size. As with power, a high level of precision is expensive; research grant applications would ideally include precision/cost analyses. Proponents of estimation believe precision planning should replace power analysis, since statistical power itself is conceptually linked to significance testing. [3] Precision planning can be done with the ESCI web app.
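As a hypothetical example of precision planning (the formula is the standard normal-approximation one; the numbers are invented), the analyst can solve for the sample size that achieves a target margin of error:

```python
import math

def n_for_margin(sigma: float, margin: float, z: float = 1.96) -> int:
    """Smallest n with z * sigma / sqrt(n) <= margin (normal approximation)."""
    return math.ceil((z * sigma / margin) ** 2)

# Planning value sigma = 10; target 95% CI half-width of 2 units.
print(n_for_margin(sigma=10.0, margin=2.0))  # 97
```

Because the required n grows with the square of 1/margin, halving the target margin of error quadruples the sample size, which is the cost trade-off noted above.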

References

  1. Ellis, Paul. "Effect size FAQ".
  2. Cohen, Jacob. "The earth is round (p < .05)" (PDF). Archived from the original (PDF) on 2017-10-11. Retrieved 2013-08-22.
  3. Cumming, Geoff (2011). Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. New York: Routledge. ISBN 978-0415879675. [page needed]
  4. Altman, Douglas (1991). Practical Statistics For Medical Research. London: Chapman and Hall.
  5. Douglas Altman, ed. (2000). Statistics with Confidence. London: Wiley-Blackwell. [page needed]
  6. Cohen, Jacob (1990). "Things I have learned (so far)". American Psychologist. 45 (12): 1304–1312. doi:10.1037/0003-066x.45.12.1304.
  7. Ellis, Paul (2010-05-31). "Why can't I just judge my result by looking at the p value?". Retrieved 5 June 2013.
  8. Claridge-Chang, Adam; Assam, Pryseley N (2016). "Estimation statistics should replace significance testing". Nature Methods. 13 (2): 108–109. doi:10.1038/nmeth.3729. PMID 26820542. S2CID 205424566.
  9. Berner, Daniel; Amrhein, Valentin (2022). "Why and how we should join the shift from significance testing to estimation". Journal of Evolutionary Biology. 35 (6): 777–787. doi:10.1111/jeb.14009. ISSN 1010-061X. PMC 9322409. PMID 35582935. S2CID 247788899.
  10. Birge, Raymond T. (1929). "Probable Values of the General Physical Constants". Reviews of Modern Physics. 1 (1): 1–73. Bibcode:1929RvMP....1....1B. doi:10.1103/RevModPhys.1.1.
  11. Hedges, Larry (1987). "How hard is hard science, how soft is soft science". American Psychologist. 42 (5): 443. CiteSeerX 10.1.1.408.2317. doi:10.1037/0003-066x.42.5.443.
  12. Hunt, Morton (1997). How science takes stock: the story of meta-analysis. New York: The Russell Sage Foundation. ISBN 978-0-87154-398-1.
  13. Fidler, Fiona; Thomason, Neil; Cumming, Geoff; Finch, Sue; Leeman, Joanna (2004). "Editors Can Lead Researchers to Confidence Intervals, but Can't Make Them Think: Statistical Reform Lessons From Medicine". Psychological Science. 15 (2): 119–126. doi:10.1111/j.0963-7214.2004.01502008.x. PMID 14738519. S2CID 21199094.
  14. Cumming, Geoff. "ESCI (Exploratory Software for Confidence Intervals)". Archived from the original on 2013-12-29. Retrieved 2013-05-12.
  15. Yildizoglu, Tugce; Weislogel, Jan-Marek; Mohammad, Farhan; Chan, Edwin S.-Y.; Assam, Pryseley N.; Claridge-Chang, Adam (2015). "Estimating Information Processing in a Memory System: The Utility of Meta-analytic Methods for Genetics". PLOS Genetics. 11 (12): e1005718. doi:10.1371/journal.pgen.1005718. PMC 4672901. PMID 26647168.
  16. Hentschke, Harald; Stüttgen, Maik C. (2011). "Computation of measures of effect size for neuroscience data sets". European Journal of Neuroscience. 34 (12): 1887–1894. doi:10.1111/j.1460-9568.2011.07902.x. PMID 22082031. S2CID 12505606.
  17. "Publication Manual of the American Psychological Association, Sixth Edition". Archived from the original on 2013-03-05.
  18. "Uniform Requirements for Manuscripts Submitted to Biomedical Journals". Archived from the original on 15 May 2013.
  19. Amrhein, Valentin; Greenland, Sander; McShane, Blake (2019). "Scientists rise up against statistical significance". Nature. 567: 305–307.
  20. Bernard, Christophe (2019). "Changing the Way We Report, Interpret, and Discuss Our Results to Rebuild Trust in Our Research". eNeuro. 6 (4). doi:10.1523/ENEURO.0259-19.2019. PMC 6709206. PMID 31453315.
  21. Elkins, Mark; et al. (2022). "Statistical inference through estimation: recommendations from the International Society of Physiotherapy Journal Editors". Journal of Physiotherapy. 68 (1): 1–4.
  22. Halsey, Lewis G. (2019). "The reign of the p-value is over: what alternative analyses could we employ to fill the power vacuum?". Biology Letters. 15 (5): 20190174. doi:10.1098/rsbl.2019.0174. PMC 6548726. PMID 31113309.
  23. Cumming, Geoff; Calin-Jageman, Robert (2016). Introduction to the New Statistics: Estimation, Open Science, and Beyond. Routledge. ISBN 978-1138825529. [page needed]
  24. Gardner, M J; Altman, D G (1986). "Confidence intervals rather than P values: estimation rather than hypothesis testing". BMJ. 292 (6522): 746–750. doi:10.1136/bmj.292.6522.746. PMC 1339793. PMID 3082422.
  25. Ho, Joses; Tumkaya, Tayfun; Aryal, Sameer; Choi, Hyungwon; Claridge-Chang, Adam (2018). "Moving beyond P values: Everyday data analysis with estimation plots". bioRxiv. doi:10.1101/377978.
  26. Cohen, Jacob (1994). "The earth is round (p < .05)". American Psychologist. 49 (12): 997–1003. doi:10.1037/0003-066X.49.12.997.
  27. Ellis, Paul (2010). The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results. Cambridge: Cambridge University Press. [page needed]
  28. Denton E. Morrison; Ramon E. Henkel, eds. (2006). The Significance Test Controversy: A Reader. Aldine Transaction. ISBN 978-0202308791. [page needed]
  29. Cumming, Geoff. "Dance of the p values". YouTube.
  30. Xie, Min-ge; Singh, Kesar (2013). "Confidence Distribution, the Frequentist Distribution Estimator of a Parameter: A Review". International Statistical Review. 81 (1): 3–39. doi:10.1111/insr.12000. JSTOR 43298799. S2CID 3242459.
  31. Halpin, Peter F.; Stam, Henderikus J. (2006). "Inductive Inference or Inductive Behavior: Fisher and Neyman–Pearson Approaches to Statistical Testing in Psychological Research (1940–1960)". The American Journal of Psychology. 119 (4): 625–653. doi:10.2307/20445367. JSTOR 20445367. PMID 17286092.
  32. Beyth-Marom, Ruth; Fidler, Fiona Margaret; Cumming, Geoffrey David (2008). "Statistical cognition: Towards evidence-based practice in statistics and statistics education". Statistics Education Research Journal. 7 (2): 20–39. CiteSeerX 10.1.1.154.7648. doi:10.52041/serj.v7i2.468. S2CID 18902043.