Funnel plot

Figure: An example funnel plot showing no publication bias. Each dot represents a study (e.g. measuring the effect of a certain drug); the y-axis represents study precision (e.g. the inverse standard error or the number of experimental subjects) and the x-axis shows the study's result (e.g. the drug's measured average effect).

A funnel plot is a graph designed to check for the existence of publication bias; funnel plots are commonly used in systematic reviews and meta-analyses. In the absence of publication bias, studies with high precision will be plotted near the average, while studies with low precision will be spread evenly on both sides of the average, creating a roughly funnel-shaped distribution. Deviation from this shape can indicate publication bias.
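As an illustration of this idea, the following is a minimal Python sketch (assuming numpy and matplotlib are available; the simulated drug effect, sample sizes, and number of studies are made-up values) that generates unbiased study results and plots them as a funnel:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Simulate 100 studies of a drug whose true average effect is 0.5 (hypothetical).
true_effect = 0.5
n_studies = 100

# Each study's standard error shrinks with its (random) sample size.
sample_sizes = rng.integers(20, 500, size=n_studies)
standard_errors = 1.0 / np.sqrt(sample_sizes)

# Observed effects scatter around the true effect with that standard error.
observed_effects = rng.normal(true_effect, standard_errors)

# Funnel plot: effect estimate on the x-axis, precision (1 / SE) on the y-axis.
plt.scatter(observed_effects, 1.0 / standard_errors, s=12)
plt.axvline(true_effect, linestyle="--", color="grey")
plt.xlabel("Estimated treatment effect")
plt.ylabel("Precision (1 / standard error)")
plt.title("Simulated funnel plot (no publication bias)")
plt.show()
```

Dropping, say, the small imprecise studies whose estimates fall on one side of the true effect before plotting would hollow out one lower corner of the funnel, which is the kind of asymmetry the plot is meant to reveal.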


Quotation

Funnel plots, introduced by Light and Pillemer in 1984 [1] and discussed in detail by Matthias Egger and colleagues, [2] [3] are useful adjuncts to meta-analyses. A funnel plot is a scatterplot of treatment effect against a measure of study precision. It is used primarily as a visual aid for detecting bias or systematic heterogeneity. A symmetric inverted funnel shape arises from a ‘well-behaved’ data set, in which publication bias is unlikely. An asymmetric funnel indicates a relationship between treatment effect estimate and study precision. This suggests the possibility of either publication bias or a systematic difference between studies of higher and lower precision (typically ‘small study effects’). Asymmetry can also arise from use of an inappropriate effect measure. Whatever the cause, an asymmetric funnel plot leads to doubts over the appropriateness of a simple meta-analysis and suggests that there needs to be investigation of possible causes.

A variety of choices of measures of ‘study precision’ is available, including total sample size, standard error of the treatment effect, and inverse variance of the treatment effect (weight). Sterne and Egger have compared these with others, and conclude that the standard error is to be recommended. [3] When the standard error is used, straight lines may be drawn to define a region within which 95% of points might lie in the absence of both heterogeneity and publication bias. [3]
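The following is a hedged sketch of how such a 95% region is often drawn when the standard error is used as the precision axis: the boundary lines are the pooled effect ± 1.96 × SE under a normal approximation. The pooled estimate, the example numbers, and the use of matplotlib are illustrative assumptions, not prescribed by the source.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical effect estimates and standard errors from five studies.
observed_effects = np.array([0.42, 0.55, 0.30, 0.61, 0.48])
standard_errors = np.array([0.05, 0.10, 0.15, 0.20, 0.08])

# A fixed-effect (inverse-variance weighted) summary estimate as the funnel's centre.
pooled_effect = np.average(observed_effects, weights=1.0 / standard_errors**2)

# In the absence of heterogeneity and publication bias, ~95% of studies should fall
# inside pooled_effect +/- 1.96 * SE at every level of standard error.
se_grid = np.linspace(0, standard_errors.max() * 1.1, 100)
plt.plot(pooled_effect - 1.96 * se_grid, se_grid, "k--")
plt.plot(pooled_effect + 1.96 * se_grid, se_grid, "k--")

plt.scatter(observed_effects, standard_errors, s=20)
plt.gca().invert_yaxis()  # smaller SE (more precise studies) plotted at the top
plt.xlabel("Treatment effect")
plt.ylabel("Standard error")
plt.show()
```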

In common with confidence interval plots, funnel plots are conventionally drawn with the treatment effect measure on the horizontal axis, so that study precision appears on the vertical axis, breaking with the general rule that the outcome of interest is plotted vertically. Since funnel plots are principally visual aids for detecting asymmetry along the treatment effect axis, this orientation makes them considerably easier to interpret.

Criticism

The funnel plot is not without problems. If high-precision studies differ from low-precision studies with respect to effect size (e.g., because they examine different populations), a funnel plot may give a wrong impression of publication bias. [4] The appearance of the funnel plot can also change quite dramatically depending on the scale used for the y-axis, for example whether it is the inverse square error or the trial size. [5] Moreover, researchers have a poor ability to visually discern publication bias from funnel plots. [6]


Related Research Articles

Evidence-based medicine (EBM) is "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients." The aim of EBM is to integrate the experience of the clinician, the values of the patient, and the best available scientific information to guide decision-making about clinical management. The term was originally used to describe an approach to teaching the practice of medicine and improving decisions by individual physicians about individual patients.

Meta-analysis Statistical method that summarizes data from multiple sources

A meta-analysis is a statistical analysis that combines the results of multiple scientific studies. Meta-analyses can be performed when there are multiple scientific studies addressing the same question, with each individual study reporting measurements that are expected to have some degree of error. The aim then is to use approaches from statistics to derive a pooled estimate closest to the unknown common truth based on how this error is perceived.
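As a hedged illustration of such pooling (not drawn from the source; the study estimates and standard errors below are hypothetical), a fixed-effect inverse-variance weighted estimate can be computed as follows:

```python
import numpy as np

# Hypothetical effect estimates and standard errors from five studies.
effects = np.array([0.42, 0.55, 0.30, 0.61, 0.48])
ses = np.array([0.05, 0.10, 0.15, 0.20, 0.08])

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / SE^2,
# so more precise studies contribute more to the summary estimate.
weights = 1.0 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
```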

Placebo Substance or treatment of no therapeutic value

A placebo is a substance or treatment which is designed to have no therapeutic value. Common placebos include inert tablets, inert injections, sham surgery, and other procedures.

Cochrane (organisation) British nonprofit for reviews of medical research (formed 1993)

Cochrane is a British international charitable organisation formed to organise medical research findings to facilitate evidence-based choices about health interventions involving health professionals, patients and policy makers. It includes 53 review groups that are based at research institutions worldwide. Cochrane has approximately 30,000 volunteer experts from around the world.

In a blind or blinded experiment, information which may influence the participants of the experiment is withheld until after the experiment is complete. Good blinding can reduce or eliminate experimental biases that arise from participants' expectations, the observer's effect on the participants, observer bias, confirmation bias, and other sources. A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, while blinding would be useful, it is impossible or unethical. For example, it is not possible to blind a patient to their treatment in a physical therapy intervention. A good clinical protocol ensures that blinding is as effective as possible within ethical and practical constraints.

Publication bias is a type of bias that occurs in published academic research. It occurs when the outcome of an experiment or research study influences the decision whether to publish or otherwise distribute it. Publishing only results that show a significant finding disturbs the balance of findings, and inserts bias in favor of positive results. The study of publication bias is an important topic in metascience.

Systematic review Comprehensive review of research literature using systematic methods

Systematic reviews are a type of review that uses repeatable analytical methods to collect secondary data and analyse it. Systematic reviews are a type of evidence synthesis which formulate research questions that are broad or narrow in scope, and identify and synthesize data that directly relate to the systematic review question. While some people might associate 'systematic review' with 'meta-analysis', there are multiple kinds of review which can be defined as 'systematic' which do not involve a meta-analysis. Some systematic reviews critically appraise research studies, and synthesize findings qualitatively or quantitatively. Systematic reviews are often designed to provide an exhaustive summary of current evidence relevant to a research question. For example, systematic reviews of randomized controlled trials are an important way of informing evidence-based medicine, and a review of existing studies is often quicker and cheaper than embarking on a new study.

In statistics, (between-) study heterogeneity is a phenomenon that commonly occurs when attempting to undertake a meta-analysis. In a simplistic scenario, studies whose results are to be combined in the meta-analysis would all be undertaken in the same way and to the same experimental protocols. Differences between outcomes would only be due to measurement error. Study heterogeneity denotes the variability in outcomes that goes beyond what would be expected due to measurement error alone.
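One common way of quantifying this extra variability is Cochran's Q together with the derived I² statistic; the following is a minimal sketch on hypothetical study estimates (the values and the use of Python/numpy are illustrative assumptions, not taken from the source):

```python
import numpy as np

# Hypothetical effect estimates and standard errors from five studies.
effects = np.array([0.42, 0.55, 0.30, 0.61, 0.48])
ses = np.array([0.05, 0.10, 0.15, 0.20, 0.08])

weights = 1.0 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)

# Cochran's Q: weighted squared deviations of the study effects from the pooled effect.
Q = np.sum(weights * (effects - pooled) ** 2)
df = len(effects) - 1

# I^2: rough proportion of total variation attributable to heterogeneity rather than chance.
I2 = max(0.0, (Q - df) / Q) * 100
print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.1f}%")
```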

A hierarchy of evidence is a heuristic used to rank the relative strength of results obtained from scientific research. There is broad agreement on the relative strength of large-scale, epidemiological studies. More than 80 different hierarchies have been proposed for assessing medical evidence. The design of the study and the endpoints measured affect the strength of the evidence. In clinical research, the best evidence for treatment efficacy is mainly from meta-analyses of randomized controlled trials (RCTs). Typically, systematic reviews of completed, high-quality randomized controlled trials – such as those published by the Cochrane Collaboration – rank as the highest quality of evidence above observational studies, while expert opinion and anecdotal experience are at the bottom level of evidence quality. Evidence hierarchies are often applied in evidence-based practices and are integral to evidence-based medicine (EBM).

In epidemiology, reporting bias is defined as "selective revealing or suppression of information" by subjects. In artificial intelligence research, the term reporting bias is used to refer to people's tendency to under-report all the information available.

Forest plot Graphical display of scientific results

A forest plot, also known as a blobbogram, is a graphical display of estimated results from a number of scientific studies addressing the same question, along with the overall results. It was developed for use in medical research as a means of graphically representing a meta-analysis of the results of randomized controlled trials. In the last twenty years, similar meta-analytical techniques have been applied in observational studies and forest plots are often used in presenting the results of such studies also.

In homeopathy, arsenicum album (Arsenic. alb.) is a solution prepared by diluting aqueous arsenic trioxide, generally until little to no arsenic remains in an individual dose. It is used by homeopaths in an attempt to treat a range of symptoms that include digestive disorders and, as an application of the Law of Similars, has been suggested by homeopathy as a treatment for arsenic poisoning. Since the arsenic oxide in a homeopathic preparation is normally non-existent, it is considered generally safe, although cases of arsenic poisoning from poorly prepared homeopathic treatments sold in India have been reported. When properly prepared, however, the extreme dilutions, typically to at least 1 in 10²⁴, or 12C in homeopathic notation, mean that a pill would not contain even a molecule of the original arsenic used. While Anisur Khuda-Bukhsh's unblinded studies have claimed an effect on reducing arsenic toxicity, they do not recommend its large-scale use, and studies of homeopathic remedies have been shown to generally have problems that prevent them from being considered unambiguous evidence. There is no known mechanism for how arsenicum album could remove arsenic from a body, and there is insufficient evidence for it to be considered effective medicine (for any condition) by the scientific community.

Plot (graphics)

A plot is a graphical technique for representing a data set, usually as a graph showing the relationship between two or more variables. The plot can be drawn by hand or by a computer. In the past, sometimes mechanical or electronic plotters were used. Graphs are a visual representation of the relationship between variables, which are very useful for humans who can then quickly derive an understanding which may not have come from lists of values. Given a scale or ruler, graphs can also be used to read off the value of an unknown variable plotted as a function of a known one, but this can also be done with data presented in tabular form. Graphs of functions are used in mathematics, sciences, engineering, technology, finance, and other areas.

The Jadad scale, sometimes known as Jadad scoring or the Oxford quality scoring system, is a procedure to independently assess the methodological quality of a clinical trial. It is named after Colombian physician Alex Jadad who in 1996 described a system for allocating such trials a score of between zero (very poor) and five (rigorous). It is the most widely used such assessment in the world, and as of 2019, its seminal paper has been cited in over 15,000 scientific works.

Critical appraisal is the use of explicit, transparent methods to assess the data in published research, applying the rules of evidence to factors such as internal validity, adherence to reporting standards, conclusions, generalizability and risk-of-bias. Critical appraisal methods form a central part of the systematic review process. They are used in evidence-based healthcare training to assist clinical decision-making, and are increasingly used in evidence-based social care and education provision.

Estimation statistics, or simply estimation, is a data analysis framework that uses a combination of effect sizes, confidence intervals, precision planning, and meta-analysis to plan experiments, analyze data and interpret results. It is distinct from null hypothesis significance testing (NHST), which is considered to be less informative. Estimation statistics is also known as the new statistics in the fields of psychology, medical research, life sciences and other experimental sciences, where NHST still remains prevalent, despite contrary recommendations for several decades.

Matthias Egger is professor of epidemiology and public health at the University of Bern in Switzerland, as well as professor of clinical epidemiology at the University of Bristol in the United Kingdom.

Allegiance bias

Allegiance bias in behavioral sciences is a bias resulting from an investigator's or researcher's allegiance to a specific school of thought. Researchers and investigators have been exposed to many branches of psychology or schools of thought, and naturally adopt the school or branch that fits their paradigm of thinking. More specifically, allegiance bias arises when this leads therapists, researchers, and others to believe that their school of thought or treatment is superior to others. This belief can bias their research in treatment-effectiveness trials or investigative situations, because they may be devoted to treatments they have seen work in their past experience. This can lead to fundamental errors in the results of the research they are conducting. Their "pledge" to stay within their own paradigm of thinking may affect their ability to find more effective treatments to help the patient or situation they are investigating.

Jonathan A.C. Sterne is a British statistician, NIHR Senior Investigator, Professor of Medical Statistics and Epidemiology, and the former Head of School of Social and Community Medicine at the University of Bristol. He is co-author of “Essential Medical Statistics”, which received Highly Commended honors in the 2004 BMA Medical Book Competition.

John B. Carlin is an Australian statistician. He is Head of Data Science and Director of the Clinical Epidemiology and Biostatistics Unit at the Murdoch Children's Research Institute (MCRI) and a professor in the Centre for Epidemiology and Biostatistics in the Melbourne School of Population and Global Health at the University of Melbourne. He has also led the Victorian Centre for Biostatistics, a collaboration between the MCRI, the University of Melbourne, and Monash University, since 2012. The economist Wendy Carlin is his sister.

References

  1. Light, R. J.; Pillemer, D. B. (1984). Summing Up: The Science of Reviewing Research. Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-85431-4.
  2. Egger, Matthias; Davey Smith, G.; Schneider, M.; Minder, C. (September 1997). "Bias in meta-analysis detected by a simple, graphical test". BMJ. 315 (7109): 629–634. doi:10.1136/bmj.315.7109.629. PMC 2127453. PMID 9310563.
  3. Sterne, Jonathan A. C.; Egger, Matthias (October 2001). "Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis". Journal of Clinical Epidemiology. 54 (10): 1046–1055. doi:10.1016/S0895-4356(01)00377-8. PMID 11576817.
  4. Lau, Joseph; Ioannidis, John P. A.; Terrin, Norma; Schmid, Christopher H.; Olkin, Ingram (September 2006). "The case of the misleading funnel plot". BMJ. 333 (7568): 597–600. doi:10.1136/bmj.333.7568.597. PMC 1570006. PMID 16974018.
  5. Tang, Jin-Ling; Liu, Joseph L. Y. (May 2000). "Misleading funnel plot for detection of bias in meta-analysis". Journal of Clinical Epidemiology. 53 (5): 477–484. doi:10.1016/S0895-4356(99)00204-8. PMID 10812319.
  6. Terrin, N.; Schmid, C. H.; Lau, J. (2005). "In an empirical evaluation of the funnel plot, researchers could not visually identify publication bias". Journal of Clinical Epidemiology. 58 (9): 894–901. doi:10.1016/j.jclinepi.2005.01.006. PMID 16085192.
