In epidemiology, reporting bias is defined as "selective revealing or suppression of information" by subjects (for example, about past medical history, smoking, or sexual experiences). [1] In artificial intelligence research, the term reporting bias refers to people's tendency to under-report the information available to them. [2]
In empirical research, authors may under-report unexpected or undesirable experimental results, attributing them to sampling or measurement error, while trusting expected or desirable results more, even though these may be subject to the same sources of error. In this context, reporting bias can eventually lead to a status quo in which multiple investigators discover and discard the same results, and later experimenters justify their own reporting bias by observing that previous experimenters reported different results. Thus, each incident of reporting bias can make future incidents more likely. [3]
Research can only contribute to knowledge if it is communicated from investigators to the community. The generally accepted primary means of communication is "full" publication of the study methods and results in an article in a scientific journal. Investigators sometimes also present their findings at a scientific meeting, through an oral or poster presentation. These presentations enter the scientific record as brief "abstracts", which may or may not be recorded in publicly accessible documents found in libraries or on the World Wide Web.
Sometimes, investigators fail to publish the results of entire studies. The Declaration of Helsinki and other consensus documents have outlined the ethical obligation to make results from clinical research publicly available. [4]
Reporting bias occurs when the dissemination of research findings is influenced by the nature and direction of the results, for instance in systematic reviews. [5] [6] "Positive results" is a commonly used term for a study finding that one intervention is better than another.
Various attempts have been made to overcome the effects of reporting biases, including statistical adjustments to the results of published studies. [7] None of these approaches has proved satisfactory, however, and there is increasing acceptance that reporting biases must be tackled by establishing registers of controlled trials and by promoting good publication practice. Until these problems are addressed, estimates of the effects of treatments based on published evidence may be biased. [6] [8]
Litigation brought by consumers and health insurers against Pfizer in 2004 over fraudulent sales practices in the marketing of the drug gabapentin revealed a comprehensive publication strategy that employed elements of reporting bias. [9] Spin was used to emphasize findings favorable to gabapentin and to explain away unfavorable ones. In this case, favorable secondary outcomes became the focus in place of the original primary outcome, which was unfavorable. Other changes in outcome reporting included the introduction of a new primary outcome, failure to distinguish between primary and secondary outcomes, and failure to report one or more protocol-defined primary outcomes. [10]
The decision to publish certain findings in certain journals is another strategy. [9] Trials with statistically significant findings were generally published more often, and in academic journals with higher circulation, than trials with nonsignificant findings. The timing of publication was also manipulated: the company tried to optimize the interval between the release of studies, publishing trials with nonsignificant findings in a staggered fashion so as not to have two consecutive trials appear without salient findings. Ghost authorship was also an issue: professional medical writers who drafted the published reports were not properly acknowledged.
Fallout from this case was still being settled by Pfizer in 2014, ten years after the initial litigation. [11]
Publication bias is the publication or nonpublication of research findings, depending on the nature and direction of the results. Although medical writers have acknowledged the problem of reporting biases for over a century, [12] it was not until the second half of the 20th century that researchers began to investigate the sources and size of the problem. [13]
Over the past two decades, evidence has accumulated that failure to publish research studies, including clinical trials testing intervention effectiveness, is pervasive. [13] Almost all failure to publish is due to failure of the investigator to submit; [14] only a small proportion of studies are not published because of rejection by journals. [15]
The most direct evidence of publication bias in the medical field comes from follow-up studies of research projects identified at the time of funding or ethics approval. [16] These studies have shown that having "positive findings" is the principal factor associated with subsequent publication: researchers say the usual reason they do not write up and submit reports of their research for publication is that they are "not interested" in the results; editorial rejection by journals is a rare cause of failure to publish.
Even investigators who have initially presented their results as conference abstracts are less likely to publish their findings in full unless the results are "significant". [17] This is a problem because data presented in abstracts are frequently preliminary or interim results and may not reliably represent what is found once all data are collected and analyzed. [18] In addition, abstracts are often not accessible to the public through journals, MEDLINE, or easily accessed databases. Many are published in conference programs, conference proceedings, or on CD-ROM, and are made available only to meeting registrants.
The main factor associated with failure to publish is negative or null findings. [19] Controlled trials that are eventually reported in full are published more rapidly if their results are positive. [18] Publication bias leads to overestimates of treatment effect in meta-analyses, which in turn can lead doctors and decision makers to believe a treatment is more useful than it is.
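The inflationary effect described above can be illustrated with a small simulation. The sketch below is illustrative only (invented effect sizes and sample sizes, not data from any cited study): many identical two-arm trials of a treatment with a modest true effect are simulated, only the statistically significant ones are "published", and an inverse-variance pooled estimate is computed over each set. The published subset systematically overestimates the true effect.

```python
import math
import random

random.seed(7)

def simulate_trial(true_effect=0.2, n=50):
    """One two-arm trial with unit-variance outcomes.
    Returns (estimated effect, standard error, significant at p < 0.05?)."""
    treat = [random.gauss(true_effect, 1.0) for _ in range(n)]
    ctrl = [random.gauss(0.0, 1.0) for _ in range(n)]
    est = sum(treat) / n - sum(ctrl) / n
    se = math.sqrt(2.0 / n)  # known unit variance in both arms
    return est, se, abs(est / se) > 1.96

def pooled_effect(trials):
    """Fixed-effect (inverse-variance weighted) pooled estimate."""
    weights = [1.0 / se**2 for _, se, _ in trials]
    num = sum(w * est for w, (est, _, _) in zip(weights, trials))
    return num / sum(weights)

all_trials = [simulate_trial() for _ in range(2000)]
# Publication bias: only "significant" trials reach the literature.
published = [t for t in all_trials if t[2]]

print(f"pooled over all trials:       {pooled_effect(all_trials):.3f}")
print(f"pooled over published trials: {pooled_effect(published):.3f}")
```

Pooling only the published trials yields an estimate well above the true effect of 0.2, because the selection step keeps mostly the trials whose random error happened to favor the treatment.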
It is now well established that a study's funding source is associated with the publication of more favorable efficacy results, an association that is not explained by the usual risk-of-bias assessments. [20]
Time lag bias is the rapid or delayed publication of research findings, depending on the nature and direction of the results. In a systematic review of the literature, Hopewell and colleagues found that, overall, trials with "positive results" (statistically significant in favor of the experimental arm) were published about a year sooner than trials with "null or negative results" (not statistically significant, or statistically significant in favor of the control arm). [18]
Duplicate (multiple) publication bias is the multiple or singular publication of research findings, depending on the nature and direction of the results. Investigators may publish the same findings multiple times using a variety of patterns of "duplicate" publication. [21] Many duplicates are published in journal supplements, a potentially difficult-to-access literature. Positive results appear to be published in duplicate more often, which can lead to overestimates of a treatment effect.
Location bias is the publication of research findings in journals with different ease of access or levels of indexing in standard databases, depending on the nature and direction of the results. There is evidence that, compared to negative or null results, statistically significant results are on average published in journals with greater impact factors, [22] and that publication in the mainstream (non-grey) literature is associated with a greater overall treatment effect than publication in the grey literature. [23]
Citation bias is the citation or non-citation of research findings, depending on the nature and direction of the results. Authors tend to cite positive results over negative or null results, and this has been established across a broad cross-section of topics. [24] [25] [26] [27] Differential citation may lead to a perception in the community that an intervention is effective when it is not, and it may lead to over-representation of positive findings in systematic reviews if uncited studies are difficult to locate.
Selective pooling of results in a meta-analysis is a form of citation bias that is particularly insidious in its potential to influence knowledge. To minimize bias, pooling of results from similar but separate studies requires an exhaustive search for all relevant studies. That is, a meta-analysis (or pooling of data from multiple studies) must always have emerged from a systematic review (not a selective review of the literature), even though a systematic review does not always have an associated meta-analysis.
Language bias is the publication of research findings in a particular language, depending on the nature and direction of the results. There is a long-standing question of whether investigators choose to publish negative findings in non-English-language journals and reserve positive findings for English-language journals. Some research has shown that language restrictions in systematic reviews can change the results of the review, [28] while other authors have not found that such a bias exists. [29]
The frequency with which people write about actions, outcomes, or properties is not a reflection of real-world frequencies or the degree to which a property is characteristic of a class of individuals. People write about only some parts of the world around them; much of the information is left unsaid. [2] [30]
Bias, also known as spin, in the reporting of interventional medical studies can arise from misleading reporting, misleading interpretation, and inadequate extrapolation of results or conclusions. [31] [32] [33] Research reporting bias can take various forms. Misleading reporting can take the form of failure to report adverse effects of a medical or surgical intervention, selective reporting of results, or inappropriate use of language in the conclusions, such as overstated conclusions or undue generalizations not supported by the study's results. [31] [32] Misleading interpretation of research results can stem from misinterpretation of both statistically significant and statistically non-significant findings. [6] [8] [31] [33] Inadequate extrapolation can be caused by extending the positive outcomes of research to a broader or different patient population than was originally investigated. [31] [32]
Outcome reporting bias is the selective reporting of some outcomes but not others, depending on the nature and direction of the results. [34] A study may be published in full, yet pre-specified outcomes may be omitted or misrepresented. [10] [35] Efficacy outcomes that are statistically significant have a higher chance of being fully published than those that are not. [6] [8] Research interpretation bias, or spin, is prevalent across medical publications irrespective of discipline (e.g., surgical versus medical) and irrespective of journal ranking or a study's place in the level-of-evidence hierarchy. Notable bias (spin) has been reported in the interpretation of results of randomized controlled trials, even though these designs rank at the top of the level-of-evidence hierarchy. [36] [37] [38] By contrast, one study found a low prevalence of bias in the conclusions of non-randomized controlled trials published in high-ranking orthopedic journals. [39] Controlling for bias in research reporting can increase trust in the published medical literature and better inform evidence-based clinical practice.
Selective reporting of suspected or confirmed adverse treatment effects is an area of particular concern because of the potential for patient harm. In a study of adverse drug events submitted to Scandinavian drug licensing authorities, reports for published studies were less likely than those for unpublished studies to record adverse events (for example, 56% vs 77%, respectively, for Finnish trials involving psychotropic drugs). [40] Recent attention in the lay and scientific media on failure to accurately report adverse events for drugs (e.g., selective serotonin reuptake inhibitors, rosiglitazone, rofecoxib) has resulted in additional publications, too numerous to review, indicating substantial selective outcome reporting (mainly suppression) of known or suspected adverse events.
Antidepressants are a class of medications used to treat major depressive disorder, anxiety disorders, chronic pain, and addiction.
Evidence-based medicine (EBM) is "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. ... [It] means integrating individual clinical expertise with the best available external clinical evidence from systematic research." The aim of EBM is to integrate the experience of the clinician, the values of the patient, and the best available scientific information to guide decision-making about clinical management. The term was originally used to describe an approach to teaching the practice of medicine and improving decisions by individual physicians about individual patients.
Meta-analysis is a method of synthesizing quantitative data from multiple independent studies addressing a common research question. An important part of this method involves computing a combined effect size across all of the studies; the statistical approach involves extracting effect sizes and variance measures from the various studies. Combining these effect sizes improves statistical power and can resolve uncertainties or discrepancies among individual studies. Meta-analyses are integral in supporting research grant proposals, shaping treatment guidelines, and influencing health policies. They are also pivotal in summarizing existing research to guide future studies, cementing their role as a fundamental methodology in metascience. Meta-analyses are often, but not always, important components of a systematic review.
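The combined effect size described above can be sketched as a fixed-effect (inverse-variance weighted) calculation. The study values below are invented for illustration, not taken from any cited review; each study contributes an effect estimate and a standard error, and studies with smaller standard errors receive larger weights.

```python
import math

# Hypothetical effect sizes (e.g., mean differences) and standard errors
# from five independent studies; the numbers are illustrative only.
studies = [(0.30, 0.12), (0.15, 0.10), (0.42, 0.20), (0.05, 0.08), (0.25, 0.15)]

# Fixed-effect meta-analysis: weight each study by the inverse of its variance.
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * est for w, (est, _) in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))  # SE of the combined estimate

# 95% confidence interval for the pooled effect
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Note that the pooled standard error is smaller than that of any single study, which is the gain in statistical power the paragraph above refers to; a full analysis would also assess between-study heterogeneity before choosing a fixed-effect over a random-effects model.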
A randomized controlled trial (RCT) is a form of scientific experiment used to control factors not under direct experimental control. Examples of RCTs are clinical trials that compare the effects of drugs, surgical techniques, medical devices, diagnostic procedures, diets, or other medical treatments.
Cochrane is a British international charitable organisation formed to synthesize medical research findings to facilitate evidence-based choices about health interventions involving health professionals, patients and policy makers. It includes 53 review groups that are based at research institutions worldwide. Cochrane has over 37,000 volunteer experts from around the world.
The Cochrane Library is a collection of databases in medicine and other healthcare specialties provided by Cochrane and other organizations. At its core is the collection of Cochrane Reviews, a database of systematic reviews and meta-analyses that summarize and interpret the results of medical research. The Cochrane Library aims to make the results of well-conducted clinical trials readily available and is a key resource in evidence-based medicine.
In a blind or blinded experiment, information that may influence the participants of the experiment is withheld until after the experiment is complete. Good blinding can reduce or eliminate experimental biases arising from participants' expectations, the observer's effect on participants, observer bias, confirmation bias, and other sources. A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, while blinding would be useful, it is impossible or unethical. For example, it is not possible to blind a patient to their treatment in a physical therapy intervention. A good clinical protocol ensures that blinding is as effective as possible within ethical and practical constraints.
In published academic research, publication bias occurs when the outcome of an experiment or research study biases the decision to publish or otherwise distribute it. Publishing only results that show a significant finding disturbs the balance of findings in favor of positive results. The study of publication bias is an important topic in metascience.
A systematic review is a scholarly synthesis of the evidence on a clearly presented topic using critical methods to identify, define and assess research on the topic. A systematic review extracts and interprets data from published studies on the topic, then analyzes, describes, critically appraises and summarizes interpretations into a refined evidence-based conclusion. For example, a systematic review of randomized controlled trials is a way of summarizing and implementing evidence-based medicine.
A hierarchy of evidence, comprising levels of evidence (LOEs), is a heuristic used to rank the relative strength of results obtained from experimental research, especially medical research. There is broad agreement on the relative strength of large-scale epidemiological studies. More than 80 different hierarchies have been proposed for assessing medical evidence. The design of the study and the endpoints measured affect the strength of the evidence. In clinical research, the best evidence for treatment efficacy comes mainly from meta-analyses of randomized controlled trials (RCTs). Systematic reviews of completed, high-quality randomized controlled trials, such as those published by the Cochrane Collaboration, rank the same as systematic reviews of completed, high-quality observational studies with regard to the study of side effects. Evidence hierarchies are often applied in evidence-based practices and are integral to evidence-based medicine (EBM).
A review article is an article that summarizes the current state of understanding on a topic within a certain discipline. A review article is generally considered a secondary source since it may analyze and discuss the method and conclusions in previously published studies. It resembles a survey article or, in news publishing, overview article, which also surveys and summarizes previously published primary and secondary sources, instead of reporting new facts and results. Survey articles are however considered tertiary sources, since they do not provide additional analysis and synthesis of new conclusions. A review of such sources is often referred to as a tertiary review.
The Jadad scale, sometimes known as Jadad scoring or the Oxford quality scoring system, is a procedure for assessing the methodological quality of a clinical trial by objective criteria. It is named after the Canadian-Colombian physician Alex Jadad, who in 1996 described a system for allocating such trials a score between zero and five (rigorous). It is the most widely used such assessment in the world, and as of May 2024, its seminal paper has been cited in over 24,500 scientific works.
John P. A. Ioannidis is a Greek-American physician-scientist, writer, and Stanford University professor who has made contributions to evidence-based medicine, epidemiology, and clinical research. Ioannidis studies scientific research itself (in other words, meta-research), primarily in clinical medicine and the social sciences.
"Why Most Published Research Findings Are False" is a 2005 essay written by John Ioannidis, a professor at the Stanford School of Medicine, and published in PLOS Medicine. It is considered foundational to the field of metascience.
Funding bias, also known as sponsorship bias, funding outcome bias, funding publication bias, and funding effect, refers to the tendency of a scientific study to support the interests of the study's financial sponsor. This phenomenon is recognized sufficiently that researchers undertake studies to examine bias in past published studies. Funding bias has been associated, in particular, with research into chemical toxicity, tobacco, and pharmaceutical drugs. It is an instance of experimenter's bias.
AllTrials is a project advocating that clinical research adopt the principles of open research. The project summarizes itself as "All trials registered, all results reported": that is, all clinical trials should be listed in a clinical trials registry, and their results should always be shared as open data.
The United States Cochrane Center (USCC) was one of the 14 centers in the world that facilitated the work of the Cochrane Collaboration. The USCC was the reference center for all 50 US states and US territories, protectorates, and districts: the District of Columbia, American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the US Virgin Islands. The USCC was also the reference center for the following countries: Antigua and Barbuda, Bahamas, Barbados, Belize, Dominica, Grenada, Guyana, Jamaica, Japan, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, and Trinidad and Tobago. The USCC was discontinued on February 7, 2018.
Isabelle Boutron is a professor of epidemiology at the Université Paris Cité and head of the INSERM METHODS team within the Centre of Research in Epidemiology and Statistics (CRESS). She was originally trained in rheumatology and later switched to a career in epidemiology and public health. She is also deputy director of the French EQUATOR Centre, a member of the SPIRIT-CONSORT executive committee, director of Cochrane France, and co-convenor of the Bias Methods group of the Cochrane Collaboration.
Preregistration is the practice of registering the hypotheses, methods, or analyses of a scientific study before it is conducted. Clinical trial registration is similar, although it may not require the registration of a study's analysis protocol. Finally, registered reports include the peer review and in principle acceptance of a study protocol prior to data collection.
Outcome switching is the practice of changing the primary or secondary outcomes of a clinical trial after its initiation. An outcome is the goal of the clinical trial, such as survival after five years for a cancer treatment. Outcome switching can lead to bias and undermine the reliability of the trial, for instance when outcomes are switched after researchers already have access to trial data. That way, researchers can cherry-pick an outcome that is statistically significant.