Hierarchy of evidence

A hierarchy of evidence, comprising levels of evidence (LOEs), also called evidence levels (ELs), is a heuristic used to rank the relative strength of results obtained from experimental research, especially medical research. There is broad agreement on the relative strength of large-scale epidemiological studies. More than 80 different hierarchies have been proposed for assessing medical evidence. [1] The design of the study (such as a case report for an individual patient or a blinded randomized controlled trial) and the endpoints measured (such as survival or quality of life) affect the strength of the evidence. In clinical research, the best evidence for treatment efficacy comes mainly from meta-analyses of randomized controlled trials (RCTs). [2] [3] For the study of side effects, systematic reviews of completed, high-quality randomized controlled trials – such as those published by the Cochrane Collaboration – rank the same as systematic reviews of completed, high-quality observational studies. [4] Evidence hierarchies are often applied in evidence-based practices and are integral to evidence-based medicine (EBM).

Definition

In 2014, Jacob Stegenga defined a hierarchy of evidence as a "rank-ordering of kinds of methods according to the potential for that method to suffer from systematic bias". At the top of the hierarchy is the method with the greatest freedom from systematic bias, or the best internal validity, relative to the tested medical intervention's hypothesized efficacy. [5] :313 In 1997, Greenhalgh suggested it was "the relative weight carried by the different types of primary study when making decisions about clinical interventions". [6]

The National Cancer Institute defines levels of evidence as "a ranking system used to describe the strength of the results measured in a clinical trial or research study. The design of the study [...] and the endpoints measured [...] affect the strength of the evidence." [7]

Examples

[Figure: Example hierarchies of evidence in medicine – Canadian Association of Pharmacy in Oncology. [9]]

A large number of hierarchies of evidence have been proposed. Similar protocols for evaluating research quality are still in development. So far, the available protocols pay relatively little attention to whether outcome research is relevant to efficacy (the outcome of a treatment performed under ideal conditions) or to effectiveness (the outcome of the treatment performed under ordinary, expectable conditions).[ citation needed ]

GRADE

The GRADE approach (Grading of Recommendations Assessment, Development and Evaluation) is a method of assessing the certainty in evidence (also known as quality of evidence or confidence in effect estimates) and the strength of recommendations. [10] GRADE began in 2000 as a collaboration of methodologists, guideline developers, biostatisticians, clinicians, public health scientists and other interested members.[ citation needed ]

Over 100 organizations (including the World Health Organization, the UK National Institute for Health and Care Excellence (NICE), the Canadian Task Force on Preventive Health Care, and the Colombian Ministry of Health, among others) have endorsed and/or are using GRADE to evaluate the quality of evidence and the strength of health care recommendations. (See examples of clinical practice guidelines using GRADE online). [11] [12]

GRADE rates the quality of evidence as follows: [13] [14]

High – There is considerable confidence that the true effect lies close to that of the estimated effect.
Moderate – There is moderate confidence in the estimated effect: the true effect is likely to be close to the estimated effect, but there is a possibility that it is substantially different.
Low – There is limited confidence in the estimated effect: the true effect might be substantially different from the estimated effect.
Very low – There is very little confidence in the estimated effect: the true effect is likely to be substantially different from the estimated effect.

Guyatt and Sackett

In 1995, Guyatt and Sackett published the first such hierarchy. [15]

Greenhalgh put the different types of primary study in the following order: [6]

  1. Systematic reviews and meta-analyses of "RCTs with definitive results".
  2. RCTs with definitive results (confidence intervals that do not overlap the threshold for a clinically significant effect)
  3. RCTs with non-definitive results (a point estimate that suggests a clinically significant effect but with confidence intervals overlapping the threshold for this effect)
  4. Cohort studies
  5. Case–control studies
  6. Cross-sectional surveys
  7. Case reports
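As an illustrative sketch (not part of the source), the ordering above can be encoded as a ranked list so that study designs can be compared by their position in the hierarchy; all names and functions here are hypothetical:

```python
# Greenhalgh's ordering of primary study types, strongest first.
# Lower rank number = stronger evidence (1 = strongest).
GREENHALGH_HIERARCHY = [
    "systematic review / meta-analysis of RCTs",
    "RCT with definitive results",
    "RCT with non-definitive results",
    "cohort study",
    "case-control study",
    "cross-sectional survey",
    "case report",
]

def evidence_rank(design: str) -> int:
    """Return the 1-based rank of a study design (1 = strongest)."""
    return GREENHALGH_HIERARCHY.index(design) + 1

def stronger(design_a: str, design_b: str) -> str:
    """Return whichever of two designs sits higher in the hierarchy."""
    return min(design_a, design_b, key=evidence_rank)

print(stronger("cohort study", "case report"))  # → cohort study
```

This captures only the ordinal structure of the heuristic; as the criticism section notes, real appraisal also depends on study quality, relevance, and context, which a simple ranking cannot express.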

Saunders et al.

A protocol suggested by Saunders et al. assigns research reports to six categories on the basis of research design, theoretical background, evidence of possible harm, and general acceptance. To be classified under this protocol, there must be descriptive publications, including a manual or similar description of the intervention. This protocol does not consider the nature of any comparison group, the effect of confounding variables, the nature of the statistical analysis, or a number of other criteria. The categories are: [16]

  1. Well-supported, efficacious treatments: two or more randomized controlled outcome studies compare the target treatment to an appropriate alternative treatment and show a significant advantage for the target treatment.
  2. Supported and probably efficacious treatments: supported by positive outcomes of nonrandomized designs with some form of control, which may involve a non-treatment group.
  3. Supported and acceptable treatments: supported by one controlled or uncontrolled study, by a series of single-subject studies, or by work with a different population than the one of interest.
  4. Promising and acceptable treatments: no support except general acceptance and clinical anecdotal literature; any evidence of possible harm excludes treatments from this category.
  5. Innovative and novel treatments: not thought to be harmful, but not widely used or discussed in the literature.
  6. Concerning treatments: treatments that have the possibility of doing harm, as well as having unknown or inappropriate theoretical foundations.

Khan et al.

A protocol for evaluation of research quality was suggested by a report from the Centre for Reviews and Dissemination, prepared by Khan et al. and intended as a general method for assessing both medical and psychosocial interventions. While strongly encouraging the use of randomized designs, this protocol noted that such designs were useful only if they met demanding criteria, such as true randomization and concealment of the assigned treatment group from the client and from others, including the individuals assessing the outcome. The Khan et al. protocol emphasized the need to make comparisons on the basis of "intention to treat" in order to avoid problems related to greater attrition in one group. The Khan et al. protocol also presented demanding criteria for nonrandomized studies, including matching of groups on potential confounding variables and adequate descriptions of groups and treatments at every stage, and concealment of treatment choice from persons assessing the outcomes. This protocol did not provide a classification of levels of evidence, but included or excluded treatments from classification as evidence-based depending on whether the research met the stated standards. [17]

U.S. National Registry of Evidence-Based Practices and Programs

An assessment protocol has been developed by the U.S. National Registry of Evidence-Based Practices and Programs (NREPP). Evaluation under this protocol occurs only if an intervention has had one or more positive, statistically significant (p < .05) outcomes reported, if these have been published in a peer-reviewed journal or an evaluation report, and if documentation such as training materials has been made available. The NREPP evaluation, which assigns quality ratings from 0 to 4 on certain criteria, examines the reliability and validity of the outcome measures used in the research, evidence of intervention fidelity (predictable use of the treatment in the same way every time), levels of missing data and attrition, potential confounding variables, and the appropriateness of the statistical handling, including sample size. [18]

History

Canada

The term was first used in a 1979 report by the "Canadian Task Force on the Periodic Health Examination" (CTF) to "grade the effectiveness of an intervention according to the quality of evidence obtained". [19] :1195 The task force used three levels, subdividing level II:

  I: Evidence from at least one properly randomized controlled trial.
  II-1: Evidence from well-designed cohort or case-control studies.
  II-2: Evidence from comparisons between times or places with or without the intervention, or dramatic results from uncontrolled studies.
  III: Opinions of respected authorities, based on clinical experience, descriptive studies, or reports of expert committees.

The CTF graded their recommendations on a 5-point A–E scale: [19] :1195

  A: Good level of evidence for the recommendation to consider a condition.
  B: Fair level of evidence for the recommendation to consider a condition.
  C: Poor level of evidence for the recommendation to consider a condition.
  D: Fair level of evidence for the recommendation to exclude the condition.
  E: Good level of evidence for the recommendation to exclude the condition from consideration.

The CTF updated their report in 1984, [20] in 1986 [21] and in 1987. [22]

United States

In 1988, the United States Preventive Services Task Force (USPSTF) published its own guidelines, based on the CTF's, using the same three levels and further subdividing level II. [23]

Over the years many more grading systems have been described. [24]

United Kingdom

In September 2000, the Oxford (UK) Centre for Evidence-Based Medicine (CEBM) published its Levels of Evidence guidelines for claims about prognosis, diagnosis, treatment benefits, treatment harms, and screening. They addressed not only therapy and prevention, but also diagnostic tests, prognostic markers, and harm. The original CEBM Levels were first released for Evidence-Based On Call to make the process of finding evidence feasible and its results explicit. A revised version was published in 2009. [25] [26]

In 2011, an international team redesigned the Oxford CEBM Levels to make them more understandable and to take into account recent developments in evidence-ranking schemes. The Levels have been used by patients and clinicians, and to develop clinical guidelines, including recommendations for the optimal use of phototherapy and topical therapy in psoriasis [27] and guidelines for the use of the BCLC staging system for diagnosing and monitoring hepatocellular carcinoma in Canada. [28]

Global

In 2007, the World Cancer Research Fund grading system described four levels: convincing, probable, possible, and insufficient evidence. [29] All Global Burden of Disease Studies have used it to evaluate the epidemiologic evidence supporting causal relationships. [30]

Proponents

In 1995 Wilson et al., [31] in 1996 Hadorn et al. [32] and in 1996 Atkins et al. [33] described and defended various types of grading systems.

Criticism

In 2011, a systematic review of the critical literature found three kinds of criticism: procedural aspects of EBM (especially from Cartwright, Worrall and Howick), [34] greater than expected fallibility of EBM (Ioannidis and others), and EBM being incomplete as a philosophy of science (Ashcroft and others). [35] Rawlins [36] and Bluhm note that EBM limits the ability of research results to inform the care of individual patients, and that understanding the causes of diseases requires both population-level and laboratory research. The EBM hierarchy of evidence does not take into account research on the safety and efficacy of medical interventions. Bluhm argues that RCTs should be designed "to elucidate within-group variability, which can only be done if the hierarchy of evidence is replaced by a network that takes into account the relationship between epidemiological and laboratory research". [37]

The hierarchy of evidence produced by a study design has been questioned because guidelines have "failed to properly define key terms, weight the merits of certain non-randomized controlled trials, and employ a comprehensive list of study design limitations". [38]

Stegenga has criticized specifically that meta-analyses are placed at the top of such hierarchies. [39] The assumption that RCTs ought to be necessarily near the top of such hierarchies has been criticized by Worrall [40] and Cartwright. [41]

In 2005, Ross Upshur said that EBM claims to be a normative guide to being a better physician, but is not a philosophical doctrine. [42]

Borgerson in 2009 wrote that the justifications for the hierarchy levels are not absolute and do not epistemically justify them, but that "medical researchers should pay closer attention to social mechanisms for managing pervasive biases". [43] La Caze noted that basic science resides on the lower tiers of EBM though it "plays a role in specifying experiments, but also analysing and interpreting the data." [44]

Concato argued in 2004 that it allowed RCTs too much authority and that not all research questions can be answered through RCTs, whether for practical or for ethical reasons. Even when evidence is available from high-quality RCTs, evidence from other study types may still be relevant. [45] Stegenga opined that evidence assessment schemes are unreasonably constraining and less informative than other schemes now available. [5]

In his 2015 PhD thesis on the various hierarchies of evidence in medicine, Christopher J. Blunt concludes that although modest interpretations such as La Caze's model, conditional hierarchies like GRADE, and heuristic approaches as defended by Howick et al. all survive previous philosophical criticism, modest interpretations are so weak that they are unhelpful for clinical practice. For example, "GRADE and similar conditional models omit clinically relevant information, such as information about variation in treatments' effects and the causes of different responses to therapy; and that heuristic approaches lack the necessary empirical support". Blunt further concludes that "hierarchies are a poor basis for the application of evidence in clinical practice", since the core assumption behind hierarchies of evidence – that "information about average treatment effects backed by high-quality evidence can justify strong recommendations" – is untenable, and hence evidence from individual studies should be appraised in isolation. [46]


References

  1. Siegfried T (2017-11-13). "Philosophical critique exposes flaws in medical evidence hierarchies". Science News. Retrieved 2018-05-16.
  2. Shafee, Thomas; Masukume, Gwinyai; Kipersztok, Lisa; Das, Diptanshu; Häggström, Mikael; Heilman, James (28 August 2017). "Evolution of Wikipedia's medical content: past, present and future". Journal of Epidemiology and Community Health. 71 (11): jech–2016–208601. doi:10.1136/jech-2016-208601. ISSN   0143-005X. PMC   5847101 . PMID   28847845.
  3. Straus SE, Richardson WS, Glasziou P, Haynes RB (2005). Evidence-based Medicine: How to Practice and Teach EBM (3rd ed.). Edinburgh: Churchill Livingstone. pp. 102–105. ISBN   978-0443074448.
  4. Golder, Su; Loke, Yoon K.; Bland, Martin (2011-05-03). Vandenbroucke, Jan P. (ed.). "Meta-analyses of Adverse Effects Data Derived from Randomised Controlled Trials as Compared to Observational Studies: Methodological Overview". PLOS Medicine. Public Library of Science (PLoS). 8 (5): e1001026. doi: 10.1371/journal.pmed.1001026 . ISSN   1549-1676. PMC   3086872 . PMID   21559325.
  5. Stegenga J (October 2014). "Down with the hierarchies". Topoi. 33 (2): 313–322. doi:10.1007/s11245-013-9189-4. S2CID   109929514.
  6. Greenhalgh T (July 1997). "How to read a paper. Getting your bearings (deciding what the paper is about)". BMJ. 315 (7102): 243–246. doi:10.1136/bmj.315.7102.243. PMC   2127173 . PMID   9253275.
  7. National Cancer Institute (n.d.). "NCI Dictionary of Cancer Terms: Levels of evidence". US DHHS-National Institutes of Health. Retrieved 8 December 2014.
  8. "Evidence-Based Decision Making: Introduction and Formulating Good Clinical Questions | Continuing Education Course | dentalcare.com Course Pages | DentalCare.com". www.dentalcare.com. Archived from the original on 4 Mar 2016. Retrieved 2015-09-03.
  9. "The Journey of Research - Levels of Evidence | CAPhO". www.capho.org. Archived from the original on 21 February 2016. Retrieved 2015-09-03.
  10. Schünemann, HJ; Best, D; Vist, G; Oxman, AD (2003). "Letters, numbers, symbols, and words: How best to communicate grades of evidence and recommendations?". Canadian Medical Association Journal. 169 (7): 677–680.
  11. "GRADEpro". Gradepro.org. Retrieved 16 August 2019.
  12. "Request Rejected". Archived from the original on 2016-02-25.
  13. Balshem, H; Helfand, M; Schünemann, HJ; Oxman, AD; Kunz, R; Brozek, J; Vist, GE; Falck-Ytter, Y; Meerpohl, J; Norris, S; Guyatt, GH (April 2011). "GRADE guidelines 3: rating the quality of evidence – introduction". Journal of Clinical Epidemiology. 64 (4): 401–406. doi: 10.1016/j.jclinepi.2010.07.015 . PMID   21208779.
  14. Reed Siemieniuk and Gordon Guyatt. "What is GRADE?". BMJ Best Practice. Retrieved 2020-07-02.
  15. Guyatt GH, Sackett DL, Sinclair JC, Hayward R, Cook DJ, Cook RJ (December 1995). "Users' guides to the medical literature. IX. A method for grading health care recommendations. Evidence-Based Medicine Working Group". JAMA. 274 (22): 1800–1804. doi:10.1001/jama.1995.03530220066035. PMID   7500513.
  16. Saunders, B., Berliner, L., & Hanson, R. (2004). Child physical and sexual abuse: Guidelines for treatments. Retrieved September 15, 2006, from http://www.musc.edu/cvc.guidel.htm
  17. Khan, K.S., et al. (2001). CRD Report 4. Stage II. Conducting the review. phase 5. Study quality assessment. York, UK: Centre for Reviews and Dissemination, University of York. Retrieved July 20, 2007 from http://www.york.ac.uk/inst/crd/pdf/crd_4ph5.pdf
  18. National Registry of Evidence-Based Practices and Programs (2007). NREPP Review Criteria. Retrieved March 10, 2008 from http://www.nrepp.samsha.gov/review-criteria.htm
  19. Canadian Task Force on the Periodic Health Examination (3 November 1979). "Task Force Report: The periodic health examination". Can Med Assoc J. 121 (9): 1193–1254. PMC   1704686 . PMID   115569.
  20. Canadian Task Force on the Periodic Health Examination (15 May 1984). "Task Force Report: The periodic health examination. 2. 1984 update". Can Med Assoc J. 130 (10): 1278–1285. PMC   1483525 . PMID   6722691.
  21. Canadian Task Force on the Periodic Health Examination (15 May 1986). "Task Force Report: The periodic health examination. 3. 1986 update". Can Med Assoc J. 134 (10): 721–729.
  22. Canadian Task Force on the Periodic Health Examination (1 April 1988). "Task Force Report: The periodic health examination. 2. 1987 update". Can Med Assoc J. 138 (7): 618–626. PMC   1267740 . PMID   3355931.
  23. U.S. Preventive Services Task Force (1989). Guide to clinical preventive services: report of the U.S. Preventive Services Task Force. Diane Publishing. pp. 24–. ISBN   978-1568062976. Appendix A
  24. Welsh, Judith (January 2010). "Levels of evidence and analyzing the literature". National Institutes of Health Library. Retrieved 9 September 2015.
  25. "Oxford Centre for Evidence-based Medicine – Levels of Evidence (March 2009)". Centre for Evidence-Based Medicine. 2009-06-11. Retrieved 25 March 2015.
  26. Burns et al. 2011.
  27. OCEBM Levels of Evidence Working Group (May 2016). "The Oxford Levels of Evidence 2".
  28. Paul, C.; Gallini, A.; Archier, E.; et al. (2012). "Evidence-Based Recommendations on Topical Treatment and Phototherapy of Psoriasis: Systematic Review and Expert Opinion of a Panel of Dermatologists". Journal of the European Academy of Dermatology and Venereology. 26 (Suppl 3): 1–10. doi:10.1111/j.1468-3083.2012.04518.x. PMID   22512675. S2CID   36103291.
  29. World Cancer Research Fund AICR. Food, Nutrition, and Physical Activity, and the Prevention of Cancer: A Global Perspective. American Institute for Cancer Research, Washington, DC; 2007
  30. Lim, Stephen S; Vos, Theo; Flaxman, Abraham D; Danaei, Goodarz; Shibuya, Kenji; Adair-Rohani, Heather; Almazroa, Mohammad A; Amann, Markus; Anderson, H Ross; Andrews, Kathryn G; Aryee, Martin; Atkinson, Charles; Bacchus, Loraine J; Bahalim, Adil N; Balakrishnan, Kalpana; Balmes, John; Barker-Collo, Suzanne; Baxter, Amanda; Bell, Michelle L; Blore, Jed D; Blyth, Fiona; Bonner, Carissa; Borges, Guilherme; Bourne, Rupert; Boussinesq, Michel; Brauer, Michael; Brooks, Peter; Bruce, Nigel G; Brunekreef, Bert; et al. (2012). "A comparative risk assessment of burden of disease and injury attributable to 67 risk factors and risk factor clusters in 21 regions, 1990–2010: A systematic analysis for the Global Burden of Disease Study 2010". The Lancet. 380 (9859): 2224–2260. doi:10.1016/S0140-6736(12)61766-8. PMC   4156511 . PMID   23245609.
  31. Wilson, Mark C (1995). "Users' guides to the medical literature. VIII. How to use clinical practice guidelines. B. what are the recommendations and will they help you in caring for your patients? The evidence-based medicine working group". JAMA. 274 (20): 1630–1632. doi:10.1001/jama.1995.03530200066040. PMID   7474251. S2CID   8593521.
  32. Hadorn, David C; Baker, David; Hodges, James S; Hicks, Nicholas (1996). "Rating the quality of evidence for clinical practice guidelines". Journal of Clinical Epidemiology. 49 (7): 749–754. doi:10.1016/0895-4356(96)00019-4. PMID   8691224.
  33. Atkins, D; Best, D; Briss, P. A; Eccles, M; Falck-Ytter, Y; Flottorp, S; Guyatt, G. H; Harbour, R. T; Haugh, M. C; Henry, D; Hill, S; Jaeschke, R; Leng, G; Liberati, A; Magrini, N; Mason, J; Middleton, P; Mrukowicz, J; O'Connell, D; Oxman, A. D; Phillips, B; Schünemann, H. J; Edejer, T; Varonen, H; Vist, G. E; Williams Jr, J. W; Zaza, S; GRADE Working Group (2004). "Grading quality of evidence and strength of recommendations". BMJ. 328 (7454): 1490. doi:10.1136/bmj.328.7454.1490. PMC   428525 . PMID   15205295.
  34. Jeremy Howick (2011-02-23). The Philosophy of Evidence-based Medicine. John Wiley & Sons. ISBN   978-1-4443-4266-6.
  35. Solomon M (October 2011). "Just a paradigm: evidence-based medicine in epistemological context". European Journal for Philosophy of Science. Springer. 1 (3): 451–466. doi:10.1007/s13194-011-0034-6. S2CID   170193949.
  36. Rawlins M (December 2008). "De Testimonio: on the evidence for decisions about the use of therapeutic interventions". Clinical Medicine. Royal College of Physicians. 8 (6): 579–588. doi:10.7861/clinmedicine.8-6-579. PMC   4954394 . PMID   19149278.
  37. Bluhm R (Autumn 2005). "From hierarchy to network: a richer view of evidence for evidence-based medicine". Perspectives in Biology and Medicine. Johns Hopkins University Press. 48 (4): 535–547. doi:10.1353/pbm.2005.0082. PMID   16227665. S2CID   1156284.
  38. Gugiu, PC; Westine, CD; Coryn, CL; Hobson, KA (3 April 2012). "An application of a new evidence grading system to research on the chronic care model". Eval Health Prof. 36 (1): 3–43. CiteSeerX   10.1.1.1016.5990 . doi:10.1177/0163278712436968. PMID   22473325. S2CID   206452088.
  39. Stegenga, J (2011). "Is meta-analysis the platinum standard of evidence?". Stud Hist Philos Biol Biomed Sci. 42 (4): 497–507. doi:10.1016/j.shpsc.2011.07.003. PMID   22035723.
  40. Worrall, John (2002). "What Evidence in Evidence‐Based Medicine?". Philosophy of Science. 69: S316–S330. doi:10.1086/341855. S2CID   55078796.
  41. Cartwright, Nancy (2007). "Are RCTs the Gold Standard?" (PDF). BioSocieties. 2: 11–20. doi:10.1017/s1745855207005029. S2CID   145592046.
  42. Upshur RE (Autumn 2005). "Looking for rules in a world of exceptions: reflections on evidence-based practice". Perspectives in Biology and Medicine. Johns Hopkins University Press. 48 (4): 477–489. doi:10.1353/pbm.2005.0098. PMID   16227661. S2CID   36678226.
  43. Borgerson K (Spring 2009). "Valuing evidence: bias and the evidence hierarchy of evidence-based medicine" (PDF). Perspectives in Biology and Medicine. Johns Hopkins University Press. 52 (2): 218–233. doi:10.1353/pbm.0.0086. PMID   19395821. S2CID   38324417.
  44. La Caze A (January 2011). "The role of basic science in evidence-based medicine". Biology & Philosophy . Springer. 26 (1): 81–98. doi:10.1007/s10539-010-9231-5. S2CID   189902678.
  45. Concato J (July 2004). "Observational versus experimental studies: what's the evidence for a hierarchy?". NeuroRx. Springer. 1 (3): 341–347. doi:10.1602/neurorx.1.3.341. PMC   534936 . PMID   15717036.
  46. Blunt, Christopher J (September 2015). Hierarchies of Evidence in Evidence-Based Medicine (PhD thesis). London School of Economics and Political Science.

Works cited

This article incorporates public domain material from Dictionary of Cancer Terms. U.S. National Cancer Institute.