Evidence-based policy


Evidence-based policy (also known as evidence-based governance) is a concept in public policy that advocates for policy decisions to be grounded in, or influenced by, rigorously established objective evidence. This concept stands in stark contrast to policymaking predicated on ideology, 'common sense,' anecdotes, or personal intuition. The methodology employed in evidence-based policy often includes comprehensive research methods such as randomized controlled trials (RCTs). [1] Good data, analytical skills, and political support for the use of scientific information are typically seen as the crucial elements of an evidence-based approach. [2]


An individual or organisation is justified in claiming that a specific policy is evidence-based if, and only if, three conditions are met. First, the individual or organisation possesses comparative evidence about the effects of the specific policy in comparison to the effects of at least one alternative policy. Second, the specific policy is supported by this evidence according to at least one of the individual's or organisation's preferences in the given policy area. Third, the individual or organisation can provide a sound account for this support by explaining the evidence and preferences that lay the foundation for the claim. [3]

The effectiveness of evidence-based policy hinges upon the presence of quality data, proficient analytical skills, and political backing for the utilization of scientific information. [4]

While proponents of evidence-based policy have identified certain types of evidence, such as scientifically rigorous evaluation studies like randomized controlled trials, as optimal for policymakers to consider, others argue that not all policy-relevant areas are best served by quantitative research. This discrepancy has sparked debates about the types of evidence that should be utilized. For example, policies concerning human rights, public acceptability, or social justice may necessitate different forms of evidence than what randomized trials provide. Furthermore, evaluating policy often demands moral philosophical reasoning in addition to the assessment of intervention effects, which randomized trials primarily aim to provide. [5]

In response to such complexities, some policy scholars have moved away from using the term evidence-based policy, adopting alternatives like evidence-informed. This semantic shift allows for continued reflection on the need to elevate the rigor and quality of evidence used, while sidestepping some of the limitations or reductionist notions occasionally associated with the term evidence-based. Despite these nuances, the phrase "evidence-based policy" is still widely employed, generally signifying a desire for evidence to be used in a rigorous, high-quality, and unbiased manner, while avoiding its misuse for political ends. [6]

History

The shift towards contemporary evidence-based policy is deeply rooted in the broader movement towards evidence-based practice. This shift was largely influenced by the emergence of evidence-based medicine during the 1980s. [1] However, the term 'Evidence-based policy' was not adopted in the medical field until the 1990s. [7] In social policy, the term was not employed until the early 2000s. [8]

An early instance of evidence-based policy appeared in tariff-making in Australia, where legislation required that tariffs be informed by a public report issued by the Tariff Board covering the tariff's industrial and economic implications. [9]

History of evidence-based medicine

Evidence-based medicine (EBM) is a term first introduced by Gordon Guyatt. [10] Nevertheless, examples of EBM can be traced back to the early 1900s, and some contend that the earliest instance dates to the 11th century, when the Ben Cao Tu Jing, a Song Dynasty pharmacological text, suggested a method to evaluate the efficacy of ginseng. [11]

Many scholars regard evidence-based policy as an evolution from "evidence-based medicine", where research findings are utilized to support clinical decisions. In this model, evidence is collected through randomized controlled trials (RCTs) which compare a treatment group with a placebo group to measure outcomes. [12]

While the earliest published RCTs in medicine date back to the 1940s and 1950s, [1] the term 'evidence-based medicine' did not appear in published medical research until 1993. [7] In the same year, the Cochrane Collaboration was established in the UK. This organization maintains up-to-date systematic reviews of RCTs and publishes "Cochrane reviews", which synthesize primary research in human health and health policy. [13]

Use of the term EBM has increased significantly since the 2000s, and its influence within the field of medicine has expanded substantially. [14]

History of evidence-based policy making

The application of Randomized Controlled Trials in social policy was notably later than in the medical field. Although elements of an evidence-based approach can be traced back as far as the fourteenth century, it was popularized more recently during the tenure of the Blair Government in the United Kingdom. [9] This government expressed a desire to shift away from ideological decision-making in policy formulation. [9] For instance, a 1999 UK Government white paper, Modernising Government, emphasized the need for policies that "really deal with problems, are forward-looking and shaped by evidence rather than a response to short-term pressures; [and] tackle causes not symptoms." [15]

This shift in policy formulation led to an upswing in research and activism advocating for more evidence-based policy-making. As a result, the Campbell Collaboration was established in 1999 as a sibling organization to the Cochrane Collaboration. [12] [16] The Campbell Collaboration undertakes reviews of the most robust evidence, analyzing the impacts of social and educational policies and practices.

The Economic and Social Research Council (ESRC) furthered the drive for more evidence-based policymaking by granting £1.3 million to the Evidence Network in 1999. Similar to both the Campbell and Cochrane Collaborations, the Evidence Network functions as a hub for evidence-based policy and practice. [12] More recently, the Alliance for Useful Evidence was established, funded by the ESRC, Big Lottery, and Nesta, to advocate for the use of evidence in social policy and practice. The Alliance, operating throughout the UK, promotes the use of high-quality evidence to inform decisions on strategy, policy, and practice through advocacy, research publication, idea sharing, advice, event hosting, and training.

The application of evidence-based policy varies among practitioners. For instance, Michael Kremer and Rachel Glennerster, curious about strategies to enhance students' test scores, conducted randomized controlled trials in Kenya. They experimented with new textbooks and flip charts, and smaller class sizes, but they discovered that the only intervention that boosted school attendance was treating intestinal worms in children. [17] Their findings led to the establishment of the Deworm the World Initiative, a charity highly rated by GiveWell for its cost-effectiveness. [17]

Recent discussions have emerged about the potential conflicts of interest in evidence-based decision-making applied to public policy development. In their analysis of vocational education in prisons run by the California Department of Corrections, researchers Andrew J. Dick, William Rich, and Tony Waters found that political factors inevitably influenced "evidence-based decisions," which were ostensibly neutral and technocratic. They argue that when policymakers, who have a vested interest in validating previous political judgments, fund evidence, there is a risk of corruption, leading to policy-based evidence making. [18]

Methodology

Evidence-based policy employs various methodologies, which commonly share several characteristics.

The methodology used in evidence-based policy aligns with the cost-benefit framework. It is designed to estimate a net payoff if the policy is implemented. Due to the difficulty in quantifying some effects and outcomes of the policy, the focus is primarily on whether benefits will outweigh costs, rather than assigning specific values. [9]

Types of evidence in evidence-based policy making

Various types of data can be considered evidence in evidence-based policy making. [19] The scientific method organizes this data into tests to validate or challenge specific beliefs or hypotheses. The outcomes of various tests may hold varying degrees of credibility within the scientific community, influenced by factors such as the type of blinding (single- vs. double-blind), sample size, and replication. Advocates for evidence-based policy strive to align societal needs (as framed within Maslow's hierarchy of needs) with outcomes that the scientific method indicates as most probable. [20]
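The role of sample size in the credibility of trial results can be illustrated with a small simulation: a hypothetical RCT with a binary outcome, in which larger samples produce effect estimates closer to the true effect. All parameters below (base rate, effect size, sample sizes) are illustrative assumptions, not drawn from any actual study.

```python
import random

def simulate_rct(effect=0.1, n=5000, seed=0):
    """Simulate a randomized controlled trial with a binary outcome.

    The control group succeeds with a base rate of 0.5; the treatment
    group's rate is raised by `effect`. Returns the estimated treatment
    effect (difference in group means).
    """
    rng = random.Random(seed)
    treatment = [1 if rng.random() < 0.5 + effect else 0 for _ in range(n)]
    control = [1 if rng.random() < 0.5 else 0 for _ in range(n)]
    return sum(treatment) / n - sum(control) / n

# With a very large sample, the estimate sits close to the true effect;
# a small sample can land far from it by chance alone.
small_trial = simulate_rct(n=100)
large_trial = simulate_rct(n=100_000)
```

Replication corresponds to re-running the simulation with different seeds and checking that the estimates cluster around the same value.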

Quantitative evidence

Quantitative evidence for policymaking includes numerical data from peer-reviewed journals, public surveillance systems, or individual programs. Quantitative data can also be collected by the government or policymakers themselves through surveys. [19] Both evidence-based medicine (EBM) and evidence-based public health policy constructions extensively utilize quantitative evidence.

Qualitative evidence

Qualitative evidence comprises non-numerical data gathered through methods such as observations, interviews, or focus groups. It is often used to craft compelling narratives to influence decision-makers. [19] The distinction between qualitative and quantitative data does not imply a hierarchy; both types of evidence can be effective in different contexts. Policymaking often involves a combination of qualitative and quantitative evidence. [20]


Scholarly communication in policy-making

Example of the policy cycle concept

Academics provide input to policy through various channels, beyond producing research on the issues that policies address.

Evidence-based Policy Initiatives by Non-governmental Organizations

Overseas Development Institute

The Overseas Development Institute (ODI) asserts that research-based evidence can significantly influence policies that have profound impacts on lives. Illustrative examples mentioned in the UK's Department for International Development's (DFID) new research strategy include a 22% reduction in neonatal mortality in Ghana, achieved by encouraging women to initiate breastfeeding within one hour of childbirth, and a 43% decrease in mortality among HIV-positive children due to the use of a widely accessible antibiotic.

Following numerous policy initiatives, the ODI conducted an evaluation of their evidence-based policy efforts. Policy development processes are complex and seldom linear or logical, so the direct application of presented information by policy-makers is an unlikely scenario. The analysis identified several factors contributing to policy decisions that are only weakly informed by research-based evidence: information gaps, secrecy, the need for rapid responses set against slow data availability, political expediency (what is popular), and a lack of interest among policy-makers in making policies more scientifically grounded. When a discrepancy is identified between the scientific process and the political process, those seeking to reduce this gap face a choice: either encourage politicians to adopt more scientific methods or prompt scientists to employ more political strategies.

The Overseas Development Institute (ODI) suggested that, in the face of limited progress in evidence-based policy, individuals and organizations possessing relevant data should leverage the emotional appeal and narrative power typically associated with politics and advertising to influence decision-makers. Instead of relying solely on tools like cost–benefit analysis and logical frameworks, [28] the ODI recommended identifying key players, crafting compelling narratives, and simplifying complex research data into clear, persuasive stories. Rather than advocating for systemic changes to promote evidence-based policy, the ODI encouraged data holders to actively engage in the political process.

Furthermore, the ODI posited that transforming a person who merely 'finds' data into someone who actively 'uses' data within our current system necessitates a fundamental shift towards policy engagement over academic achievement. This shift implies greater involvement with the policy community, the development of a research agenda centered on policy issues instead of purely academic interests, the acquisition of new skills or the formation of multidisciplinary teams, the establishment of new internal systems and incentives, increased investment in communications, the production of a different range of outputs, and enhanced collaboration within partnerships and networks.

The Future Health Systems consortium, based on research undertaken in six countries across Asia and Africa, has identified several key strategies to enhance the incorporation of evidence into policy-making. [29] These strategies include enhancing the technical capacity of policy-makers; refining the presentation of research findings; leveraging social networks; and establishing forums to facilitate the connection between evidence and policy outcomes. [30] [31]

The Pew Charitable Trusts

The Pew Charitable Trusts is a non-governmental organization dedicated to using data, science, and facts to serve the public good. [32] One of its initiatives, Results First, collaborates with various US states to promote the use of evidence-based policymaking in the development of their laws. [33] The initiative has created a framework that serves as an example of how to implement evidence-based policy.

Pew's 5 key components of evidence-based policy are: [32]

  1. Program Assessment: This involves systematic reviews of the available evidence on the effectiveness of public programs, the development of a comprehensive inventory of funded programs, categorization of these programs by their proven effectiveness, and identification of their potential return on investment.
  2. Budget Development: This process incorporates the evidence of program effectiveness into budget and policy decisions, prioritizing funding for programs that deliver a high return on investment. It involves integrating program performance information into the budget development process, presenting information to policymakers in user-friendly formats, including relevant studies in budget hearings and committee meetings, establishing incentives for implementing evidence-based programs and practices, and building performance requirements into grants and contracts.
  3. Implementation Oversight: This ensures that programs are effectively delivered and remain faithful to their intended design. Key aspects include establishing quality standards for program implementation, building and maintaining capacity for ongoing quality improvement and monitoring of fidelity to program design, balancing program fidelity requirements with local needs, and conducting data-driven reviews to improve program performance.
  4. Outcome Monitoring: This involves routinely measuring and reporting outcome data to determine whether programs are achieving their desired results. It includes developing meaningful outcome measures for programs, agencies, and the community, conducting regular audits of systems for collecting and reporting performance data, and regularly reporting performance data to policymakers.
  5. Targeted Evaluation: This process involves conducting rigorous evaluations of new and untested programs to ensure they warrant continued funding. This includes leveraging available resources to conduct evaluations, targeting evaluations to high-priority programs, making better use of administrative data for program evaluations, requiring evaluations as a condition for continued funding for new initiatives, and developing a centralized repository for program evaluations.
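The first component above (inventorying programs, categorizing them by proven effectiveness, and identifying return on investment) can be sketched as a simple ranking exercise. The program names, evidence ratings, and ROI figures below are entirely hypothetical, used only to show the shape of such an inventory:

```python
# Hypothetical program inventory: each entry records an evidence rating
# and an estimated return on investment (benefit per dollar spent).
programs = [
    {"name": "Program A", "evidence": "proven", "roi": 3.2},
    {"name": "Program B", "evidence": "promising", "roi": 1.4},
    {"name": "Program C", "evidence": "no effect", "roi": 0.6},
]

def prioritize(inventory, acceptable=("proven", "promising")):
    """Keep programs with sufficient evidence, ranked by estimated ROI."""
    eligible = [p for p in inventory if p["evidence"] in acceptable]
    return sorted(eligible, key=lambda p: p["roi"], reverse=True)

ranked = prioritize(programs)  # Program C is excluded; A outranks B
```

In practice the categories and ROI estimates would come from the systematic evidence reviews the framework describes, not from a hand-written list.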

The Coalition for Evidence-Based Policy

The Coalition for Evidence-Based Policy was a nonprofit, nonpartisan organization, whose mission was to increase government effectiveness through the use of rigorous evidence about "what works." Since 2001, the Coalition worked with U.S. Congressional and Executive Branch officials and advanced evidence-based reforms in U.S. social programs, which have been enacted into law and policy. The Coalition claimed to have no affiliation with any programs or program models, and no financial interest in the policy ideas it supported, enabling it to serve as an independent, objective source of expertise to government officials on evidence-based policy. [34] [ unreliable source ]

Working with congressional and executive branch officials, the Coalition helped enact major new policy initiatives into law. [35]

Their website now says "The Coalition wound down its operations in the spring of 2015, and the Coalition's leadership and core elements of the group's work have been integrated into the Laura and John Arnold Foundation". [36] In 2003, the Coalition published a guide on evidence-based practices in education. [37]

Cost-benefit analysis in evidence-based policy

Cost-benefit analysis (CBA) is an economic tool used in evidence-based policy to assess the economic, social, and environmental impacts of policies. The aim is to guide policymakers toward decisions that increase societal welfare. [38]

The use of cost-benefit analysis in policy-making was first mandated by President Ronald Reagan's Executive Order 12291 in 1981. This order stated that administrative decisions should use sufficient information regarding the potential impacts of regulation. Maximizing the net benefits to society was a primary focus among the five general requirements of the order. [39]

Later presidents, including Bill Clinton and Barack Obama, modified but still emphasized the importance of cost-benefit analysis in their executive orders. For example, Clinton's Executive Order 12866 kept the need for cost-benefit analysis but also stressed the importance of flexibility, public involvement, and coordination among agencies. [40]

During Obama's administration, Executive Order 13563 further strengthened the role of cost-benefit analysis in regulatory review. It encouraged agencies to consider values that are hard or impossible to quantify, like equity, human dignity, and fairness. [41]

The use of cost-benefit analysis in these executive orders highlights its importance in evidence-based policy. By comparing the potential impacts of different policy options, cost-benefit analysis aids in making policy decisions that are based on empirical evidence and designed to maximize societal benefits.
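The comparison CBA performs can be sketched as a discounted net-benefit calculation. The two policy options, their yearly cash flows, and the 3% discount rate below are illustrative assumptions, not values prescribed by any of the executive orders discussed above:

```python
def net_present_value(benefits, costs, discount_rate=0.03):
    """Discounted net benefit of a policy over time.

    `benefits` and `costs` are per-year amounts; year 0 is undiscounted.
    """
    return sum(
        (b - c) / (1 + discount_rate) ** t
        for t, (b, c) in enumerate(zip(benefits, costs))
    )

# Two hypothetical policy options over a five-year horizon:
# A has a large up-front cost but higher yearly benefits; B is cheaper
# up front but yields less each year.
option_a = net_present_value(benefits=[0, 40, 40, 40, 40],
                             costs=[100, 5, 5, 5, 5])
option_b = net_present_value(benefits=[20, 20, 20, 20, 20],
                             costs=[60, 5, 5, 5, 5])
chosen = "A" if option_a > option_b else "B"
```

Real analyses extend this with monetized estimates of social and environmental effects, sensitivity analysis over the discount rate, and, per Executive Order 13563, qualitative treatment of values that resist quantification.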

Critiques

Evidence-based policy has faced several critiques. Paul Cairney, a professor of politics and public policy at the University of Stirling in Scotland, contends [42] that proponents of the approach often underestimate the complexity of policy-making and misconstrue how policy decisions are typically made. Nancy Cartwright and Jeremy Hardie [43] question the emphasis on randomized controlled trials (RCTs), arguing that evidence from RCTs is not always sufficient for making decisions. They suggest that applying experimental evidence to a policy context requires an understanding of the conditions present within the experimental setting and an assertion that these conditions also exist in the target environment of the proposed intervention. Additionally, the prioritization of RCTs has drawn the criticism that evidence-based policy focuses overly on narrowly defined 'interventions', which imply surgical action on one causal factor to influence its effect.

The concept of intervention within the evidence-based policy movement aligns with James Woodward's interventionist theory of causality. [44] However, policy-making also involves other types of decisions, such as institutional reforms and predictive actions. These other forms of evidence-based decision-making do not necessitate evidence of an invariant causal relationship under intervention. Hence, mechanistic evidence and observational studies are often adequate for implementing institutional reforms and actions that do not alter the causes of a causal claim. [45]

Furthermore, there have been reports [46] of frontline public servants, such as hospital managers, making decisions that detrimentally affect patient care to meet predetermined targets. This argument was presented by Professor Jerry Muller of the Catholic University of America in his book The Tyranny of Metrics. [47]


Related Research Articles

Evidence-based medicine (EBM) is "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients." The aim of EBM is to integrate the experience of the clinician, the values of the patient, and the best available scientific information to guide decision-making about clinical management. The term was originally used to describe an approach to teaching the practice of medicine and improving decisions by individual physicians about individual patients.

Public policy is an institutionalized proposal or a decided set of elements like laws, regulations, guidelines, and actions to solve or address relevant and real-world problems, guided by a conception and often implemented by programs. These policies govern and include various aspects of life such as education, health care, employment, finance, economics, transportation, and other areas of society. The implementation of public policy is known as public administration. Public policy can be considered to be the sum of a government's direct and indirect activities and has been conceptualized in a variety of ways.

Randomized controlled trial

A randomized controlled trial is a form of scientific experiment used to control factors not under direct experimental control. Examples of RCTs are clinical trials that compare the effects of drugs, surgical techniques, medical devices, diagnostic procedures or other medical treatments.

Program evaluation is a systematic method for collecting, analyzing, and using information to answer questions about projects, policies and programs, particularly about their effectiveness and efficiency.

Policy analysis or public policy analysis is a technique used in the public administration sub-field of political science to enable civil servants, nonprofit organizations, and others to examine and evaluate the available options to implement the goals of laws and elected officials. People who regularly use policy analysis skills and techniques on the job, particularly those who use it as a major part of their job duties are generally known by the title policy analyst. The process is also used in the administration of large organizations with complex policies. It has been defined as the process of "determining which of various policies will achieve a given set of goals in light of the relations between the policies and the goals."

Evidence-based practice is the idea that occupational practices ought to be based on scientific evidence. While seemingly obviously desirable, the proposal has been controversial, with some arguing that results may not specialize to individuals as well as traditional practices. Evidence-based practices have been gaining ground since the formal introduction of evidence-based medicine in 1992 and have spread to the allied health professions, education, management, law, public policy, architecture, and other fields. In light of studies showing problems in scientific research, there is also a movement to apply evidence-based practices in scientific research itself. Research into the evidence-based practice of science is called metascience.

A hierarchy of evidence, comprising levels of evidence (LOEs), that is, evidence levels (ELs), is a heuristic used to rank the relative strength of results obtained from experimental research, especially medical research. There is broad agreement on the relative strength of large-scale, epidemiological studies. More than 80 different hierarchies have been proposed for assessing medical evidence. The design of the study and the endpoints measured affect the strength of the evidence. In clinical research, the best evidence for treatment efficacy is mainly from meta-analyses of randomized controlled trials (RCTs). Systematic reviews of completed, high-quality randomized controlled trials – such as those published by the Cochrane Collaboration – rank the same as systematic review of completed high-quality observational studies in regard to the study of side effects. Evidence hierarchies are often applied in evidence-based practices and are integral to evidence-based medicine (EBM).

Impact evaluation assesses the changes that can be attributed to a particular intervention, such as a project, program or policy, both the intended ones, as well as ideally the unintended ones. In contrast to outcome monitoring, which examines whether targets have been achieved, impact evaluation is structured to answer the question: how would outcomes such as participants' well-being have changed if the intervention had not been undertaken? This involves counterfactual analysis, that is, "a comparison between what actually happened and what would have happened in the absence of the intervention." Impact evaluations seek to answer cause-and-effect questions. In other words, they look for the changes in outcome that are directly attributable to a program.

Outcomes research is a branch of public health research which studies the end results of the structure and processes of the health care system on the health and well-being of patients and populations. According to a 1996 medical outcomes and guidelines sourcebook, outcomes research includes health services research that focuses on identifying variations in medical procedures and associated health outcomes. Though listed as a synonym for the National Library of Medicine MeSH term "Outcome Assessment", outcomes research may refer to both health services research and healthcare outcomes assessment, which aims at health technology assessment, decision making, and policy analysis through systematic evaluation of quality of care, access, and effectiveness.

Patient participation is a trend that arose in answer to medical paternalism. Informed consent is a process where patients make decisions informed by the advice of medical professionals.

MEASURE Evaluation aims to strengthen the capacity of developing countries to gather, interpret, and use data to improve health. MEASURE Evaluation creates tools and approaches for rigorous evaluations, providing evidence to address health challenges, and strengthening health information systems so countries can make better decisions and sustain good health outcomes over time. MEASURE Evaluation is a cooperative agreement awarded by the U.S. Agency for International Development (USAID) to the Carolina Population Center at the University of North Carolina at Chapel Hill and five partner organizations: ICF International, John Snow Inc., Management Sciences for Health, Palladium, and Tulane University. This MEASURE Evaluation partnership provides technical leadership through collaboration at local, national, and global levels to build the sustainable capacity of developing nations to identify data needs, collect and analyze technically sound data, and use that data for health decision-making.

Evidence-based education

Evidence-based education (EBE) is the principle that education practices should be based on the best available scientific evidence, rather than tradition, personal judgement, or other influences. Evidence-based education is related to evidence-based teaching, evidence-based learning, and school effectiveness research. For example, research has shown that spaced repetition "leads to more robust memory formation than massed training does, which involves short or no intervals".

David M. Eddy is an American physician, mathematician, and healthcare analyst who has done seminal work in mathematical modeling of diseases, clinical practice guidelines, and evidence-based medicine. Four highlights of his career have been summarized by the Institute of Medicine of the National Academy of Sciences: "more than 25 years ago, Eddy wrote the seminal paper on the role of guidelines in medical decision-making, the first Markov model applied to clinical problems, and the original criteria for coverage decisions; he was the first to use and publish the term 'evidence-based'."

Advocacy evaluation

Advocacy evaluation, also called public policy advocacy design, monitoring, and evaluation, evaluates the progress or outcomes of advocacy, such as changes in public policy.

The discipline of evidence-based toxicology (EBT) strives to transparently, consistently, and objectively assess available scientific evidence in order to answer questions in toxicology, the study of the adverse effects of chemical, physical, or biological agents on living organisms and the environment, including the prevention and amelioration of such effects. EBT has the potential to address concerns in the toxicological community about the limitations of current approaches to assessing the state of the science. These include concerns related to transparency in decision making, synthesis of different types of evidence, and the assessment of bias and credibility. Evidence-based toxicology has its roots in the larger movement towards evidence-based practices.

School-based prevention programs are initiatives implemented into school settings that aim to increase children's academic success and reduce high-risk problem behaviors.

The Rhode Island Innovative Policy Lab (RIIPL) is an interdisciplinary collaboration between the Office of the Governor of Rhode Island and researchers at Brown University. The lab's mission is to help state agencies design evidence-based policies that improve the quality of life for Rhode Islanders.

A pragmatic clinical trial (PCT), sometimes called a practical clinical trial (PCT), is a clinical trial that focuses on correlation between treatments and outcomes in real-world health system practice rather than focusing on proving causative explanations for outcomes, which requires extensive deconfounding with inclusion and exclusion criteria so strict that they risk rendering the trial results irrelevant to much of real-world practice.

Public policy research is a multidisciplinary field that delves into the systematic examination and comprehensive analysis of policy matters and their far-reaching implications on society as a whole. The field explores diverse facets of public policy including political and administrative systems, institutions, actors, norms and traditions, communication and knowledge practices and the conception, execution and evaluation of policy decisions. Public policy research and policy analysis is conducted in multiple sectors including academic institutions, think tanks, consulting firms, not for profit organisations and government agencies. It is a major subfield of political science but is also a subfield in many other areas including public health and political economy. Research involves consideration of the interplay between various stakeholders, including policymakers, interest groups, and the general public, as well as an examination of the societal, economic, and political factors that shape policy decision-making processes. Public policy researchers explore the complexities of policy formulation with the aim to contribute both to understanding and improving the policy process overall, and to enhancing public policy effectiveness and societal well-being in specific policy arenas.

References

  1. Baron, Jon (1 July 2018). "A Brief History of Evidence-Based Policy". The Annals of the American Academy of Political and Social Science. 678 (1): 40–50. doi:10.1177/0002716218763128. ISSN 0002-7162. S2CID 149924800.
  2. Head, Brian (2009). Evidence-based policy: principles and requirements. Archived 28 November 2010 at the Wayback Machine. University of Queensland. Retrieved 4 June 2010.
  3. Gade, Christian (2023). "When is it justified to claim that a practice or policy is evidence-based? Reflections on evidence and preferences". Evidence & Policy: 1–10. doi:10.1332/174426421X16905606522863. S2CID 261138726. This article incorporates text available under the CC BY 4.0 license.
  4. Head, Brian (2009). Evidence-based policy: principles and requirements. Archived 28 November 2010 at the Wayback Machine. University of Queensland. Retrieved 4 June 2010.
  5. Petticrew, M (2003). "Evidence, hierarchies, and typologies: Horses for courses". Journal of Epidemiology & Community Health. 57 (7): 527–529. doi:10.1136/jech.57.7.527. PMC 1732497. PMID 12821702.
  6. Parkhurst, Justin (2017). The Politics of Evidence: from Evidence Based Policy to the Good Governance of Evidence (PDF). London: Routledge. doi:10.4324/9781315675008. ISBN 978-1138939400.
  7. Guyatt, G. H. (1 December 1993). "Users' guides to the medical literature. II. How to use an article about therapy or prevention. A. Are the results of the study valid? Evidence-Based Medicine Working Group". JAMA: The Journal of the American Medical Association. 270 (21): 2598–2601. doi:10.1001/jama.270.21.2598. ISSN 0098-7484. PMID 8230645.
  8. Hammersley, M. (2013). The Myth of Research-Based Policy and Practice. London: Sage.
  9. Banks, Gary (2009). Evidence-based policy making: What is it? How do we get it? Archived 23 January 2010 at the Wayback Machine. Australian Government, Productivity Commission. Retrieved 4 June 2010.
  10. Guyatt, Gordon H. (1 March 1991). "Evidence-based medicine". ACP Journal Club. 114 (2): A16. doi:10.7326/ACPJC-1991-114-2-A16. ISSN 1056-8751. S2CID 78930206.
  11. Payne-Palacio, June R.; Canter, Deborah D. (2016). The Profession of Dietetics: A Team Approach. Jones & Bartlett Learning. ISBN 978-1284126358.
  12. Marston & Watts. Tampering with the Evidence: A Critical Appraisal of Evidence-Based Policy-Making. Archived 23 March 2012 at the Wayback Machine. RMIT University. Retrieved 10 September 2014.
  13. The Cochrane Collaboration. Retrieved 10 September 2014.
  14. Claridge, Jeffrey A.; Fabian, Timothy C. (1 May 2005). "History and Development of Evidence-based Medicine". World Journal of Surgery. 29 (5): 547–553. doi:10.1007/s00268-005-7910-1. ISSN 1432-2323. PMID 15827845. S2CID 21457159.
  15. "Evidence-based policy making". Department for Environment, Food and Rural Affairs. 21 September 2006. Archived from the original on 14 January 2011. Retrieved 6 March 2010.
  16. The Campbell Collaboration. Retrieved 10 September 2014.
  17. Thompson, Derek (15 June 2015). "The Greatest Good". The Atlantic. Archived from the original on 20 August 2019. Retrieved 6 March 2017.
  18. Dick, Andrew J.; Rich, William; Waters, Tony (2016). Prison Vocational Education and Policy in the United States. New York: Palgrave Macmillan. pp. 11–40, 281–306.
  19. Brownson, Ross C.; Chriqui, Jamie F.; Stamatakis, Katherine A. (2009). "Understanding Evidence-Based Public Health Policy". American Journal of Public Health. 99 (9): 1576–1583. doi:10.2105/AJPH.2008.156224. ISSN 0090-0036. PMC 2724448. PMID 19608941.
  20. Court, Julius; Sutcliffe, Sophie (November 2005). "Evidence-Based Policymaking: What is it? How does it work? What relevance for developing countries?" (PDF). Overseas Development Institute.
  21. Hoffman, Steven J.; Baral, Prativa; Rogers Van Katwyk, Susan; Sritharan, Lathika; Hughsam, Matthew; Randhawa, Harkanwal; Lin, Gigi; Campbell, Sophie; Campus, Brooke; Dantas, Maria; Foroughian, Neda; Groux, Gaëlle; Gunn, Elliot; Guyatt, Gordon; Habibi, Roojin; Karabit, Mina; Karir, Aneesh; Kruja, Krista; Lavis, John N.; Lee, Olivia; Li, Binxi; Nagi, Ranjana; Naicker, Kiyuri; Røttingen, John-Arne; Sahar, Nicola; Srivastava, Archita; Tejpar, Ali; Tran, Maxwell; Zhang, Yu-qing; Zhou, Qi; Poirier, Mathieu J. P. (9 August 2022). "International treaties have mostly failed to produce their intended effects". Proceedings of the National Academy of Sciences. 119 (32): e2122854119. Bibcode:2022PNAS..11922854H. doi:10.1073/pnas.2122854119. ISSN 0027-8424. PMC 9372541. PMID 35914153.
  22. "AR6 Synthesis Report: Climate Change 2023 — IPCC". Retrieved 18 April 2023.
  23. Weidner, Till; Guillén-Gosálbez, Gonzalo (15 February 2023). "Planetary boundaries assessment of deep decarbonisation options for building heating in the European Union". Energy Conversion and Management. 278: 116602. doi:10.1016/j.enconman.2022.116602. hdl:20.500.11850/599236. ISSN 0196-8904.
  24. "GermanZero - Creating a better climate". germanzero.de. Retrieved 17 May 2023.
  25. Capstick, Stuart; Thierry, Aaron; Cox, Emily; Berglund, Oscar; Westlake, Steve; Steinberger, Julia K. (September 2022). "Civil disobedience by scientists helps press for urgent climate action". Nature Climate Change. 12 (9): 773–774. Bibcode:2022NatCC..12..773C. doi:10.1038/s41558-022-01461-y. ISSN 1758-6798. S2CID 251912378.
  26. Ripple, William J; Wolf, Christopher; Newsome, Thomas M; Barnard, Phoebe; Moomaw, William R (5 November 2019). "World Scientists' Warning of a Climate Emergency". BioScience. doi:10.1093/biosci/biz088. hdl:2445/151800.
  27. Bik, Holly M.; Goldstein, Miriam C. (23 April 2013). "An Introduction to Social Media for Scientists". PLOS Biology. 11 (4): e1001535. doi:10.1371/journal.pbio.1001535. ISSN 1545-7885. PMC 3635859. PMID 23630451.
  28. "Policy Entrepreneurs: Their Activity Structure and Function in the Policy Process". Journal of Public Administration Research and Theory. 1991. doi:10.1093/oxfordjournals.jpart.a037081. hdl:10945/53405.
  29. Syed, Shamsuzzoha B; Hyder, Adnan A; Bloom, Gerald; Sundaram, Sandhya; Bhuiya, Abbas; Zhenzhong, Zhang; Kanjilal, Barun; Oladepo, Oladimeji; Pariyo, George; Peters, David H (2008). "Exploring evidence-policy linkages in health research plans: A case study from six countries". Health Research Policy and Systems. 6: 4. doi:10.1186/1478-4505-6-4. PMC 2329631. PMID 18331651.
  30. Hyder, A; et al. (14 June 2010). "National Policy-Makers Speak Out: Are Researchers Giving Them What They Need?". Health Policy and Planning. Archived from the original on 1 June 2019. Retrieved 26 May 2012.
  31. Hyder, A; Syed, S; Puvanachandra, P; Bloom, G; Sundaram, S; Mahmood, S; Iqbal, M; Hongwen, Z; Ravichandran, N; Oladepo, O; Pariyo, G; Peters, D (2010). "Stakeholder analysis for health research: Case studies from low- and middle-income countries". Public Health. 124 (3): 159–166. doi:10.1016/j.puhe.2009.12.006. PMID 20227095.
  32. "Evidence-Based Policymaking" (PDF). The Pew Charitable Trusts. 2014.
  33. "About The Pew Charitable Trusts". pew.org. Retrieved 8 December 2021.
  34. The Coalition for Evidence-Based Policy. Retrieved 18 September 2014.
  35. "Coalition for Evidence-Based Policy". Retrieved 8 December 2021.
  36. "Coalition for Evidence-Based Policy".
  37. "Identifying and Implementing Educational Practices Supported By Rigorous Evidence: A User-Friendly Guide, 2003" (PDF).
  38. Boardman, Anthony E. "Cost-Benefit Analysis: Concepts and Practice". Cambridge Core. Retrieved 14 May 2023.
  39. "Executive Order 12291". Wikisource. Retrieved 14 May 2023.
  40. "Executive Orders Disposition Tables Clinton - 1993". National Archives. 15 August 2016. Retrieved 14 May 2023.
  41. "Executive Order 13563 - Improving Regulation and Regulatory Review". Obama White House Archives. 18 January 2011. Retrieved 14 May 2023.
  42. Cairney, Paul (2016). The Politics of Evidence-Based Policy Making. New York. ISBN 978-1137517814. OCLC 946724638.
  43. Cartwright, Nancy; Hardie, Jeremy (2012). Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford University Press. ISBN 978-0199986705.
  44. Woodward, James (2005). Making Things Happen: A Theory of Causal Explanation. Oxford University Press. ISBN 978-0198035336.
  45. Maziarz, Mariusz (2020). The Philosophy of Causality in Economics: Causal Inferences and Policy Proposals. London & New York: Routledge.
  46. "Government by numbers: how data is damaging our public services". Apolitical. Retrieved 10 April 2018.
  47. Muller, Jerry Z. (2017). The Tyranny of Metrics. Princeton. ISBN 978-0691174952. OCLC 1005121833.

Further reading