Preregistration (science)

Preregistration is the practice of registering the hypotheses, methods, or analyses of a scientific study before it is conducted. [1] [2] Clinical trial registration is similar, although it may not require the registration of a study's analysis protocol. Finally, registered reports include the peer review and in principle acceptance of a study protocol prior to data collection. [3]

Preregistration can have a number of different goals, [4] including (a) facilitating and documenting research plans, (b) identifying and reducing questionable research practices and researcher biases, [5] (c) distinguishing between confirmatory and exploratory analyses, [6] (d) transparently evaluating the severity of hypothesis tests, [7] and, in the case of Registered Reports, (e) facilitating results-blind peer review, and (f) reducing publication bias. [8]

A number of research practices such as p-hacking, publication bias, data dredging, inappropriate forms of post hoc analysis, and HARKing may increase the probability of incorrect claims. Although the idea of preregistration is old, [9] the practice of preregistering studies has gained prominence as a way to mitigate some of the issues that are thought to underlie the replication crisis. [1]

Types

Standard preregistration

In the standard preregistration format, researchers prepare a research protocol document before conducting their research. Ideally, this document specifies the research hypotheses, sampling procedure, sample size, research design, testing conditions, stimuli, measures, data coding and aggregation method, criteria for data exclusions, and statistical analyses, including potential variations on those analyses. The preregistration document is then posted on a publicly available website such as the Open Science Framework or AsPredicted. The preregistered study is then conducted, and a report of the study and its results is submitted for publication together with access to the preregistration document. This approach allows peer reviewers and subsequent readers to cross-reference the preregistration document with the published research article in order to identify any undisclosed deviations from the preregistration. Deviations from the preregistration are possible and common in practice, but they should be transparently reported, and their consequences for the severity of the test should be evaluated. [10]
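To make the ingredients of such a protocol concrete, the sketch below represents the typical fields as a simple data structure. All field names and values are invented for this illustration and do not correspond to any particular registry's template.

```python
# A minimal sketch of the fields a standard preregistration typically records
# before data collection. Field names and values are illustrative only; actual
# registries such as the Open Science Framework use their own templates.
preregistration = {
    "hypotheses": "H1: recall is higher in condition A than in condition B.",
    "sampling_procedure": "Online convenience sample from the lab participant pool.",
    "sample_size": 120,  # justified by an a priori power analysis
    "design": "Between-subjects, two conditions, random assignment.",
    "measures": ["recall_score", "response_time_ms"],
    "exclusion_criteria": "Exclude participants who fail the attention check.",
    "analysis_plan": "Two-sample t-test on recall_score, two-sided, alpha = .05.",
    "planned_variations": "Use Welch's correction if variances are unequal.",
}
```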

Registered reports

The registered report format requires authors to submit a description of the study methods and analyses prior to data collection. [11] [12] Once the theoretical introduction, method, and analysis plan have been peer reviewed (Stage 1 peer review), publication of the findings is provisionally guaranteed (in-principle acceptance). The proposed study is then performed, and the research report is submitted for Stage 2 peer review. Stage 2 peer review confirms that the actual research methods are consistent with the preregistered protocol, that quality thresholds are met (e.g., manipulation checks confirm the validity of the experimental manipulation), and that the conclusions follow from the data. Because studies are accepted for publication regardless of whether the results are statistically significant, Registered Reports prevent publication bias. Meta-scientific research has shown that the percentage of non-significant results in Registered Reports is substantially higher than in standard publications. [13] [14]

Specialised preregistration

Preregistration can be used in relation to a variety of different research designs and methods, including quantitative research in psychology, [15] qualitative research, [16] analyses of preexisting data, [17] [18] secondary data analysis, [19] single-case design research, [20] event-related potential (ERP) research, [21] studies using experience sampling methods, [22] exploratory research, [23] and animal research. [24]

Clinical trial registration

Clinical trial registration is the practice of documenting clinical trials before they are performed in a clinical trials registry so as to combat publication bias and selective reporting. [25] Registration of clinical trials is required in some countries and is increasingly being standardized. [26] Some top medical journals will only publish the results of trials that have been pre-registered. [27]

A clinical trials registry is a platform that catalogs registered clinical trials. ClinicalTrials.gov, run by the United States National Library of Medicine (NLM), was the first online registry for clinical trials, and it remains the largest and most widely used. In addition to combating bias, clinical trial registries serve to increase transparency and public access to clinical trials. Clinical trials registries are often searchable (e.g., by disease/indication, drug, location, etc.). Trials are registered by the pharmaceutical, biotech, or medical device company sponsoring the study, by the hospital or foundation that is sponsoring it, or by another organization, such as a contract research organization (CRO), that is running the study.
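Because registries expose their catalogues programmatically as well as through web search, a list of studies for a given condition can be retrieved with a short script. The sketch below assumes the ClinicalTrials.gov v2 REST API; the endpoint path, the query.cond and pageSize parameters, and the response field names are assumptions that should be checked against the registry's current API documentation.

```python
# A hedged sketch of searching a clinical trials registry by condition.
# Endpoint, parameter names, and response fields are assumptions based on the
# ClinicalTrials.gov v2 API; verify against the official documentation.
import requests

resp = requests.get(
    "https://clinicaltrials.gov/api/v2/studies",
    params={"query.cond": "breast cancer", "pageSize": 5},
    timeout=30,
)
resp.raise_for_status()

# Print registry identifier and title for each returned study.
for study in resp.json().get("studies", []):
    ident = study.get("protocolSection", {}).get("identificationModule", {})
    print(ident.get("nctId"), "-", ident.get("briefTitle"))
```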

There has been a push from governments and international organizations, especially since 2005, to make clinical trial information more widely available and to standardize registries and processes of registering. The World Health Organization is working toward "achieving consensus on both the minimal and the optimal operating standards for trial registration". [28]

Creation and development

For many years, scientists and others have worried about reporting biases: negative or null results from initiated clinical trials may be less likely to be published than positive results, skewing the literature and our understanding of how well interventions work. [29] This worry is international and has been written about for over 50 years. [30] One proposal to address this potential bias was a comprehensive register of initiated clinical trials that would inform the public which trials had been started. [31] The ethical issues seemed to interest the public most: trialists (including those with potential commercial gain) benefited from those who enrolled in trials but were not required to “give back” by telling the public what they had learned.

Systematic reviewers, who summarize what is known from clinical trials, were particularly concerned by this double standard. If the literature is skewed, then the results of a systematic review are also likely to be skewed, possibly favoring the test intervention even though the accumulated data, were they all made public, would not support this conclusion.

ClinicalTrials.gov was originally developed largely as a result of breast cancer consumer lobbying, which led to authorizing language in the FDA Modernization Act of 1997 (Food and Drug Administration Modernization Act of 1997. Pub L No. 105-115, §113 Stat 2296), but the law provided neither funding nor a mechanism of enforcement. In addition, the law required that ClinicalTrials.gov only include trials of serious and life-threatening diseases.

Then, two events occurred in 2004 that increased public awareness of the problems of reporting bias. First, the then-New York State Attorney General Eliot Spitzer sued GlaxoSmithKline (GSK) because the company had failed to reveal results from trials showing that certain antidepressants might be harmful. [32]

Second, shortly thereafter, the International Committee of Medical Journal Editors (ICMJE) announced that their journals would not publish reports of trials unless they had been registered. The ICMJE action was probably the most important motivator for trial registration, as investigators wanted to preserve the possibility of publishing their results in prestigious journals.

In 2007, the Food and Drug Administration Amendments Act of 2007 (FDAAA; Public Law 110-85) clarified the requirements for registration and also set penalties for non-compliance.

International participation

The International Committee of Medical Journal Editors (ICMJE) decided that, from July 1, 2005, no trial would be considered for publication unless it was included in a clinical trials registry. [33] [34] The World Health Organization has supported the push for clinical trial registration with the initiation of the International Clinical Trials Registry Platform. There has also been action from the pharmaceutical industry, which has released plans to make clinical trial data more transparent and publicly available. The revised Declaration of Helsinki, released in October 2008, states that "Every clinical trial must be registered in a publicly accessible database before recruitment of the first subject." [35] [36]

The World Health Organization maintains an international registry portal at http://apps.who.int/trialsearch/. [37] WHO states that the international registry's mission is "to ensure that a complete view of research is accessible to all those involved in health care decision making. This will improve research transparency and will ultimately strengthen the validity and value of the scientific evidence base." [38]

Since 2007, the International Committee of Medical Journal Editors (ICMJE) has accepted all primary registries in the WHO network in addition to ClinicalTrials.gov. Registration in registries other than ClinicalTrials.gov has increased across study designs since 2014. [39]

Reporting compliance

Various studies have measured the extent to which registered trials comply with the reporting standards of their registry. [40] [41] [42] [43] [44]

Overview of clinical trial registries

Worldwide, there is a growing number of registries. A 2013 study [45] identified the following top five registries (number of registered trials as of August 2013):

1. ClinicalTrials.gov: 150,551
2. EU register: 21,060
3. Japan registries network (JPRN): 12,728
4. ISRCTN: 11,794
5. Australia and New Zealand (ANZCTR): 8,216

Overview of preclinical study registries

Similar to clinical research, preregistration can help to improve the transparency and quality of research data in preclinical research. [46] [47] In contrast to clinical research, where preregistration is mandatory in large part, it is still new in preclinical research. A large share of preclinical and basic biomedical research relies on animal experiments. The non-publication of results gained from animal experiments not only distorts the state of research by reinforcing publication bias, it also represents an ethical issue. [48] [49] Preregistration is discussed as a measure that could counteract this problem. The following registries are suited for the preregistration of preclinical studies:

1. Animalstudyregistry.org
2. AsPredicted
3. OSF Registry
4. Preclinicaltrials.eu

Journal support

Over 200 journals offer a registered reports option (Centre for Open Science, 2019), [50] and the number of journals that are adopting registered reports is approximately doubling each year (Chambers et al., 2019). [51]

Psychological Science has encouraged the preregistration of studies and the reporting of effect sizes and confidence intervals. [52] The editor-in-chief also noted that the editorial staff would ask for replication of studies reporting surprising findings based on small sample sizes before allowing the manuscripts to be published.

Nature Human Behaviour has adopted the registered report format, as it “shift[s] the emphasis from the results of research to the questions that guide the research and the methods used to answer them”. [53]

European Journal of Personality defines this format: “In a registered report, authors create a study proposal that includes theoretical and empirical background, research questions/hypotheses, and pilot data (if available). Upon submission, this proposal will then be reviewed prior to data collection, and if accepted, the paper resulting from this peer-reviewed procedure will be published, regardless of the study outcomes.” [54]

Only a very small proportion of academic journals in psychology and neuroscience explicitly state that they welcome submissions of replication studies in their aims and scope or instructions to authors. [55] [56] This does not encourage the reporting of, or even attempts at, replication studies.

Overall, the number of participating journals is increasing, as indicated by the Center for Open Science, which maintains a list of journals encouraging the submission of registered reports. [57]

Benefits

Several articles have outlined the rationale for preregistration (e.g., Lakens, 2019; Nosek et al., 2018; Wagenmakers et al., 2012). [6] [58] [1] The primary goal of preregistration is to improve the transparency of reported hypothesis tests, which allows readers to evaluate the extent to which decisions during the data analysis were pre-planned (maintaining statistical error control) or data-driven (increasing the Type 1 or Type 2 error rate).
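As a concrete illustration of that error-control argument, the following simulation (not from any of the cited papers; the setup and numbers are invented) compares a single pre-planned test with an undisclosed best-of-four choice among outcome measures when the null hypothesis is true:

```python
# A minimal sketch of why undisclosed, data-driven flexibility inflates the
# Type 1 error rate that a preregistered analysis plan is meant to preserve.
# Assumes numpy and scipy; all names and settings are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n, alpha = 5000, 30, 0.05
planned_hits = 0    # single preregistered outcome measure
flexible_hits = 0   # best of several outcomes, chosen after seeing the data

for _ in range(n_sims):
    # True null: treatment and control come from the same distribution.
    outcomes_t = rng.normal(size=(4, n))   # four candidate outcome measures
    outcomes_c = rng.normal(size=(4, n))
    pvals = [stats.ttest_ind(t, c).pvalue for t, c in zip(outcomes_t, outcomes_c)]
    planned_hits += pvals[0] < alpha       # analysis fixed in advance
    flexible_hits += min(pvals) < alpha    # undisclosed multiple testing

print(f"False-positive rate, preregistered single test: {planned_hits / n_sims:.3f}")
print(f"False-positive rate, best-of-four outcomes:     {flexible_hits / n_sims:.3f}")
```

The pre-planned test stays near the nominal 5% false-positive rate, while the undisclosed best-of-four choice roughly triples it.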

Meta-scientific research has revealed additional benefits. Researchers indicate that preregistering a study leads to more carefully thought through research hypotheses, experimental designs, and statistical analyses. [59] [60] Preregistration has also been shown to encourage better learning of open science concepts; students who preregistered their dissertations felt that they understood them better and reported that preregistration improved the clarity of their writing, promoted rigour, and made them more likely to avoid questionable research practices. [61] [62] In addition, preregistration is a tool that supervisors can use to help students avoid questionable research practices. [63]

A 2024 study in the Journal of Political Economy: Microeconomics of preregistration in economics journals found that preregistration did not reduce p-hacking and publication bias unless it was accompanied by a preanalysis plan. [64]

Criticisms

Proponents of preregistration have argued that it is "a method to increase the credibility of published results" (Nosek & Lakens, 2014), that it "makes your science better by increasing the credibility of your results" (Centre for Open Science), and that it "improves the interpretability and credibility of research findings" (Nosek et al., 2018, p. 2605). [1] [65] This argument assumes that non-preregistered exploratory analyses are less "credible" and/or "interpretable" than preregistered confirmatory analyses because they may involve "circular reasoning" in which post hoc hypotheses are based on the observed data (Nosek et al., 2018, p. 2600). [1] However, critics have argued that preregistration is not necessary to identify circular reasoning during exploratory analyses (Rubin, 2020). Circular reasoning can be identified by analysing the reasoning per se without needing to know whether that reasoning was preregistered. Critics have also noted that the idea that preregistration improves research credibility may deter researchers from undertaking non-preregistered exploratory analyses (Coffman & Niederle, 2015; see also Collins et al., 2021, Study 1). [66] [67]

In response, preregistration advocates have stressed that exploratory analyses are permitted in preregistered studies, and that the results of these analyses retain some value vis-a-vis hypothesis generation rather than hypothesis testing. Preregistration merely makes the distinction between confirmatory and exploratory research clearer (Nosek et al., 2018; Nosek & Lakens, 2014; Wagenmakers et al., 2012). [1] [6] [65] Hence, although preregistration is supposed to reduce researcher degrees of freedom during the data analysis stage, it is also supposed to be “a plan, not a prison” (Dehaven, 2017). [68]

However, critics counterargue that, if preregistration is only supposed to be a plan, and not a prison, then researchers should feel free to deviate from that plan and undertake exploratory analyses without fearing accusations of low research credibility due to circular reasoning and inappropriate research practices such as p-hacking and unreported multiple testing that leads to inflated familywise error rates (e.g., Navarro, 2020). [69] Again, they have pointed out that preregistration is not necessary to address such concerns. For example, concerns about p-hacking and unreported multiple testing can be addressed if researchers engage in other open science practices, such as (a) open data and research materials and (b) robustness or multiverse analyses (Rubin, 2020; Steegen et al., 2016; for several other approaches, see Srivastava, 2018). [70] [71] [72] Finally, and more fundamentally, critics have argued that the distinction between confirmatory and exploratory analyses is unclear and/or irrelevant (Devezer et al., 2020; Rubin, 2020; Szollosi & Donkin, 2019), [73] [70] [74] and that concerns about inflated familywise error rates are unjustified when those error rates refer to abstract, atheoretical studywise hypotheses that are not being tested (Rubin, 2020, 2021; Szollosi et al., 2020). [70] [75] [76]
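To make the multiverse/robustness idea mentioned above concrete, the sketch below (illustrative only, not the cited authors' own procedure; the data are simulated) runs the same group comparison under every combination of a few defensible analysis choices and reports all of the resulting p values rather than a single one:

```python
# A minimal multiverse-style sketch: apply the same hypothesis test under each
# reasonable combination of analysis choices and report the full set of results.
from itertools import product

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treatment = rng.normal(0.3, 1.0, 80)   # simulated data, illustrative effect
control = rng.normal(0.0, 1.0, 80)

exclusion_rules = {
    "none": lambda x: x,
    "trim_outliers": lambda x: x[np.abs(x - x.mean()) < 2 * x.std()],
}
test_variants = {"student": {"equal_var": True}, "welch": {"equal_var": False}}

for (ex_name, ex_fn), (t_name, kwargs) in product(exclusion_rules.items(), test_variants.items()):
    res = stats.ttest_ind(ex_fn(treatment), ex_fn(control), **kwargs)
    print(f"exclusion={ex_name:13s} test={t_name:7s} p={res.pvalue:.3f}")
```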

There are also concerns about the practical implementation of preregistration. Many preregistered protocols leave plenty of room for p-hacking (Bakker et al., 2020; Heirene et al., 2021; Ikeda et al., 2019; Singh et al., 2021; Van den Akker et al., 2023), [77] [78] [79] [80] [81] and researchers rarely follow the exact research methods and analyses that they preregister (Abrams et al., 2020; Claesen et al., 2019; Heirene et al., 2021; see also Boghdadly et al., 2018; Singh et al., 2021; Sun et al., 2019). [82] [83] [84] [85] [79] [80] For example, preregistered studies appear to be of higher quality than non-preregistered studies only insofar as they more often include a power analysis and a larger sample size; beyond that, preregistration does not seem to prevent p-hacking and HARKing, as both the proportion of positive results and effect sizes are similar between preregistered and non-preregistered studies (Van den Akker et al., 2023). [81] In addition, a survey of 27 preregistered studies found that researchers deviated from their preregistered plans in all cases (Claesen et al., 2019). [83] The most frequent deviations concerned the planned sample size, exclusion criteria, and statistical model. Hence, what were intended as preregistered confirmatory tests ended up as unplanned exploratory tests. Again, preregistration advocates argue that deviations from preregistered plans are acceptable as long as they are reported transparently and justified. They also point out that even vague preregistrations help to reduce researcher degrees of freedom and make any residual flexibility transparent (Simmons et al., 2021, p. 180). [86] However, critics have argued that it is not useful to identify or justify deviations from preregistered plans when those plans do not reflect high quality theory and research practice. As Rubin (2020) explained, "we should be more interested in the rationale for the current method and analyses than in the rationale for historical changes that have led up to the current method and analyses" (pp. 378–379). [70]

In addition, preregistering a study requires careful deliberation about the study's hypotheses, research design, and statistical analyses, which depends on the use of preregistration templates that provide detailed guidance on what to include and why (Bowman et al., 2016; Haven & Van Grootel, 2019; Van den Akker et al., 2021). [87] [88] [89] Many preregistration templates stress the importance of a power analysis, but not the importance of justifying why a particular methodology was used.

Beyond the concerns raised about its practical implementation in quantitative research, critics have also argued that preregistration is less applicable, or even unsuitable, for qualitative research. [90] Preregistration imposes rigidity, limiting researchers' ability to adapt to emerging data and evolving contexts, which are essential to capturing the richness of participants' lived experiences (Souza-Neto & Moyle, 2025). [91] It also conflicts with the inductive and flexible nature of theory-building in qualitative research, constraining the exploratory approach that is central to this methodology (Souza-Neto & Moyle, 2025). [91]
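Regarding the a priori power analysis that preregistration templates typically request, a minimal sketch is shown below. It assumes the statsmodels package; the effect size, alpha, and power targets are illustrative choices, not recommendations.

```python
# A hedged sketch of an a priori power analysis for a two-sample t-test:
# the required per-group sample size for a medium effect at 80% power.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64 per group
```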

Finally, some commentators have argued that, under some circumstances, preregistration may actually harm science by providing a false sense of credibility to research studies and analyses (Devezer et al., 2020; McPhetres, 2020; Pham & Oh, 2020; Szollosi et al., 2020). [73] [92] [75] [93] Consistent with this view, there is some evidence that researchers view registered reports as being more credible than standard reports on a range of dimensions (Soderberg et al., 2020; see also Field et al., 2020 for inconclusive evidence), [94] [95] although it is unclear whether this represents a "false" sense of credibility due to pre-existing positive community attitudes about preregistration or a genuine causal effect of registered reports on quality of research.


References

  1. 1 2 3 4 5 6 Nosek, B. A.; Ebersole, C. R.; DeHaven, A. C.; Mellor, D. T. (2018). "The preregistration revolution". Proceedings of the National Academy of Sciences. 115 (11): 2600–2606. Bibcode:2018PNAS..115.2600N. doi: 10.1073/pnas.1708274114 . PMC   5856500 . PMID   29531091. S2CID   4639380.
  2. Parsons, Sam; Azevedo, Flávio; Elsherif, Mahmoud M.; Guay, Samuel; Shahim, Owen N.; Govaart, Gisela H.; Norris, Emma; O’Mahony, Aoife; Parker, Adam J.; Todorovic, Ana; Pennington, Charlotte R. (2022-02-21). "A community-sourced glossary of open scholarship terms". Nature Human Behaviour. 6 (3): 312–318. doi:10.1038/s41562-021-01269-4. hdl: 2292/62865 . ISSN   2397-3374. PMID   35190714. S2CID   247025114.
  3. "Registered Replication Reports". Association for Psychological Science. Retrieved 2015-11-13.
  4. "Preregistration". www.apa.org. Retrieved 2024-10-19.
  5. Hardwicke, Tom E.; Wagenmakers, Eric-Jan (January 2023). "Reducing bias, increasing transparency and calibrating confidence with preregistration". Nature Human Behaviour. 7 (1): 15–26. doi:10.1038/s41562-022-01497-2. ISSN   2397-3374.
  6. 1 2 3 Wagenmakers, E. J.; Wetzels, R.; Borsboom, D.; van der Maas, H. L.; Kievit, R. A. (2012). "An agenda for purely confirmatory research". Perspectives on Psychological Science. 7 (6): 632–638. doi:10.1177/1745691612463078. PMID   26168122. S2CID   5096417.
  7. Lakens, Daniël (2019). "The value of preregistration for psychological science: A conceptual analysis". Japanese Psychological Review. 62 (3): 221–230. doi:10.24602/sjpr.62.3_221.
  8. Chambers, Christopher D.; Tzavella, Loukia (January 2022). "The past, present and future of Registered Reports". Nature Human Behaviour. 6 (1): 29–42. doi:10.1038/s41562-021-01193-7. ISSN   2397-3374.
  9. Bakan, David (1966). "The test of significance in psychological research". Psychological Bulletin. 66 (6): 423–437. doi:10.1037/h0020412. PMID   5974619.
  10. Lakens, Daniël (14 May 2024). "When and How to Deviate From a Preregistration". Collabra: Psychology. 10 (1). doi:10.1525/collabra.117094.
  11. Chambers, Christopher D. (March 2013). "Registered Reports: A new publishing initiative at Cortex" (PDF). Cortex. 49 (3): 609–610. doi:10.1016/j.cortex.2012.12.016.
  12. Nosek, Brian A.; Lakens, Daniël (1 May 2014). "Registered Reports: A Method to Increase the Credibility of Published Results". Social Psychology. 45 (3): 137–141. doi:10.1027/1864-9335/a000192.
  13. Scheel, Anne M.; Schijen, Mitchell R. M. J.; Lakens, Daniël (April 2021). "An Excess of Positive Results: Comparing the Standard Psychology Literature With Registered Reports". Advances in Methods and Practices in Psychological Science. 4 (2): 251524592110074. doi:10.1177/25152459211007467.
  14. Allen, Christopher; Mehler, David M. A. (1 May 2019). "Open science challenges, benefits and tips in early career and beyond". PLOS Biology. 17 (5): e3000246. doi: 10.1371/journal.pbio.3000246 . PMC   6513108 . PMID   31042704.
  15. Bosnjak, M.; Fiebach, C. J.; Mellor, D.; Mueller, S.; O’Connor, D. B.; Oswald, F. L.; Sokol-Chang, R. I. (2021). "A template for preregistration of quantitative research in psychology: Report of the Joint Psychological Societies Preregistration Task Force". The American Psychologist. 77 (4): 602–615. doi:10.31234/osf.io/d7m5r. PMID 34807636. S2CID 236655778.
  16. Haven, T. L.; Van Grootel, D. L. (2019). "Preregistering qualitative research". Accountability in Research. 26 (3): 229–244. doi: 10.1080/08989621.2019.1580147 . PMID   30741570.
  17. Mertens, G.; Krypotos, A. M. (2019). "Preregistration of analyses of preexisting data". Psychologica Belgica. 59 (1): 338–352. doi: 10.5334/pb.493 . PMC   6706998 . PMID   31497308. S2CID   201844047.
  18. Weston, S. J.; Ritchie, S. J.; Rohrer, J. M. (2019). "Recommendations for increasing the transparency of analysis of preexisting data sets". Advances in Methods and Practices in Psychological Science. 2 (3): 214–227. doi: 10.1177/2515245919848684 . PMC   7079740 . PMID   32190814.
  19. Akker, Olmo R. van den; Weston, Sara; Campbell, Lorne; Chopik, Bill; Damian, Rodica; Davis-Kean, Pamela; Hall, Andrew; Kosie, Jessica; Kruse, Elliott; Olsen, Jerome; Ritchie, Stuart (2021-11-09). "Preregistration of secondary data analysis: A template and tutorial". Meta-Psychology. 5. doi: 10.15626/MP.2020.2625 . ISSN   2003-2714.
  20. Johnson, A. H.; Cook, B. G. (2019). "Preregistration in single-case design research". Exceptional Children. 86 (1): 95–112. doi: 10.1177/0014402919868529 . S2CID   204363608.
  21. Paul, M.; Govaart, G. H.; Schettino, A. (2021). "Making ERP research more transparent: Guidelines for preregistration". International Journal of Psychophysiology. 164: 52–63. doi: 10.31234/osf.io/4tgve . hdl: 21.11116/0000-0008-2B30-2 . PMID   33676957.
  22. Kirtley, O. J.; Lafit, G.; Achterhof, R.; Hiekkaranta, A. P.; Myin-Germeys, I. (2019). "Making the black box transparent: A template and tutorial for (pre-)registration of studies using experience sampling methods (ESM)". PsyArXiv. doi: 10.31234/osf.io/seyq7 . S2CID   236657420.
  23. Dirnagl, U. (2020). "Preregistration of exploratory research: Learning from the golden age of discovery". PLOS Biol. 18 (3): e3000690. doi: 10.1371/journal.pbio.3000690 . PMC   7098547 . PMID   32214315.
  24. Bert, Bettina; Heinl, Céline; Chmielewska, Justyna; Schwarz, Franziska; Grune, Barbara; Hensel, Andreas; Greiner, Matthias; Schönfelder, Gilbert (2019-10-15). "Refining animal research: The Animal Study Registry". PLOS Biology. 17 (10): e3000463. doi: 10.1371/journal.pbio.3000463 . ISSN   1545-7885. PMC   6793840 . PMID   31613875.
  25. "International Clinical Trials Registry Platform (ICTRP)". Who.int. Archived from the original on July 19, 2013. Retrieved 2017-06-23.
  26. "WHO | Working Group on Best Practice for Clinical Trials Registers (BPG)". Who.int. Archived from the original on October 12, 2008. Retrieved 2017-06-23.
  27. Barrett, Stephen (13 September 2004). "Major Journals Press for Clinical Trial Registration". www.quackwatch.org. Retrieved 22 May 2019.
  28. "WHO - Working Group on Best Practice for Clinical Trials Registers (BPG)". www.who.int. Archived from the original on September 17, 2008.
  29. Dickersin, K; Rennie, D (2009). "Registering clinical trials". JAMA. 290 (4): 516–523. doi:10.1001/jama.290.4.516. PMID   12876095. S2CID   10184671.
  30. Sterling, TD (1959). "Publication decisions and their possible effects on inferences drawn from tests of significances – or vice versa". J Am Stat Assoc. 54 (285): 30–34. doi:10.1080/01621459.1959.10501497. JSTOR   2282137.
  31. International Collaborative Group on Clinical Trial Registries (1993). "Position paper and consensus recommendations on clinical trial registries. Ad Hoc Working Party of the International Collaborative Group on Clinical Trials Registries". Clin Trials Metaanal. 28 (4–5): 255–266. PMID   10146333.
  32. Dickersin, K; Rennie, D (2012). "The evolution of trial registries and their use to assess the clinical trial enterprise". JAMA. 307 (17): 1861–4. doi:10.1001/jama.2012.4230. PMID   22550202.
  33. "SANCTR > Home". www.sanctr.gov.za.
  34. "ICMJE: Frequently Asked Questions about Clinical Trials Registration". Archived from the original on 2010-07-06. Retrieved 2010-07-23.
  35. "WMA Declaration of Helsinki - Ethical Principles for Medical Research Involving Human Subjects". Archived from the original on 2011-08-30. Retrieved 2010-09-02.
  36. "ANZCTR". www.anzctr.org.au.
  37. Gülmezoglu, AM; Pang, T; Horton, R; Dickersin, K (2005). "WHO facilitates international collaboration in setting standards for clinical trial registration". Lancet. 365 (9474): 1829–1831. doi:10.1016/s0140-6736(05)66589-0. PMID   15924966. S2CID   29203085.
  38. "International Clinical Trials Registry Platform (ICTRP)". World Health Organization.
  39. Banno, M; Tsujimoto, Y; Kataoka, Y (2019). "Studies registered in non-ClinicalTrials.gov accounted for an increasing proportion of protocol registrations in medical research". Journal of Clinical Epidemiology. 116: 106–113. doi:10.1016/j.jclinepi.2019.09.005. PMID   31521723. S2CID   202582999.
  40. Anderson, Monique L.; Chiswell, Karen; Peterson, Eric D.; Tasneem, Asba; Topping, James; Califf, Robert M. (12 March 2015). "Compliance with Results Reporting at ClinicalTrials.gov". New England Journal of Medicine. 372 (11): 1031–1039. doi:10.1056/NEJMsa1409364. PMC   4508873 . PMID   25760355.
  41. DeVito, Nicholas J; Bacon, Seb; Goldacre, Ben (February 2020). "Compliance with legal requirement to report clinical trial results on ClinicalTrials.gov: a cohort study". The Lancet. 395 (10221): 361–369. doi:10.1016/S0140-6736(19)33220-9. PMID   31958402. S2CID   210704225.
  42. Pullar, T; Kumar, S; Feely, M (October 1989). "Compliance in clinical trials". Annals of the Rheumatic Diseases. 48 (10): 871–5. doi:10.1136/ard.48.10.871. PMC   1003898 . PMID   2684057.
  43. Miller, Jennifer E; Korn, David; Ross, Joseph S (12 November 2015). "Clinical trial registration, reporting, publication and FDAAA compliance: a cross-sectional analysis and ranking of new drugs approved by the FDA in 2012". BMJ Open. 5 (11): e009758. doi:10.1136/bmjopen-2015-009758. PMC   4654354 . PMID   26563214.
  44. Miseta, Ed (9 January 2018). "As ClinicalTrialsgov Turns 10 Will We See Compliance Improve". www.clinicalleader.com.
  45. Huser, V.; Cimino, J. J. (2013). "Evaluating adherence to the International Committee of Medical Journal Editors' policy of mandatory, timely clinical trial registration". Journal of the American Medical Informatics Association. 20 (e1): e169–74. doi:10.1136/amiajnl-2012-001501. PMC   3715364 . PMID   23396544.
  46. Wieschowski, Susanne; Silva, Diego S.; Strech, Daniel (2016-11-10). "Animal Study Registries: Results from a Stakeholder Analysis on Potential Strengths, Weaknesses, Facilitators, and Barriers". PLOS Biology. 14 (11): e2000391. doi: 10.1371/journal.pbio.2000391 . ISSN   1545-7885. PMC   5104355 . PMID   27832101.
  47. Kimmelman, Jonathan; Anderson, James A. (June 2012). "Should preclinical studies be registered?". Nature Biotechnology. 30 (6): 488–489. doi:10.1038/nbt.2261. ISSN   1546-1696. PMC   4516408 . PMID   22678379.
  48. Wieschowski, Susanne; Biernot, Svenja; Deutsch, Susanne; Glage, Silke; Bleich, André; Tolba, René; Strech, Daniel (2019-11-26). "Publication rates in animal research. Extent and characteristics of published and non-published animal studies followed up at two German university medical centres". PLOS ONE. 14 (11): e0223758. Bibcode:2019PLoSO..1423758W. doi: 10.1371/journal.pone.0223758 . ISSN   1932-6203. PMC   6879110 . PMID   31770377.
  49. Naald, Mira van der; Wenker, Steven; Doevendans, Pieter A.; Wever, Kimberley E.; Chamuleau, Steven A. J. (2020-08-01). "Publication rate in preclinical research: a plea for preregistration". BMJ Open Science. 4 (1): e100051. doi:10.1136/bmjos-2019-100051. ISSN   2398-8703. PMC   8647586 . PMID   35047690.
  50. Centre for Open Science. "Registered Reports: Peer review before results are known to align scientific values and practices".
  51. Chambers, C. D.; Forstmann, B.; Pruszynski, J. A. (2019). "Science in flux: Registered Reports and beyond at the European Journal of Neuroscience". European Journal of Neuroscience. 49 (1): 4–5. doi: 10.1111/ejn.14319 . PMID   30584679. S2CID   58645509.
  52. Lindsay, D. Stephen (2015-11-09). "Replication in Psychological Science". Psychological Science. 26 (12): 1827–32. doi: 10.1177/0956797615616374 . ISSN   0956-7976. PMID   26553013.
  53. Mellor, D. (2017). "Promoting reproducibility with registered reports". Nature Human Behaviour. 1: 0034. doi: 10.1038/s41562-016-0034 . S2CID   28976450.
  54. "Streamlined review and registered reports soon to be official at EJP". 6 February 2018.
  55. Yeung, Andy W. K. (2017). "Do Neuroscience Journals Accept Replications? A Survey of Literature". Frontiers in Human Neuroscience. 11: 468. doi: 10.3389/fnhum.2017.00468 . ISSN   1662-5161. PMC   5611708 . PMID   28979201.
  56. Martin, G. N.; Clarke, Richard M. (2017). "Are Psychology Journals Anti-replication? A Snapshot of Editorial Practices". Frontiers in Psychology. 8: 523. doi: 10.3389/fpsyg.2017.00523 . ISSN   1664-1078. PMC   5387793 . PMID   28443044.
  57. "Registered Reports Overview". Center for Open Science. Retrieved 2018-11-28.
  58. Lakens, D. (2019). "The value of preregistration for psychological science: A conceptual analysis" (PDF). Japanese Psychological Review. 62 (3): 221–230.
  59. Toth, Allison A.; Banks, George C.; Mellor, David; O’Boyle, Ernest H.; Dickson, Ashleigh; Davis, Daniel J.; DeHaven, Alex; Bochantin, Jaime; Borns, Jared (1 August 2021). "Study Preregistration: An Evaluation of a Method for Transparent Reporting". Journal of Business and Psychology. 36 (4): 553–571. doi:10.1007/s10869-020-09695-3.
  60. Sarafoglou, Alexandra; Kovacs, Marton; Bakos, Bence; Wagenmakers, Eric-Jan; Aczel, Balazs (July 2022). "A survey on how preregistration affects the research workflow: better science but more work". Royal Society Open Science. 9 (7). Bibcode:2022RSOS....911997S. doi:10.1098/rsos.211997. PMC   9257590 . PMID   35814910.
  61. Pownall, Madeleine; Pennington, Charlotte R.; Norris, Emma; Juanchich, Marie; Smailes, David; Russell, Sophie; Gooch, Debbie; Evans, Thomas Rhys; Persson, Sofia; Mak, Matthew H. C.; Tzavella, Loukia; Monk, Rebecca; Gough, Thomas; Benwell, Christopher S. Y.; Elsherif, Mahmoud (October 2023). "Evaluating the Pedagogical Effectiveness of Study Preregistration in the Undergraduate Dissertation". Advances in Methods and Practices in Psychological Science. 6 (4). doi:10.1177/25152459231202724. ISSN   2515-2459.
  62. Pennington, Charlotte R. (2023). A student's guide to open science: using the replication crisis to reform psychology. Maidenhead: Open University Press. ISBN   978-0-335-25116-2.
  63. Krishna, Anand; Peter, Sebastian M. (2018-08-30). "Questionable research practices in student final theses – Prevalence, attitudes, and the role of the supervisor's perceived attitudes". PLOS ONE. 13 (8): e0203470. Bibcode:2018PLoSO..1303470K. doi: 10.1371/journal.pone.0203470 . ISSN   1932-6203. PMC   6117074 . PMID   30161249.
  64. Brodeur, Abel; Cook, Nikolai M.; Hartley, Jonathan S.; Heyes, Anthony (2024). "Do Preregistration and Preanalysis Plans Reduce p -Hacking and Publication Bias? Evidence from 15,992 Test Statistics and Suggestions for Improvement". Journal of Political Economy Microeconomics. 2 (3): 527–561. doi:10.1086/730455. ISSN   2832-9368.
  65. 1 2 Nosek, B. A.; Lakens, D. (2014). "Registered reports: A method to increase the credibility of published results". Social Psychology. 45 (3): 137–141. doi: 10.1027/1864-9335/a000192 .
  66. Coffman, L. C.; Niederle, M. (2015). "Pre-analysis plans have limited upside, especially where replications are feasible". Journal of Economic Perspectives. 29 (3): 81–98. doi: 10.1257/jep.29.3.81 . S2CID   18163762.
  67. Collins, H.K.; Whillans, A. V.; John, L. K (2021). "Joy and rigor in behavioral science". Organizational Behavior and Human Decision Processes. 164: 179–191. doi:10.1016/j.obhdp.2021.03.002. S2CID   234848511.
  68. Dehaven, A. "Preregistration: A plan, not a prison". Centre for Open Science. Retrieved 25 September 2020.
  69. Navarro, D. (2020). "Paths in strange spaces: A comment on preregistration". doi:10.31234/osf.io/wxn58. S2CID 236797452.
  70. 1 2 3 4 Rubin, M. (2020). "Does preregistration improve the credibility of research findings?". The Quantitative Methods for Psychology. 16 (4): 376–390. arXiv: 2010.10513 . doi: 10.20982/tqmp.16.4.p376 . S2CID   221821323.
  71. Steegen, S.; Tuerlinckx, F.; Gelman, A.; Vanpaemel, W. (2016). "Increasing transparency through a multiverse analysis". Perspectives on Psychological Science. 11 (5): 702–712. doi: 10.1177/1745691616658637 . PMID   27694465.
  72. Srivastava, S. (2018). "Sound inference in complicated research: A multi-strategy approach". PsyArXiv. doi:10.31234/osf.io/bwr48. S2CID   86539993.
  73. 1 2 Devezer, B.; Navarro, D. J.; Vandekerckhove, J.; Buzbas, E. O. (2020). "The case for formal methodology in scientific reform" (PDF). bioRxiv: 2020.04.26.048306. doi: 10.1101/2020.04.26.048306 . S2CID   218466913.
  74. Szollosi, A.; Donkin, C. (2019). "Arrested theory development: The misguided distinction between exploratory and confirmatory research". doi:10.31234/osf.io/suzej.
  75. 1 2 Szollosi, A.; Kellen, D.; Navarro, D. J.; Shiffrin, R.; van Rooji, I.; Van Zandt, T.; Donkin, C. (2020). "Is preregistration worthwhile?". Trends in Cognitive Sciences. 24 (2): 94–95. doi:10.1016/j.tics.2019.11.009. PMID   31892461. S2CID   209500379.
  76. Rubin, Mark (2021). "When to adjust alpha during multiple testing: A consideration of disjunction, conjunction, and individual testing". Synthese. 199 (3–4): 10969–11000. arXiv: 2107.02947 . doi:10.1007/s11229-021-03276-4. S2CID   235755301.
  77. Bakker, M.; Veldkamp, C. L. S.; van Assen, M. A. L. M.; Crompvoets, E. A. V.; Ong, H. H.; Nosek, B.; Soderberg, C. K.; Mellor, D.; Wicherts, J. M. (2020). "Ensuring the quality and specificity of preregistrations". PLOS Biol. 18 (12): e3000937. doi: 10.1371/journal.pbio.3000937 . PMC   7725296 . PMID   33296358.
  78. Ikeda, A.; Xu, H.; Fuji, N.; Zhu, S.; Yamada, Y. (2019). "Questionable research practices following pre-registration". Japanese Psychological Review. 62 (3): 281–295.
  79. 1 2 Singh, B.; Fairman, C. M.; Christensen, J. F.; Bolam, K. A.; Twomey, R.; Nunan, D.; Lahart, I. M. (2021). "Outcome reporting bias in exercise oncology trials (OREO): A cross-sectional study". medRxiv   10.1101/2021.03.12.21253378 .
  80. 1 2 Heirene, R.; LaPlante, D.; Louderback, E. R.; Keen, B.; Bakker, M.; Serafimovska, A.; Gainsbury, S. M. "Preregistration specificity & adherence: A review of preregistered gambling studies & cross-disciplinary comparison". PsyArXiv. Retrieved 17 July 2021.
  81. 1 2 van den Akker, Olmo R.; van Assen, Marcel A. L. M.; Bakker, Marjan; Elsherif, Mahmoud; Wong, Tsz Keung; Wicherts, Jelte M. (2023-11-10). "Preregistration in practice: A comparison of preregistered and non-preregistered studies in psychology". Behavior Research Methods. 56 (6): 5424–5433. doi:10.3758/s13428-023-02277-0. ISSN 1554-3528. PMC 11335781. PMID 37950113. This article incorporates text from this source, which is available under the CC BY 4.0 license.
  82. Abrams, E.; Libgober, J.; List, J. A. (2020). "Research registries: Facts, myths, and possible improvements" (PDF). NBER Working Papers. 27250.
  83. 1 2 Claesen, A.; Gomes, S.; Tuerlinckx, F.; Vanpaemel, W.; Leuven, K. U. (2019). "Preregistration: Comparing dream to reality". Royal Society Open Science. 8 (10). doi: 10.31234/osf.io/d8wex . PMC   8548785 . PMID   34729209. S2CID   240688291.
  84. Boghdadly, K. El.; Wiles, M. D.; Atton, S.; Bailey, C. R. (2018). "Adherence to guidance on registration of randomised controlled trials published in Anaesthesia". Anaesthesia. 73 (5): 556–563. doi: 10.1111/anae.14103 . PMID   29292498.
  85. Sun, L. W.; Lee, D. J.; Collins, J. A.; Carll, T. C.; Ramahi, K.; Sandy, S. J.; Unteriner, J. G.; Weinberg, D. V. (2019). "Assessment of consistency between peer-reviewed publications and clinical trial registries". JAMA Ophthalmology. 137 (5): 552–556. doi: 10.1001/jamaophthalmol.2019.0312 . PMC   6512264 . PMID   30946427.
  86. Simmons, J. P.; Nelson, L. D.; Simonsohn, U. (2021). "Pre-registration is a game changer. But, like random assignment, it is neither necessary nor sufficient for credible science". Journal of Consumer Psychology. 31 (1): 177–180. doi:10.1002/jcpy.1207. S2CID   230629031.
  87. Bowman, Sara D.; Dehaven, Alexander Carl; Errington, Timothy M.; Hardwicke, Tom Elis; Mellor, David Thomas; Nosek, Brian A.; Soderberg, Courtney K. "OSF". osf.io. doi:10.31222/osf.io/epgjd. S2CID   242644091 . Retrieved 2023-11-12.
  88. L. Haven, Tamarinde; Van Grootel, Dr. Leonie (2019-04-03). "Preregistering qualitative research". Accountability in Research. 26 (3): 229–244. doi: 10.1080/08989621.2019.1580147 . ISSN   0898-9621. PMID   30741570.
  89. Akker, Olmo R. van den; Weston, Sara; Campbell, Lorne; Chopik, Bill; Damian, Rodica; Davis-Kean, Pamela; Hall, Andrew; Kosie, Jessica; Kruse, Elliott; Olsen, Jerome; Ritchie, Stuart; Valentine, K. D.; Veer, Anna van 't; Bakker, Marjan (2021-11-09). "Preregistration of secondary data analysis: A template and tutorial". Meta-Psychology. 5. doi: 10.15626/MP.2020.2625 . ISSN   2003-2714.
  90. Fischer, Eileen; Guzel, Gulay Taltekin (January 2023). "The case for qualitative research". Journal of Consumer Psychology. 33 (1): 259–272. doi:10.1002/jcpy.1300. ISSN   1057-7408.
  91. 1 2 Souza-Neto, Valério; Moyle, Brent (2025-04-01). "Preregistration is not a panacea, but why? A rejoinder to Chen & Li's (2024) "infusing preregistration into tourism research"". Tourism Management. 107: 105061. doi:10.1016/j.tourman.2024.105061. ISSN   0261-5177.
  92. McPhetres, J. (2020). "What should a preregistration contain?". doi:10.31234/osf.io/cj5mh. S2CID 236855127.
  93. Pham, M. T.; Oh, T. T. (2020). "Preregistration is neither sufficient nor necessary for good science". Journal of Consumer Psychology. 31: 163–176. doi: 10.1002/jcpy.1209 .
  94. Field, S. M.; Wagenmakers, E. J.; Kiers, H. A.; Hoekstra, R.; Ernst, A.F.; van Ravenzwaaij, D. (2020). "The effect of preregistration on trust in empirical research findings: Results of a registered report". Royal Society Open Science. 7 (4): 181351. Bibcode:2020RSOS....781351F. doi: 10.1098/rsos.181351 . PMC   7211853 . PMID   32431853.
  95. Soderberg, C. K.; Errington, T. M.; Schiavone, S. R.; Bottesini, J.; Singleton Thorn, F.; Vazire, S.; Esterling, K. M.; Nosek, B. A. (2020). "Research Quality of registered reports compared to the standard publishing model". doi:10.31222/osf.io/7x9vy. S2CID 242155160.