Preregistration (science)

Preregistration is the practice of registering the hypotheses, methods, and/or analyses of a scientific study before it is conducted. [1] [2] Clinical trial registration is similar, although it may not require the registration of a study's analysis protocol. Finally, registered reports include the peer review and in principle acceptance of a study protocol prior to data collection. [3]

Preregistration assists in the identification and/or reduction of a variety of potentially problematic research practices, including p-hacking, publication bias, data dredging, inappropriate forms of post hoc analysis, and (relatedly) HARKing. It has recently gained prominence in the open science community as a potential solution to some of the issues that are thought to underlie the replication crisis. [1]

Types

Standard preregistration

In the standard preregistration format, researchers prepare a research protocol document prior to conducting their research. Ideally, this document indicates the research hypotheses, sampling procedure, sample size, research design, testing conditions, stimuli, measures, data coding and aggregation method, criteria for data exclusions, and statistical analyses, including potential variations on those analyses. This preregistration document is then posted on a publicly available website such as the Open Science Framework or AsPredicted. The preregistered study is then conducted, and a report of the study and its results is submitted for publication together with access to the (anonymised) preregistration document. This preregistration approach allows peer reviewers and subsequent readers to cross-reference the preregistration document with the published research article in order to identify (a) any “exploratory” tests that were not included in the preregistration document and (b) any suppressed tests that were included in the preregistered protocol but excluded from the final research report.

Registered reports

The registered report format requires authors to submit a description of the study methods and analyses prior to data collection. Once the method and analysis plan is vetted through Stage 1 peer review, publication of the findings is provisionally guaranteed. The associated study is then conducted, and the research report is submitted to Stage 2 peer review. Stage 2 peer review confirms that the actual research methods are consistent with the preregistered protocol and that quality thresholds are met (e.g., manipulation checks confirm the validity of the experimental manipulation). Studies that pass Stage 2 peer review are then published regardless of whether the results are confirming or disconfirming, significant or nonsignificant.

Hence, both preregistration and registered reports involve creating a time-stamped, non-modifiable public record of the study and analysis plan before the data are collected. However, only in the case of registered reports is the study and analysis plan subjected to formal peer review before data collection.

Specialised preregistration

Preregistration can be used in relation to a variety of different research designs and methods, including:

  1. Quantitative research in psychology [4]
  2. Qualitative research [5]
  3. Analyses of preexisting data [6] [7]
  4. Secondary data analysis [8]
  5. Single-case design research [9]
  6. Event-related potential (ERP) research [10]
  7. Experience sampling methods [11]
  8. Exploratory research [12]
  9. Animal research [13]

Clinical trial registration

Clinical trial registration is the practice of documenting clinical trials before they are performed in a clinical trials registry so as to combat publication bias and selective reporting. [14] Registration of clinical trials is required in some countries and is increasingly being standardized. [15] Some top medical journals will only publish the results of trials that have been pre-registered. [16]

A clinical trials registry is a platform that catalogs registered clinical trials. ClinicalTrials.gov, run by the United States National Library of Medicine (NLM), was the first online registry for clinical trials and remains the largest and most widely used. In addition to combating bias, clinical trial registries serve to increase transparency and public access to clinical trials. Clinical trials registries are often searchable (e.g., by disease/indication, drug, location, etc.). Trials are registered by the pharmaceutical, biotech, or medical device company (the sponsor), by the hospital or foundation that is sponsoring the study, or by another organization, such as a contract research organization (CRO) that is running the study.

There has been a push from governments and international organizations, especially since 2005, to make clinical trial information more widely available and to standardize registries and processes of registering. The World Health Organization is working toward "achieving consensus on both the minimal and the optimal operating standards for trial registration". [17]

Creation and development

For many years, scientists and others have worried about reporting biases: negative or null results from initiated clinical trials may be less likely to be published than positive results, thus skewing the literature and our understanding of how well interventions work. [18] This concern is international and has been written about for over 50 years. [19] One of the proposals to address this potential bias was a comprehensive register of initiated clinical trials that would inform the public which trials had been started. [20] The ethical issues seemed to interest the public most: trialists (including those with potential commercial gain) benefited from the people who enrolled in trials but were not required to "give back" by telling the public what they had learned.

Those who were particularly concerned by this double standard were systematic reviewers, who summarize what is known from clinical trials. If the literature is skewed, then the results of a systematic review are also likely to be skewed, possibly favoring the test intervention when the accumulated data, had they all been made public, would not support that conclusion.

ClinicalTrials.gov was originally developed largely as a result of breast cancer consumer lobbying, which led to authorizing language in the FDA Modernization Act of 1997 (Food and Drug Administration Modernization Act of 1997. Pub L No. 105-115, §113 Stat 2296), but the law provided neither funding nor a mechanism of enforcement. In addition, the law required that ClinicalTrials.gov only include trials of serious and life-threatening diseases.

Then, two events occurred in 2004 that increased public awareness of the problems of reporting bias. First, then-New York State Attorney General Eliot Spitzer sued GlaxoSmithKline (GSK) for failing to reveal results from trials showing that certain antidepressants might be harmful. [21]

Shortly thereafter, the International Committee of Medical Journal Editors (ICMJE) announced that their journals would not publish reports of trials unless they had been registered. The ICMJE action was probably the most important motivator for trial registration, as investigators wanted to reserve the possibility that they could publish their results in prestigious journals, should they want to.

In 2007, the Food and Drug Administration Amendments Act of 2007 (FDAAA; Public Law 110-85) clarified the requirements for registration and also set penalties for non-compliance.

International participation

The International Committee of Medical Journal Editors (ICMJE) decided that from July 1, 2005, no trials would be considered for publication unless they were included in a clinical trials registry. [22] [23] The World Health Organization began its push for clinical trial registration with the initiation of the International Clinical Trials Registry Platform. There has also been action from the pharmaceutical industry, which released plans to make clinical trial data more transparent and publicly available. The revised Declaration of Helsinki, released in October 2008, states that "Every clinical trial must be registered in a publicly accessible database before recruitment of the first subject." [24] [25]

The World Health Organization maintains an international registry portal at http://apps.who.int/trialsearch/. [26] WHO states that the international registry's mission is "to ensure that a complete view of research is accessible to all those involved in health care decision making. This will improve research transparency and will ultimately strengthen the validity and value of the scientific evidence base." [27]

Since 2007, the International Committee of Medical Journal Editors (ICMJE) has accepted all primary registries in the WHO network in addition to ClinicalTrials.gov. Registration in registries other than ClinicalTrials.gov has increased across study designs since 2014. [28]

Reporting compliance

Various studies have measured the extent to which various trials are in compliance with the reporting standards of their registry. [29] [30] [31] [32] [33]

Overview of clinical trial registries

Worldwide, there is a growing number of registries. A 2013 study [34] identified the following top five registries (numbers updated as of August 2013):

1. ClinicalTrials.gov: 150,551
2. EU register: 21,060
3. Japan registries network (JPRN): 12,728
4. ISRCTN: 11,794
5. Australia and New Zealand (ANZCTR): 8,216

Overview of preclinical study registries

Similar to clinical research, preregistration can help to improve the transparency and quality of research data in preclinical research. [35] [36] In contrast to clinical research, where preregistration is now mandatory in large part, it is still new in preclinical research. A large part of preclinical and basic biomedical research relies on animal experiments. The non-publication of results gained from animal experiments not only distorts the state of research by reinforcing publication bias, it also represents an ethical issue. [37] [38] Preregistration is discussed as a measure that could counteract this problem. The following registries are suited to the preregistration of preclinical studies:

1. Animalstudyregistry.org
2. AsPredicted
3. OSF Registry
4. Preclinicaltrials.eu

Journal support

Over 200 journals offer a registered reports option (Centre for Open Science, 2019), [39] and the number of journals that are adopting registered reports is approximately doubling each year (Chambers et al., 2019). [40]

Psychological Science has encouraged the preregistration of studies and the reporting of effect sizes and confidence intervals. [41] The editor-in-chief also noted that the editorial staff will ask for replication of studies with surprising findings based on small sample sizes before allowing the manuscripts to be published.

Nature Human Behaviour has adopted the registered report format, as it “shift[s] the emphasis from the results of research to the questions that guide the research and the methods used to answer them”. [42]

European Journal of Personality defines this format: “In a registered report, authors create a study proposal that includes theoretical and empirical background, research questions/hypotheses, and pilot data (if available). Upon submission, this proposal will then be reviewed prior to data collection, and if accepted, the paper resulting from this peer-reviewed procedure will be published, regardless of the study outcomes.” [43]

Note that only a very small proportion of academic journals in psychology and neuroscience explicitly state, in their aims and scope or instructions to authors, that they welcome submissions of replication studies. [44] [45] This does little to encourage the reporting of, or even attempts at, replication studies.

Overall, the number of participating journals is increasing, as indicated by the Center for Open Science, which maintains a list of journals encouraging the submission of registered reports. [46]

Rationale

Several articles have outlined the rationale for preregistration (e.g., Lakens, 2019; Nosek et al., 2018; Wagenmakers et al., 2012). [47] [48] [1] As Rubin (2020, Table 1) summarized, preregistration helps to identify and/or curtail the following issues:

  1. Poorly planned hypotheses and tests
  2. HARKing: undisclosed hypothesizing after the results are known
  3. The suppression of a priori hypotheses that yield null or disconfirming results
  4. Deviations from planned analyses
  5. Lack of clarity between confirmatory and exploratory analyses
  6. Undisclosed multiple testing
  7. Forking paths, in which researchers make decisions about which tests to conduct based on information from their sample
  8. p-hacking: continuing data analysis until a significant p value is obtained
  9. Optional stopping: repeating the same test at different stages of data collection until a significant result is obtained
  10. Invalid use of p values, because p values lose their meaning in exploratory analyses
  11. Researchers’ biases, including the confirmation bias and hindsight bias
  12. Selective reporting of results: “cherry-picking” specific supportive results and suppressing non-supportive results
  13. Unclear test severity, preventing the identification of hypotheses that have a low probability of being confirmed when they are false
  14. Unreported null findings
  15. Publication bias: unpublished null findings, resulting in the file drawer problem
  16. Potentially low replicability, ostensibly due to the use of questionable research practices (e.g., HARKing, p-hacking, optional stopping)
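Items 8 and 9 above can be illustrated with a small simulation (an illustrative sketch added here, not taken from the cited sources; the specific batch sizes and number of "looks" are arbitrary assumptions). Under a true null hypothesis, a single test at the planned sample size produces false positives at roughly the nominal 5% rate, whereas repeatedly testing after each new batch of data and stopping at the first significant result inflates that rate well above 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive_rates(n_sims=2000, batch=20, looks=5, alpha=0.05):
    """Compare two testing strategies on null data (true mean = 0).

    Returns (fixed_rate, stopping_rate):
    - fixed: one t-test at the planned final sample size
    - optional stopping: test after every batch, stop at first p < alpha
    """
    fixed_hits = stop_hits = 0
    for _ in range(n_sims):
        data = rng.normal(0.0, 1.0, batch * looks)  # the null is true
        # Strategy 1: a single test at the preregistered sample size.
        if stats.ttest_1samp(data, 0.0).pvalue < alpha:
            fixed_hits += 1
        # Strategy 2: peek after each batch; stop as soon as p < alpha.
        for k in range(1, looks + 1):
            if stats.ttest_1samp(data[: k * batch], 0.0).pvalue < alpha:
                stop_hits += 1
                break
    return fixed_hits / n_sims, stop_hits / n_sims

fixed, peeking = false_positive_rates()
print(f"fixed-n false positive rate:      {fixed:.3f}")    # close to 0.05
print(f"optional-stopping rate (5 looks): {peeking:.3f}")  # well above 0.05
```

Because every interim look is another chance for a spurious result, the stopping strategy's error rate grows with the number of looks; preregistering the sample size (or using a formal sequential design with corrected thresholds) removes this degree of freedom.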

Identifying issues such as these via preregistration helps to improve "the interpretability and credibility of research findings" (Nosek et al., 2018, p. 2605). [1] However, Rubin (2020) argued that only some of these issues are problematic and only under some conditions. [49] He also argued that, when they are problematic, preregistration is not necessary to identify these issues. Instead, they can be identified via (a) clear rationales for current hypotheses and analytical approaches, (b) public access to research data, materials, and code, and (c) demonstrations of the robustness of research conclusions to alternative interpretations and analytical approaches.

Criticisms

Proponents of preregistration have argued that it is "a method to increase the credibility of published results" (Nosek & Lakens, 2014), that it "makes your science better by increasing the credibility of your results" (Centre for Open Science), and that it "improves the interpretability and credibility of research findings" (Nosek et al., 2018, p. 2605). [1] [50] This argument assumes that non-preregistered exploratory analyses are less "credible" and/or "interpretable" than preregistered confirmatory analyses because they may involve "circular reasoning" in which post hoc hypotheses are based on the observed data (Nosek et al., 2018, p. 2600). [1] However, critics have argued that preregistration is not necessary to identify circular reasoning during exploratory analyses (Rubin, 2020). Circular reasoning can be identified by analysing the reasoning per se without needing to know whether that reasoning was preregistered. Critics have also noted that the idea that preregistration improves research credibility may deter researchers from undertaking non-preregistered exploratory analyses (Coffman & Niederle, 2015; see also Collins et al., 2021, Study 1). [51] [52] In response, preregistration advocates have stressed that exploratory analyses are permitted in preregistered studies, and that the results of these analyses retain some value vis-a-vis hypothesis generation rather than hypothesis testing. Preregistration merely makes the distinction between confirmatory and exploratory research clearer (Nosek et al., 2018; Nosek & Lakens, 2014; Wagenmakers et al., 2012). [1] [47] [50] Hence, although preregistration is supposed to reduce researcher degrees of freedom during the data analysis stage, it is also supposed to be “a plan, not a prison” (Dehaven, 2017).
[53] However, critics counterargue that, if preregistration is only supposed to be a plan, and not a prison, then researchers should feel free to deviate from that plan and undertake exploratory analyses without fearing accusations of low research credibility due to circular reasoning and inappropriate research practices such as p-hacking and unreported multiple testing that leads to inflated familywise error rates (e.g., Navarro, 2020). [54] Again, they have pointed out that preregistration is not necessary to address such concerns. For example, concerns about p-hacking and unreported multiple testing can be addressed if researchers engage in other open science practices, such as (a) open data and research materials and (b) robustness or multiverse analyses (Rubin, 2020; Steegen et al., 2016; for several other approaches, see Srivastava, 2018). [49] [55] [56] Finally, and more fundamentally, critics have argued that the distinction between confirmatory and exploratory analyses is unclear and/or irrelevant (Devezer et al., 2020; Rubin, 2020; Szollosi & Donkin, 2019), [57] [49] [58] and that concerns about inflated familywise error rates are unjustified when those error rates refer to abstract, atheoretical studywise hypotheses that are not being tested (Rubin, 2020, 2021; Szollosi et al., 2020). [49] [59] [60]
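The familywise error rate at issue in this debate has a simple closed form for m independent tests each run at level alpha: FWER = 1 - (1 - alpha)^m. A brief numeric sketch (added here for illustration, not drawn from the cited sources) shows how quickly undisclosed multiple testing inflates it:

```python
# Familywise error rate (FWER) for m independent tests at per-test alpha:
#   P(at least one false positive) = 1 - (1 - alpha)**m
alpha = 0.05
for m in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** m
    print(f"m = {m:2d} tests -> FWER = {fwer:.3f}")
# At m = 10 the chance of at least one spurious "significant" result is
# already about 0.40, which is why unreported multiple testing matters.
```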

There are also concerns about the practical implementation of preregistration. Many preregistered protocols leave plenty of room for p-hacking (Bakker et al., 2020; Heirene et al., 2021; Ikeda et al., 2019; Singh et al., 2021; Van den Akker et al., 2023), [61] [62] [63] [64] [65] and researchers rarely follow the exact research methods and analyses that they preregister (Abrams et al., 2020; Claesen et al., 2019; Heirene et al., 2021; see also Boghdadly et al., 2018; Singh et al., 2021; Sun et al., 2019). [66] [67] [68] [69] [63] [64] For example, preregistered studies appear to be of higher quality than non-preregistered studies only in that they include a power analysis and have larger sample sizes; otherwise, they do not seem to prevent p-hacking and HARKing, as both the proportion of positive results and effect sizes are similar between preregistered and non-preregistered studies (Van den Akker et al., 2023). [65] In addition, a survey of 27 preregistered studies found that researchers deviated from their preregistered plans in all cases (Claesen et al., 2019). [67] The most frequent deviations concerned the planned sample size, exclusion criteria, and statistical model. Hence, what were intended as preregistered confirmatory tests ended up as unplanned exploratory tests. Again, preregistration advocates argue that deviations from preregistered plans are acceptable as long as they are reported transparently and justified. They also point out that even vague preregistrations help to reduce researcher degrees of freedom and make any residual flexibility transparent (Simmons et al., 2021, p. 180). [70] However, critics have argued that it is not useful to identify or justify deviations from preregistered plans when those plans do not reflect high-quality theory and research practice.
As Rubin (2020) explained, “we should be more interested in the rationale for the current method and analyses than in the rationale for historical changes that have led up to the current method and analyses” (pp. 378–379). [49] In addition, preregistering a study requires careful deliberation about the study's hypotheses, research design, and statistical analyses. This deliberation is supported by preregistration templates that provide detailed guidance on what to include and why (Bowman et al., 2016; Haven & Van Grootel, 2019; Van den Akker et al., 2021). [71] [72] [73] Many preregistration templates stress the importance of a power analysis, however, but not the importance of justifying why a particular methodology was used.

Finally, some commentators have argued that, under some circumstances, preregistration may actually harm science by providing a false sense of credibility to research studies and analyses (Devezer et al., 2020; McPhetres, 2020; Pham & Oh, 2020; Szollosi et al., 2020). [57] [74] [59] [75] Consistent with this view, there is some evidence that researchers view registered reports as being more credible than standard reports on a range of dimensions (Soderberg et al., 2020; see also Field et al., 2020 for inconclusive evidence), [76] [77] although it is unclear whether this represents a "false" sense of credibility stemming from pre-existing positive community attitudes about preregistration or a genuine causal effect of registered reports on research quality.


References

  1. Nosek, B. A.; Ebersole, C. R.; DeHaven, A. C.; Mellor, D. T. (2018). "The preregistration revolution". Proceedings of the National Academy of Sciences. 115 (11): 2600–2606. Bibcode:2018PNAS..115.2600N. doi:10.1073/pnas.1708274114. PMC 5856500. PMID 29531091. S2CID 4639380.
  2. Parsons, Sam; Azevedo, Flávio; Elsherif, Mahmoud M.; Guay, Samuel; Shahim, Owen N.; Govaart, Gisela H.; Norris, Emma; O’Mahony, Aoife; Parker, Adam J.; Todorovic, Ana; Pennington, Charlotte R. (2022-02-21). "A community-sourced glossary of open scholarship terms". Nature Human Behaviour. 6 (3): 312–318. doi:10.1038/s41562-021-01269-4. ISSN   2397-3374. PMID   35190714. S2CID   247025114.
  3. "Registered Replication Reports". Association for Psychological Science. Retrieved 2015-11-13.
  4. Bosnjak, M.; Fiebach, C. J.; Mellor, D.; Mueller, S.; O’Connor, D. B.; Oswald, F. L.; Sokol-Chang, R. I. (2021). "A template for preregistration of quantitative research in psychology: Report of the Joint Psychological Societies Preregistration Task Force". The American Psychologist. 77 (4): 602–615. doi:10.31234/osf.io/d7m5r. PMID   34807636. S2CID   236655778.
  5. Haven, T. L.; Van Grootel, D. L. (2019). "Preregistering qualitative research". Accountability in Research. 26 (3): 229–244. doi: 10.1080/08989621.2019.1580147 . PMID   30741570.
  6. Mertens, G.; Krypotos, A. M. (2019). "Preregistration of analyses of preexisting data". Psychologica Belgica. 59 (1): 338–352. doi: 10.5334/pb.493 . PMC   6706998 . PMID   31497308. S2CID   201844047.
  7. Weston, S. J.; Ritchie, S. J.; Rohrer, J. M. (2019). "Recommendations for increasing the transparency of analysis of preexisting data sets". Advances in Methods and Practices in Psychological Science. 2 (3): 214–227. doi: 10.1177/2515245919848684 . PMC   7079740 . PMID   32190814.
  8. Akker, Olmo R. van den; Weston, Sara; Campbell, Lorne; Chopik, Bill; Damian, Rodica; Davis-Kean, Pamela; Hall, Andrew; Kosie, Jessica; Kruse, Elliott; Olsen, Jerome; Ritchie, Stuart (2021-11-09). "Preregistration of secondary data analysis: A template and tutorial". Meta-Psychology. 5. doi: 10.15626/MP.2020.2625 . ISSN   2003-2714.
  9. Johnson, A. H.; Cook, B. G. (2019). "Preregistration in single-case design research". Exceptional Children. 86 (1): 95–112. doi: 10.1177/0014402919868529 . S2CID   204363608.
  10. Paul, M.; Govaart, G. H.; Schettino, A. (2021). "Making ERP research more transparent: Guidelines for preregistration". International Journal of Psychophysiology. 164: 52–63. doi: 10.31234/osf.io/4tgve . hdl: 21.11116/0000-0008-2B30-2 . PMID   33676957.
  11. Kirtley, O. J.; Lafit, G.; Achterhof, R.; Hiekkaranta, A. P.; Myin-Germeys, I. (2019). "Making the black box transparent: A template and tutorial for (pre-)registration of studies using experience sampling methods (ESM)". PsyArXiv. doi: 10.31234/osf.io/seyq7 . S2CID   236657420.
  12. Dirnagl, U. (2020). "Preregistration of exploratory research: Learning from the golden age of discovery". PLOS Biol. 18 (3): e3000690. doi: 10.1371/journal.pbio.3000690 . PMC   7098547 . PMID   32214315.
  13. Bert, Bettina; Heinl, Céline; Chmielewska, Justyna; Schwarz, Franziska; Grune, Barbara; Hensel, Andreas; Greiner, Matthias; Schönfelder, Gilbert (2019-10-15). "Refining animal research: The Animal Study Registry". PLOS Biology. 17 (10): e3000463. doi: 10.1371/journal.pbio.3000463 . ISSN   1545-7885. PMC   6793840 . PMID   31613875.
  14. "International Clinical Trials Registry Platform (ICTRP)". Who.int. Archived from the original on July 19, 2013. Retrieved 2017-06-23.
  15. "WHO | Working Group on Best Practice for Clinical Trials Registers (BPG)". Who.int. Archived from the original on October 12, 2008. Retrieved 2017-06-23.
  16. Barrett, Stephen (13 September 2004). "Major Journals Press for Clinical Trial Registration". www.quackwatch.org. Retrieved 22 May 2019.
  17. "WHO - Working Group on Best Practice for Clinical Trials Registers (BPG)". www.who.int. Archived from the original on September 17, 2008.
  18. Dickersin, K; Rennie, D (2003). "Registering clinical trials". JAMA. 290 (4): 516–523. doi:10.1001/jama.290.4.516. PMID 12876095. S2CID 10184671.
  19. Sterling, TD (1959). "Publication decisions and their possible effects on inferences drawn from tests of significances – or vice versa". J Am Stat Assoc. 54 (285): 30–34. doi:10.1080/01621459.1959.10501497. JSTOR   2282137.
  20. International Collaborative Group on Clinical Trial Registries (1993). "Position paper and consensus recommendations on clinical trial registries. Ad Hoc Working Party of the International Collaborative Group on Clinical Trials Registries". Clin Trials Metaanal. 28 (4–5): 255–266. PMID   10146333.
  21. Dickersin, K; Rennie, D (2012). "The evolution of trial registries and their use to assess the clinical trial enterprise". JAMA. 307 (17): 1861–4. doi:10.1001/jama.2012.4230. PMID   22550202.
  22. SANCTR. "SANCTR > Home". www.sanctr.gov.za.
  23. "ICMJE: Frequently Asked Questions about Clinical Trials Registration". Archived from the original on 2010-07-06. Retrieved 2010-07-23.
  24. "WMA Declaration of Helsinki - Ethical Principles for Medical Research Involving Human Subjects". Archived from the original on 2011-08-30. Retrieved 2010-09-02.
  25. "ANZCTR". www.anzctr.org.au.
  26. Gülmezoglu, AM; Pang, T; Horton, R; Dickersin, K (2005). "WHO facilitates international collaboration in setting standards for clinical trial registration". Lancet. 365 (9474): 1829–1831. doi:10.1016/s0140-6736(05)66589-0. PMID   15924966. S2CID   29203085.
  27. "International Clinical Trials Registry Platform (ICTRP)". World Health Organization.
  28. Banno, M; Tsujimoto, Y; Kataoka, Y (2019). "Studies registered in non-ClinicalTrials.gov accounted for an increasing proportion of protocol registrations in medical research". Journal of Clinical Epidemiology. 116: 106–113. doi:10.1016/j.jclinepi.2019.09.005. PMID   31521723. S2CID   202582999.
  29. Anderson, Monique L.; Chiswell, Karen; Peterson, Eric D.; Tasneem, Asba; Topping, James; Califf, Robert M. (12 March 2015). "Compliance with Results Reporting at ClinicalTrials.gov". New England Journal of Medicine. 372 (11): 1031–1039. doi:10.1056/NEJMsa1409364. PMC   4508873 . PMID   25760355.
  30. DeVito, Nicholas J; Bacon, Seb; Goldacre, Ben (February 2020). "Compliance with legal requirement to report clinical trial results on ClinicalTrials.gov: a cohort study". The Lancet. 395 (10221): 361–369. doi:10.1016/S0140-6736(19)33220-9. PMID   31958402. S2CID   210704225.
  31. Pullar, T; Kumar, S; Feely, M (October 1989). "Compliance in clinical trials". Annals of the Rheumatic Diseases. 48 (10): 871–5. doi:10.1136/ard.48.10.871. PMC   1003898 . PMID   2684057.
  32. Miller, Jennifer E; Korn, David; Ross, Joseph S (12 November 2015). "Clinical trial registration, reporting, publication and FDAAA compliance: a cross-sectional analysis and ranking of new drugs approved by the FDA in 2012". BMJ Open. 5 (11): e009758. doi:10.1136/bmjopen-2015-009758. PMC   4654354 . PMID   26563214.
  33. Miseta, Ed (9 January 2018). "As ClinicalTrialsgov Turns 10 Will We See Compliance Improve". www.clinicalleader.com.
  34. Huser, V.; Cimino, J. J. (2013). "Evaluating adherence to the International Committee of Medical Journal Editors' policy of mandatory, timely clinical trial registration". Journal of the American Medical Informatics Association. 20 (e1): e169–74. doi:10.1136/amiajnl-2012-001501. PMC   3715364 . PMID   23396544.
  35. Wieschowski, Susanne; Silva, Diego S.; Strech, Daniel (2016-11-10). "Animal Study Registries: Results from a Stakeholder Analysis on Potential Strengths, Weaknesses, Facilitators, and Barriers". PLOS Biology. 14 (11): e2000391. doi: 10.1371/journal.pbio.2000391 . ISSN   1545-7885. PMC   5104355 . PMID   27832101.
  36. Kimmelman, Jonathan; Anderson, James A. (June 2012). "Should preclinical studies be registered?". Nature Biotechnology. 30 (6): 488–489. doi:10.1038/nbt.2261. ISSN   1546-1696. PMC   4516408 . PMID   22678379.
  37. Wieschowski, Susanne; Biernot, Svenja; Deutsch, Susanne; Glage, Silke; Bleich, André; Tolba, René; Strech, Daniel (2019-11-26). "Publication rates in animal research. Extent and characteristics of published and non-published animal studies followed up at two German university medical centres". PLOS ONE. 14 (11): e0223758. Bibcode:2019PLoSO..1423758W. doi: 10.1371/journal.pone.0223758 . ISSN   1932-6203. PMC   6879110 . PMID   31770377.
  38. Naald, Mira van der; Wenker, Steven; Doevendans, Pieter A.; Wever, Kimberley E.; Chamuleau, Steven A. J. (2020-08-01). "Publication rate in preclinical research: a plea for preregistration". BMJ Open Science. 4 (1): e100051. doi:10.1136/bmjos-2019-100051. ISSN   2398-8703. PMC   8647586 . PMID   35047690.
  39. Centre for Open Science. "Registered Reports: Peer review before results are known to align scientific values and practices".
  40. Chambers, C. D.; Forstmann, B.; Pruszynski, J. A. (2019). "Science in flux: Registered Reports and beyond at the European Journal of Neuroscience". European Journal of Neuroscience. 49 (1): 4–5. doi: 10.1111/ejn.14319 . PMID   30584679. S2CID   58645509.
  41. Lindsay, D. Stephen (2015-11-09). "Replication in Psychological Science". Psychological Science. 26 (12): 1827–32. doi: 10.1177/0956797615616374 . ISSN   0956-7976. PMID   26553013.
  42. Mellor, D. (2017). "Promoting reproducibility with registered reports". Nature Human Behaviour. 1: 0034. doi: 10.1038/s41562-016-0034 . S2CID   28976450.
  43. "Streamlined review and registered reports soon to be official at EJP". 6 February 2018.
  44. Yeung, Andy W. K. (2017). "Do Neuroscience Journals Accept Replications? A Survey of Literature". Frontiers in Human Neuroscience. 11: 468. doi: 10.3389/fnhum.2017.00468 . ISSN   1662-5161. PMC   5611708 . PMID   28979201.
  45. Martin, G. N.; Clarke, Richard M. (2017). "Are Psychology Journals Anti-replication? A Snapshot of Editorial Practices". Frontiers in Psychology. 8: 523. doi: 10.3389/fpsyg.2017.00523 . ISSN   1664-1078. PMC   5387793 . PMID   28443044.
  46. "Registered Reports Overview". Center for Open Science. Retrieved 2018-11-28.
  47. Wagenmakers, E. J.; Wetzels, R.; Borsboom, D.; van der Maas, H. L.; Kievit, R. A. (2012). "An agenda for purely confirmatory research". Perspectives on Psychological Science. 7 (6): 632–638. doi:10.1177/1745691612463078. PMID 26168122. S2CID 5096417.
  48. Lakens, D. (2019). "The value of preregistration for psychological science: A conceptual analysis" (PDF). Japanese Psychological Review. 62 (3): 221–230.
  49. Rubin, M. (2020). "Does preregistration improve the credibility of research findings?". The Quantitative Methods for Psychology. 16 (4): 376–390. arXiv:2010.10513. doi:10.20982/tqmp.16.4.p376. S2CID 221821323.
  50. Nosek, B. A.; Lakens, D. (2014). "Registered reports: A method to increase the credibility of published results". Social Psychology. 45 (3): 137–141. doi:10.1027/1864-9335/a000192.
  51. Coffman, L. C.; Niederle, M. (2015). "Pre-analysis plans have limited upside, especially where replications are feasible". Journal of Economic Perspectives. 29 (3): 81–98. doi: 10.1257/jep.29.3.81 . S2CID   18163762.
  52. Collins, H.K.; Whillans, A. V.; John, L. K (2021). "Joy and rigor in behavioral science". Organizational Behavior and Human Decision Processes. 164: 179–191. doi:10.1016/j.obhdp.2021.03.002. S2CID   234848511.
  53. Dehaven, A. "Preregistration: A plan, not a prison". Centre for Open Science. Retrieved 25 September 2020.
  54. Navarro, D. (2020). "Paths in strange spaces: A comment on preregistration". doi:10.31234/osf.io/wxn58. S2CID 236797452.
  55. Steegen, S.; Tuerlinckx, F.; Gelman, A.; Vanpaemel, W. (2016). "Increasing transparency through a multiverse analysis". Perspectives on Psychological Science. 11 (5): 702–712. doi: 10.1177/1745691616658637 . PMID   27694465.
  56. Srivastava, S. (2018). "Sound inference in complicated research: A multi-strategy approach". PsyArXiv. doi:10.31234/osf.io/bwr48. S2CID   86539993.
  57. Devezer, B.; Navarro, D. J.; Vandekerckhove, J.; Buzbas, E. O. (2020). "The case for formal methodology in scientific reform" (PDF). bioRxiv: 2020.04.26.048306. doi:10.1101/2020.04.26.048306. S2CID 218466913.
  58. Szollosi, A.; Donkin, C. (2019). "Arrested theory development: The misguided distinction between exploratory and confirmatory research". doi:10.31234/osf.io/suzej.
  59. Szollosi, A.; Kellen, D.; Navarro, D. J.; Shiffrin, R.; van Rooji, I.; Van Zandt, T.; Donkin, C. (2020). "Is preregistration worthwhile?". Trends in Cognitive Sciences. 24 (2): 94–95. doi:10.1016/j.tics.2019.11.009. PMID 31892461. S2CID 209500379.
  60. Rubin, Mark (2021). "When to adjust alpha during multiple testing: A consideration of disjunction, conjunction, and individual testing". Synthese. 199 (3–4): 10969–11000. arXiv: 2107.02947 . doi:10.1007/s11229-021-03276-4. S2CID   235755301.
  61. Bakker, M.; Veldkamp, C. L. S.; van Assen, M. A. L. M.; Crompvoets, E. A. V.; Ong, H. H.; Nosek, B.; Soderberg, C. K.; Mellor, D.; Wicherts, J. M. (2020). "Ensuring the quality and specificity of preregistrations". PLOS Biol. 18 (12): e3000937. doi: 10.1371/journal.pbio.3000937 . PMC   7725296 . PMID   33296358.
  62. Ikeda, A.; Xu, H.; Fuji, N.; Zhu, S.; Yamada, Y. (2019). "Questionable research practices following pre-registration". Japanese Psychological Review. 62 (3): 281–295.
  63. Singh, B.; Fairman, C. M.; Christensen, J. F.; Bolam, K. A.; Twomey, R.; Nunan, D.; Lahart, I. M. (2021). "Outcome reporting bias in exercise oncology trials (OREO): A cross-sectional study". medRxiv 10.1101/2021.03.12.21253378.
  64. Heirene, R.; LaPlante, D.; Louderback, E. R.; Keen, B.; Bakker, M.; Serafimovska, A.; Gainsbury, S. M. "Preregistration specificity & adherence: A review of preregistered gambling studies & cross-disciplinary comparison". PsyArXiv. Retrieved 17 July 2021.
  65. van den Akker, Olmo R.; van Assen, Marcel A. L. M.; Bakker, Marjan; Elsherif, Mahmoud; Wong, Tsz Keung; Wicherts, Jelte M. (2023-11-10). "Preregistration in practice: A comparison of preregistered and non-preregistered studies in psychology". Behavior Research Methods. doi:10.3758/s13428-023-02277-0. ISSN 1554-3528. PMID 37950113. This article incorporates text from this source, which is available under the CC BY 4.0 license.
  66. Abrams, E.; Libgober, J.; List, J. A. (2020). "Research registries: Facts, myths, and possible improvements" (PDF). NBER Working Papers. 27250.
  67. Claesen, A.; Gomes, S.; Tuerlinckx, F.; Vanpaemel, W.; Leuven, K. U. (2019). "Preregistration: Comparing dream to reality". Royal Society Open Science. 8 (10). doi:10.31234/osf.io/d8wex. PMC 8548785. PMID 34729209. S2CID 240688291.
  68. Boghdadly, K. El.; Wiles, M. D.; Atton, S.; Bailey, C. R. (2018). "Adherence to guidance on registration of randomised controlled trials published in Anaesthesia". Anaesthesia. 73 (5): 556–563. doi: 10.1111/anae.14103 . PMID   29292498.
  69. Sun, L. W.; Lee, D. J.; Collins, J. A.; Carll, T. C.; Ramahi, K.; Sandy, S. J.; Unteriner, J. G.; Weinberg, D. V. (2019). "Assessment of consistency between peer-reviewed publications and clinical trial registries". JAMA Ophthalmology. 137 (5): 552–556. doi: 10.1001/jamaophthalmol.2019.0312 . PMC   6512264 . PMID   30946427.
  70. Simmons, J. P.; Nelson, L. D.; Simonsohn, U. (2021). "Pre-registration is a game changer. But, like random assignment, it is neither necessary nor sufficient for credible science". Journal of Consumer Psychology. 31 (1): 177–180. doi:10.1002/jcpy.1207. S2CID   230629031.
  71. Bowman, Sara D.; Dehaven, Alexander Carl; Errington, Timothy M.; Hardwicke, Tom Elis; Mellor, David Thomas; Nosek, Brian A.; Soderberg, Courtney K. "OSF". osf.io. doi:10.31222/osf.io/epgjd. S2CID 242644091. Retrieved 2023-11-12.
  72. Haven, Tamarinde L.; Van Grootel, Leonie (2019-04-03). "Preregistering qualitative research". Accountability in Research. 26 (3): 229–244. doi:10.1080/08989621.2019.1580147. ISSN 0898-9621. PMID 30741570.
  73. Akker, Olmo R. van den; Weston, Sara; Campbell, Lorne; Chopik, Bill; Damian, Rodica; Davis-Kean, Pamela; Hall, Andrew; Kosie, Jessica; Kruse, Elliott; Olsen, Jerome; Ritchie, Stuart; Valentine, K. D.; Veer, Anna van 't; Bakker, Marjan (2021-11-09). "Preregistration of secondary data analysis: A template and tutorial". Meta-Psychology. 5. doi: 10.15626/MP.2020.2625 . ISSN   2003-2714.
  74. McPhetres, J. (2020). "What should a preregistration contain?". doi:10.31234/osf.io/cj5mh. S2CID 236855127.
  75. Pham, M. T.; Oh, T. T. (2020). "Preregistration is neither sufficient nor necessary for good science". Journal of Consumer Psychology. 31: 163–176. doi: 10.1002/jcpy.1209 .
  76. Field, S. M.; Wagenmakers, E. J.; Kiers, H. A.; Hoekstra, R.; Ernst, A.F.; van Ravenzwaaij, D. (2020). "The effect of preregistration on trust in empirical research findings: Results of a registered report". Royal Society Open Science. 7 (4): 181351. Bibcode:2020RSOS....781351F. doi: 10.1098/rsos.181351 . PMC   7211853 . PMID   32431853.
  77. Soderberg, C. K.; Errington, T. M.; Schiavone, S. R.; Bottesini, J.; Singleton Thorn, F.; Vazire, S.; Esterling, K. M.; Nosek, B. A. (2020). "Research quality of registered reports compared to the standard publishing model". doi:10.31222/osf.io/7x9vy. S2CID 242155160.