Peer review

A reviewer at the American National Institutes of Health evaluates a grant proposal.

Peer review is the evaluation of work by one or more people with similar competencies as the producers of the work (peers). It functions as a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are used to maintain quality standards, improve performance, and provide credibility. In academia, scholarly peer review is often used to determine an academic paper's suitability for publication. Peer review can be categorized by the type of activity and by the field or profession in which the activity occurs, e.g., medical peer review.

Professional

Professional peer review focuses on the performance of professionals, with a view to improving quality, upholding standards, or providing certification. In academia, peer review is used to inform decisions related to faculty advancement and tenure. [1] Henry Oldenburg (1619–1677) was a German-born British philosopher who is seen as the 'father' of modern scientific peer review. [2] [3] [4]

A prototype professional peer-review process was recommended in the Ethics of the Physician written by Ishāq ibn ʻAlī al-Ruhāwī (854–931). He stated that a visiting physician had to make duplicate notes of a patient's condition on every visit. When the patient was cured or had died, the notes of the physician were examined by a local medical council of other physicians, who would decide whether the treatment had met the required standards of medical care. [5]

Professional peer review is common in the field of health care, where it is usually called clinical peer review. [6] Further, since peer review activity is commonly segmented by clinical discipline, there is also physician peer review, nursing peer review, dentistry peer review, etc. [7] Many other professional fields have some level of peer review process: accounting, [8] law, [9] [10] engineering (e.g., software peer review, technical peer review), aviation, and even forest fire management. [11]

Peer review is used in education to achieve certain learning objectives, particularly as a tool to reach higher order processes in the affective and cognitive domains as defined by Bloom's taxonomy. This may take a variety of forms, including closely mimicking the scholarly peer review processes used in science and medicine. [12] [13]

Scholarly

Scholarly peer review (also known as refereeing) is the process of subjecting an author's scholarly work, research, or ideas to the scrutiny of others who are experts in the same field, before a paper describing this work is published in a journal, conference proceedings or as a book. The peer review helps the publisher (that is, the editor-in-chief, the editorial board or the program committee) decide whether the work should be accepted, considered acceptable with revisions, or rejected.
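The decision step described above can be thought of as mapping a set of reviewer recommendations onto one of the three editorial outcomes. The sketch below is purely illustrative; real editorial decisions are made by editors weighing the content of the reviews rather than by a fixed rule, and the names used here (the Recommendation enum and the editorial_decision function) are hypothetical.

```python
from enum import Enum

class Recommendation(Enum):
    ACCEPT = "accept"
    MINOR_REVISIONS = "minor revisions"
    MAJOR_REVISIONS = "major revisions"
    REJECT = "reject"

def editorial_decision(reviews: list[Recommendation]) -> str:
    """Toy aggregation rule: any reject recommendation leads to rejection,
    any revision request leads to 'acceptable with revisions',
    otherwise the manuscript is accepted."""
    if Recommendation.REJECT in reviews:
        return "rejected"
    if Recommendation.MAJOR_REVISIONS in reviews or Recommendation.MINOR_REVISIONS in reviews:
        return "acceptable with revisions"
    return "accepted"

# Example: two referees, one asking for minor revisions.
print(editorial_decision([Recommendation.ACCEPT, Recommendation.MINOR_REVISIONS]))
# -> acceptable with revisions
```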

Peer review requires a community of experts in a given (and often narrowly defined) field, who are qualified and able to perform reasonably impartial review. Impartial review, especially of work in less narrowly defined or inter-disciplinary fields, may be difficult to accomplish, and the significance (good or bad) of an idea may never be widely appreciated among its contemporaries. Peer review is generally considered necessary to academic quality and is used in most major scholarly journals. However, peer review does not prevent publication of invalid research, [14] and there is little evidence that peer review improves the quality of published papers. [15]

There are attempts to reform the peer review process, including from the fields of metascience and journalology. Reformers seek to increase the reliability and efficiency of the peer review process and to provide it with a scientific foundation. [16] [17] [18] Alternatives to common peer review practices have been put to the test, [19] [20] in particular open peer review, where the comments are visible to readers, generally with the identities of the peer reviewers disclosed as well, e.g., F1000, eLife, BMJ, and BioMed Central.

Government policy

The European Union has been using peer review in the "Open Method of Co-ordination" of policies in the fields of active labour market policy since 1999. [21] In 2004, a program of peer reviews started in social inclusion. [22] Each program sponsors about eight peer review meetings in each year, in which a "host country" lays a given policy or initiative open to examination by half a dozen other countries and the relevant European-level NGOs. These usually meet over two days and include visits to local sites where the policy can be seen in operation. The meeting is preceded by the compilation of an expert report on which participating "peer countries" submit comments. The results are published on the web.

The United Nations Economic Commission for Europe, through UNECE Environmental Performance Reviews, uses peer review, referred to as "peer learning", to evaluate progress made by its member countries in improving their environmental policies.

The State of California is the only U.S. state to mandate scientific peer review. In 1997, the Governor of California signed into law Senate Bill 1320 (Sher), Chapter 295, statutes of 1997, which mandates that, before any CalEPA Board, Department, or Office adopts a final version of a rule-making, the scientific findings, conclusions, and assumptions on which the proposed rule is based must be submitted for independent external scientific peer review. This requirement is incorporated into the California Health and Safety Code Section 57004. [23]

Medical

Medical peer review may be divided into four classifications: [24]

1) Clinical peer review: a procedure for evaluating the quality of care provided in individual patient cases. It is a component of ongoing professional practice evaluation and focused professional practice evaluation, both significant contributors to provider credentialing and privileging. [25]

2) Peer evaluation of clinical teaching skills for both physicians and nurses; [26] [27]

3) Scientific peer review of journal articles;

4) A secondary round of peer review for the clinical value of articles concurrently published in medical journals. [28]

Additionally, "medical peer review" has been used by the American Medical Association to refer not only to the process of improving quality and safety in health care organizations, but also to the process of rating clinical behavior or compliance with professional society membership standards. [29] [30] The clinical network believes it to be the most ideal method of guaranteeing that distributed exploration is dependable and that any clinical medicines that it advocates are protected and viable for individuals. Thus, the terminology has poor standardization and specificity, particularly as a database search term. [31]

Technical

In engineering, technical peer review is a type of engineering review. Technical peer reviews are a well-defined review process for finding and fixing defects, conducted by a team of peers with assigned roles. Technical peer reviews are carried out by peers representing the areas of the life cycle affected by the material being reviewed (usually limited to 6 or fewer people). Technical peer reviews are held within development phases, between milestone reviews, on completed products or completed portions of products. [32]
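As an illustration of the constraints just described (assigned roles, a small team of six or fewer peers, and recording defects for later fixing), the following minimal sketch models a technical peer review record. It is hypothetical and not drawn from the NASA handbook cited above; the class and role names are assumptions made for the example.

```python
from dataclasses import dataclass, field

# Hypothetical role names for illustration only.
ROLES = {"moderator", "author", "reader", "recorder", "reviewer"}

@dataclass
class TechnicalPeerReview:
    """Illustrative record of a technical peer review of one work product."""
    work_product: str
    participants: dict          # participant name -> assigned role
    defects: list = field(default_factory=list)

    def __post_init__(self):
        # Reviews are usually limited to 6 or fewer people.
        if len(self.participants) > 6:
            raise ValueError("technical peer reviews are usually limited to 6 or fewer people")
        unknown = set(self.participants.values()) - ROLES
        if unknown:
            raise ValueError(f"unassigned or unknown roles: {unknown}")

    def log_defect(self, description: str, severity: str = "minor"):
        # Defects found during the review are recorded so they can be fixed.
        self.defects.append({"description": description, "severity": severity})

review = TechnicalPeerReview(
    work_product="flight software requirements v0.3",
    participants={"Ana": "moderator", "Ben": "author", "Cai": "reviewer", "Dee": "recorder"},
)
review.log_defect("requirement R-12 conflicts with R-07", severity="major")
```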

Criticism

To an outsider, the anonymous, pre-publication peer review process is opaque. Certain journals are accused of not carrying out stringent peer review in order to expand their customer base more easily, particularly journals in which authors pay a fee before publication. [33] Richard Smith, MD, former editor of the British Medical Journal, has claimed that peer review is "ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant; Several studies have shown that peer review is biased against the provincial and those from low- and middle-income countries; Many journals take months and even years to publish and the process wastes researchers' time. As for the cost, the Research Information Network estimated the global cost of peer review at £1.9 billion in 2008." [34]

In addition, Australia's Innovative Research Universities group (a coalition of seven comprehensive universities committed to inclusive excellence in teaching, learning and research in Australia) has found that "peer review disadvantages researchers in their early careers, when they rely on competitive grants to cover their salaries, and when unsuccessful funding applications often mark the end of a research idea". [35]

Low-end distinctions in articles understandable to all peers

John Ioannidis argues that because the exams and other tests that people pass on their way from "layman" to "expert" reward answering questions quickly and in accordance with a list of approved answers, rather than making precise distinctions, there is as much individual variation in the ability to distinguish causation from correlation among "experts" as there is among "laymen". As a result, he argues, scholarly peer review by many "experts" passes only articles that are understandable across a wide range of cognitive precision levels, including very low ones. This biases publication towards articles that infer causation from correlation, while articles that do make the distinction risk being mislabelled as "incompetent overestimation of one's ability" on the part of the authors, because some of the reviewing "experts" cannot tell the distinction apart from alleged rationalization of specific conclusions. Ioannidis argues that this makes peer review a cause of selective publication of false research findings while blocking publication of rigorous criticism of them, and that post-publication review repeats the same bias by selectively retracting the few rigorous articles that made it through initial pre-publication peer review while letting the low-end articles that confuse correlation and causation remain in print. [36]
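The correlation-versus-causation distinction at the heart of this argument can be illustrated with a small simulation. The sketch below is purely illustrative and is not drawn from Ioannidis's paper: two variables that share a hidden common cause are strongly correlated even though neither causes the other, so a strong correlation alone cannot justify a causal claim.

```python
import random

# Illustrative sketch: a hidden common cause Z drives both X and Y,
# so X and Y are correlated even though neither causes the other.
random.seed(0)

n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]           # unobserved confounder
x = [zi + random.gauss(0, 0.5) for zi in z]          # X depends only on Z
y = [zi + random.gauss(0, 0.5) for zi in z]          # Y depends only on Z

def pearson(a, b):
    """Plain Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

# Strong correlation (about 0.8) despite no causal link between X and Y:
print(f"corr(X, Y) = {pearson(x, y):.2f}")
```

A reviewer who judges a paper only by the strength of a reported correlation would miss that the data are equally consistent with a purely confounded relationship.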

Peer review and trust

Researchers have peer reviewed manuscripts prior to publishing them in a variety of ways since the 18th century. [37] [38] The main goal of this practice is to improve the relevance and accuracy of scientific discussions. Even though experts often criticize peer review for a number of reasons, the process is still often considered the "gold standard" of science. [39] Occasionally, however, peer review approves studies that are later found to be wrong, and only rarely are deceptive or fraudulent results discovered prior to publication. [40] [41] Thus, there seems to be an element of discord between the ideology behind and the practice of peer review. By failing to effectively communicate that peer review is imperfect, the message conveyed to the wider public is that studies published in peer-reviewed journals are "true" and that peer review protects the literature from flawed science. A number of well-established criticisms exist of many elements of peer review. [42] [43] [44] In the following, we describe cases of the wider impact that inappropriate peer review can have on public understanding of the scientific literature.

Multiple examples across several areas of science find that scientists elevated the importance of peer review for research that was questionable or corrupted. For example, climate change deniers have published studies in the Energy and Environment journal, attempting to undermine the body of research that shows how human activity impacts the Earth's climate. Politicians in the United States who reject the established science of climate change have then cited this journal on several occasions in speeches and reports. [note 1]

At times, peer review has been exposed as a process that was orchestrated for a preconceived outcome. The New York Times gained access to confidential peer review documents for studies sponsored by the National Football League (NFL) that were cited as scientific evidence that brain injuries do not cause long-term harm to its players. [note 2] During the peer review process, the authors of the study stated that all NFL players were part of a study, a claim that the reporters found to be false by examining the database used for the research. Furthermore, The Times noted that the NFL sought to legitimize the studies' methods and conclusion by citing a "rigorous, confidential peer-review process" despite evidence that some peer reviewers seemed "desperate" to stop their publication. Recent research has also demonstrated that widespread industry funding for published medical research often goes undeclared and that such conflicts of interest are not appropriately addressed by peer review. [45] [46]

Another problem that peer review fails to catch is ghostwriting, a process by which companies draft articles for academics who then publish them in journals, sometimes with little or no changes. [47] These studies can then be used for political, regulatory and marketing purposes. In 2010, the US Senate Finance Committee released a report that found this practice was widespread, that it corrupted the scientific literature and increased prescription rates. [note 3] Ghostwritten articles have appeared in dozens of journals, involving professors at several universities. [note 4]

Just as experts in a particular field have a better understanding of the value of papers published in their area, scientists are considered to have a better grasp of the value of published papers than the general public and to see peer review as a human process, with human failings, [48] and that "despite its limitations, we need it. It is all we have, and it is hard to imagine how we would get along without it". [49] But these subtleties are lost on the general public, who are often misled into thinking that publication in a peer-reviewed journal is the "gold standard" and can erroneously equate published research with the truth. [48] Thus, more care must be taken over how peer review, and the results of peer-reviewed research, are communicated to non-specialist audiences, particularly during a time in which a range of technical changes and a deeper appreciation of the complexities of peer review are emerging. [50] [51] [52] [53] This will be needed as the scholarly publishing system has to confront wider issues such as retractions [41] [54] [55] and the replication or reproducibility "crisis". [56] [57] [58]

Views of peer review

Peer review is often considered integral to scientific discourse in one form or another. Its gatekeeping role is supposed to be necessary to maintain the quality of the scientific literature [59] [60] and avoid a risk of unreliable results, inability to separate signal from noise, and slow scientific progress. [61] [62]

Shortcomings of peer review have been met with calls for even stronger filtering and more gatekeeping. A common argument in favor of such initiatives is the belief that this filter is needed to maintain the integrity of the scientific literature. [63] [64]

Calls for more oversight have at least two implications that run counter to what is known about true scholarship. [48]

  1. The belief that scholars are incapable of evaluating the quality of work on their own, that they are in need of a gatekeeper to inform them of what is good and what is not.
  2. The belief that scholars need a "guardian" to make sure they are doing good work.

Others argue [48] that authors most of all have a vested interest in the quality of a particular piece of work. Only the authors could have, as Feynman (1974) [note 5] puts it, the "extra type of integrity that is beyond not lying, but bending over backwards to show how you're maybe wrong, that you ought to have when acting as a scientist." If anything, the current peer review process and academic system could penalize, or at least fail to incentivize, such integrity.

Instead, the credibility conferred by the "peer-reviewed" label could diminish what Feynman calls the culture of doubt necessary for science to operate as a self-correcting, truth-seeking process. [65] The effects of this can be seen in the ongoing replication crisis, hoaxes, and widespread outrage over the inefficacy of the current system. [42] [37] It is common to think that more oversight is the answer, yet peer reviewers are not lacking in skepticism. The issue is not the skepticism shared by the select few who determine whether an article passes through the filter; it is the validation, and accompanying lack of skepticism, that comes afterwards. [note 6] Here again, more oversight only adds to the impression that peer review ensures quality, thereby further diminishing the culture of doubt and counteracting the spirit of scientific inquiry. [note 7]

Quality research - even some of our most fundamental scientific discoveries - dates back centuries, long before peer review took its current form. [37] [66] [38] Whatever peer review existed centuries ago, it took a different form than it does in modern times, without the influence of large, commercial publishing companies or a pervasive culture of publish or perish. [66] Though in its initial conception it was often a laborious and time-consuming task, researchers took peer review on nonetheless, not out of obligation but out of duty to uphold the integrity of their own scholarship. They managed to do so, for the most part, without the aid of centralised journals, editors, or any formalised or institutionalised process whatsoever. Supporters of modern technology argue [48] that it makes it possible to communicate instantaneously with scholars around the globe, make such scholarly exchanges easier, and restore peer review to a purer scholarly form, as a discourse in which researchers engage with one another to better clarify, understand, and communicate their insights. [51] [67]

Such modern technology includes posting results to preprint servers, preregistration of studies, open peer review, and other open science practices. [57] [68] [69] In all these initiatives, the role of gatekeeping remains prominent, as if a necessary feature of all scholarly communication, but critics argue [44] that a proper, real-world implementation could test and disprove this assumption, demonstrate researchers' desire for more than traditional journals can offer, and show that researchers can be entrusted to perform their own quality control independent of journal-coupled review. Jon Tennant also argues that the outcry over the inefficiencies of traditional journals centers on their inability to provide rigorous enough scrutiny, and on the outsourcing of critical thinking to a concealed and poorly understood process. Thus, the assumption that journals and peer review are required to protect scientific integrity seems to undermine the very foundations of scholarly inquiry. [48]

To test the hypothesis that filtering is indeed unnecessary for quality control, many of the traditional publication practices would need to be redesigned, editorial boards repurposed if not disbanded, and authors granted control over the peer review of their own work. Putting authors in charge of their own peer review is seen as serving a dual purpose. [48] On one hand, it removes the conferral of quality within the traditional system, thus eliminating the prestige associated with the simple act of publishing. Perhaps paradoxically, the removal of this barrier might actually result in an increase in the quality of published work, as it eliminates the cachet of publishing for its own sake. On the other hand, readers know that there is no filter so they must interpret anything they read with a healthy dose of skepticism, thereby naturally restoring the culture of doubt to scientific practice. [70] [71] [72]

In addition to concerns about the quality of work produced by well-meaning researchers, there are concerns that a truly open system would allow the literature to be populated with junk and propaganda by those with a vested interest in certain issues. A counterargument is that the conventional model of peer review diminishes the healthy skepticism that is a hallmark of scientific inquiry, and thus confers credibility upon subversive attempts to infiltrate the literature. [48] Allowing such "junk" to be published could make individual articles less reliable but render the overall literature more robust by fostering a "culture of doubt". [70]

One initiative experimenting in this area is Researchers.One, a non-profit peer review publication platform featuring a novel author-driven peer review process. [73] Other similar examples include the Self-Journal of Science, PRElights, and The Winnower, which do not yet seem to have greatly disrupted the traditional peer review workflow. Supporters conclude that researchers are more than responsible and competent enough to ensure their own quality control; they just need the means and the authority to do so. [48]

Notes

  1. "Skeptics get a journal" (PDF)., Paul Thacker, 2005.
  2. "N.F.L.'s Flawed Concussion Research and Ties to Tobacco Industry"..
  3. "Ghostwriting in medical literature" (PDF)..
  4. "Frequently asked questions about medical ghostwriting"..
  5. "Cargo cult science"., Richard Feynman.
  6. "Peer Review: The Worst Way to Judge Research, Except for All the Others"., Aaron E. Carroll, New York Times.
  7. "Bucking the Big Bang"., Eric Lerner, New Scientist.

Related Research Articles

Evidence-based medicine (EBM) is "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients." The aim of EBM is to integrate the experience of the clinician, the values of the patient, and the best available scientific information to guide decision-making about clinical management. The term was originally used to describe an approach to teaching the practice of medicine and improving decisions by individual physicians about individual patients.

Academic publishing

Academic publishing is the subfield of publishing which distributes academic research and scholarship. Most academic work is published in academic journal article, book or thesis form. The part of academic written output that is not formally published but merely printed up or posted on the Internet is often called "grey literature". Most scientific and scholarly journals, and many academic and scholarly books, though not all, are based on some form of peer review or editorial refereeing to qualify texts for publication. Peer review quality and selectivity standards vary greatly from journal to journal, publisher to publisher, and field to field.

Academic journal

An academic or scholarly journal is a periodical publication in which scholarship relating to a particular academic discipline is published. Academic journals serve as permanent and transparent forums for the presentation, scrutiny, and discussion of research. They are usually peer-reviewed or refereed. Content typically takes the form of articles presenting original research, review articles, and book reviews. The purpose of an academic journal, according to Henry Oldenburg, is to give researchers a venue to "impart their knowledge to one another, and contribute what they can to the Grand design of improving natural knowledge, and perfecting all Philosophical Arts, and Sciences."

Open access

Open access (OA) is a set of principles and a range of practices through which research outputs are distributed online, free of cost or other access barriers. With open access strictly defined, or libre open access, barriers to copying or reuse are also reduced or removed by applying an open license for copyright.

Scientific literature comprises scholarly publications that report original empirical and theoretical work in the natural and social sciences, and within an academic field, often abbreviated as the literature. Academic publishing is the process of contributing the results of one's research into the literature, which often requires a peer-review process.

Publication bias is a type of bias that occurs in published academic research. It occurs when the outcome of an experiment or research study influences the decision whether to publish or otherwise distribute it. Publishing only results that show a significant finding disturbs the balance of findings, and inserts bias in favor of positive results. The study of publication bias is an important topic in metascience.

A public health journal is a scientific journal devoted to the field of public health, including epidemiology, biostatistics, and health care. Public health journals, like most scientific journals, are peer-reviewed. Public health journals are commonly published by health organizations and societies, such as the Bulletin of the World Health Organization or the Journal of Epidemiology and Community Health. Many others are published by a handful of large publishing corporations that includes Elsevier, Wolters Kluwer, Wiley-Blackwell, Springer Science+Business Media, and Informa, each of which has many imprints. Many societies partner with such corporations to handle the work of producing their journals.

Scientometrics is the field of study which concerns itself with measuring and analysing scientific literature. Scientometrics is a sub-field of bibliometrics. Major research issues include the measurement of the impact of research papers and academic journals, the understanding of scientific citations, and the use of such measurements in policy and management contexts. In practice there is a significant overlap between scientometrics and other scientific fields such as information systems, information science, science of science policy, sociology of science, and metascience. Critics have argued that over-reliance on scientometrics has created a system of perverse incentives, producing a publish or perish environment that leads to low quality research.

PLOS Medicine

PLOS Medicine is a peer-reviewed weekly medical journal covering the full spectrum of the medical sciences. It began operation on October 19, 2004, as the second journal of the Public Library of Science (PLOS), a non-profit open access publisher. All content in PLOS Medicine is published under the Creative Commons "by-attribution" license. To fund the journal, the publication's business model requires in most cases that authors pay publication fees. The journal was published online and in a printed format until 2005 and is now only published online. The journal's acting chief editor is Clare Stone, who replaced the previous chief editor, Larry Peiperl, in 2018.

Medical literature

Medical literature is the scientific literature of medicine: articles in journals and texts in books devoted to the field of medicine. Many references to the medical literature include the health care literature generally, including that of dentistry, veterinary medicine, pharmacy, nursing, and the allied health professions.

PLOS One

PLOS One is a peer-reviewed open access scientific journal published by the Public Library of Science (PLOS) since 2006. The journal covers primary research from any discipline within science and medicine. The Public Library of Science began in 2000 with an online petition initiative by Nobel Prize winner Harold Varmus, formerly director of the National Institutes of Health and at that time director of Memorial Sloan–Kettering Cancer Center; Patrick O. Brown, a biochemist at Stanford University; and Michael Eisen, a computational biologist at the University of California, Berkeley, and the Lawrence Berkeley National Laboratory.

A review article is an article that summarizes the current state of understanding on a topic. A review article surveys and summarizes previously published studies, rather than reporting new facts or analysis. Review articles are sometimes also called survey articles or, in news publishing, overview articles. Academic publications that specialize in review articles are known as review journals.

Scholarly peer review is the process of subjecting an author's scholarly work, research, or ideas to the scrutiny of others who are experts in the same field, before a paper describing this work is published in a journal, conference proceedings or as a book. The peer review helps the publisher decide whether the work should be accepted, considered acceptable with revisions, or rejected.

Preferred Reporting Items for Systematic Reviews and Meta-Analyses

PRISMA is an evidence-based minimum set of items aimed at helping authors to report a wide array of systematic reviews and meta-analyses that assess the benefits and harms of a health care intervention. PRISMA focuses on ways in which authors can ensure a transparent and complete reporting of this type of research. The PRISMA standard supersedes the QUOROM standard.

Predatory publishing

Predatory publishing, sometimes called write-only publishing or deceptive publishing, is an exploitive academic publishing business model that involves charging publication fees to authors without checking articles for quality and legitimacy and without providing the other editorial and publishing services that legitimate academic journals provide, whether open access or not. They are regarded as predatory because scholars are tricked into publishing with them, although some authors may be aware that the journal is poor quality or even fraudulent. New scholars from developing countries are said to be especially at risk of being misled by predatory publishers. According to one study, 60% of articles published in predatory journals receive no citations over the five-year period following publication.

Medicine is an open access peer-reviewed medical journal published by Lippincott Williams & Wilkins (LWW), an imprint of Wolters Kluwer. It was established in 1922. Of general medical journals still in publication since 1959, Medicine had the highest number of citations per paper between 1959 and 2009. The journal covers all aspects of clinical medicine and publishes in over 43 specialty subjects.

Replication crisis

The replication crisis is, as of 2020, an ongoing methodological crisis in which it has been found that many scientific studies are difficult or impossible to replicate or reproduce. The replication crisis affects the social sciences and medicine most severely. The crisis has long-standing roots; the phrase was coined in the early 2010s as part of a growing awareness of the problem. The replication crisis represents an important body of research in the field of metascience.

Metascience is the use of scientific methodology to study science itself. Metascience seeks to increase the quality of scientific research while reducing waste. It is also known as "research on research" and "the science of science", as it uses research methods to study how research is done and where improvements can be made. Metascience concerns itself with all fields of research and has been described as "a bird's eye view of science." In the words of John Ioannidis, "Science is the best thing that has happened to human beings ... but we can do it better."

Conflicts of interest in academic publishing

Conflicts of interest (COIs) often arise in academic publishing. Such conflicts may cause wrongdoing and make it more likely. Ethical standards in academic publishing exist to avoid and deal with conflicts of interest, and the field continues to develop new standards. Standards vary between journals and are unevenly applied. According to the International Committee of Medical Journal Editors, "[a]uthors have a responsibility to evaluate the integrity, history, practices and reputation of the journals to which they submit manuscripts".

Journalology is the scholarly study of all aspects of the academic publishing process. The field seeks to improve the quality of scholarly research by implementing evidence-based practices in academic publishing. The term "journalology" was coined by Stephen Lock, the former editor-in-chief of the BMJ. The first Peer Review Congress, held in 1989 in Chicago, Illinois, is considered a pivotal moment in the founding of journalology as a distinct field. The field of journalology has been influential in pushing for study pre-registration in science, particularly in clinical trials. Clinical trial registration is now expected in most countries. Journalology researchers also work to reform the peer review process.

References

  1. Schimanski, Lesley A.; Alperin, Juan Pablo (2018). "The evaluation of scholarship in academic promotion and tenure processes: Past, present, and future". F1000Research. 7: 1605. doi:10.12688/f1000research.16493.1. ISSN   2046-1402. PMC   6325612 . PMID   30647909.
  2. Hatch, Robert A. (February 1998). "The Scientific Revolution: Correspondence Networks". University of Florida . Retrieved August 21, 2016.
  3. Oldenburg, Henry (1665). "Epistle Dedicatory". Philosophical Transactions of the Royal Society . 1: 0. doi:10.1098/rstl.1665.0001. S2CID   186211404.
  4. Hall, Marie Boas (2002). Henry Oldenburg: shaping the Royal Society. Oxford: Oxford University Press. Bibcode:2002heol.book.....B. ISBN 978-0-19-851053-6.
  5. Spier, Ray (2002). "The history of the peer-review process". Trends in Biotechnology. 20 (8): 357–8. doi:10.1016/S0167-7799(02)01985-6. PMID   12127284.
  6. Dans, PE (1993). "Clinical peer review: burnishing a tarnished image". Annals of Internal Medicine. 118 (7): 566–8. doi:10.7326/0003-4819-118-7-199304010-00014. PMID   8442628. S2CID   45863865. Archived from the original on July 21, 2012.
  7. Milgrom P, Weinstein P, Ratener P, Read WA, Morrison K (1978). "Dental Examinations for Quality Control: Peer Review versus Self-Assessment". American Journal of Public Health. 68 (4): 394–401. doi:10.2105/AJPH.68.4.394. PMC 1653950. PMID 645987.
  8. "AICPA Peer Review Program Manual". American Institute of CPAs.
  9. "Peer Review". UK Legal Services Commission. July 12, 2007. Archived from the original on October 14, 2010.
  10. "Martindale-Hubbell Attorney Reviews and Ratings". Martindale. Retrieved January 27, 2020.
  11. "Peer Review Panels – Purpose and Process" (PDF). USDA Forest Service. February 6, 2006. Retrieved October 4, 2010.
  12. Sims Gerald K. (1989). "Student Peer Review in the Classroom: A Teaching and Grading Tool" (PDF). Journal of Agronomic Education . 18 (2): 105–108. doi:10.2134/jae1989.0105. The review process was double-blind to provide anonymity for both authors and reviewers, but was otherwise handled in a fashion similar to that used by scientific journals
  13. Liu, Jianguo; Pysarchik, Dawn Thorndike; Taylor, William W. (2002). "Peer Review in the Classroom" (PDF). BioScience. 52 (9): 824–829. doi:10.1641/0006-3568(2002)052[0824:PRITC]2.0.CO;2.
  14. Kupferschmidt, Kai (August 17, 2018). "Researcher at the center of an epic fraud remains an enigma to those who exposed him". Science | AAAS. Retrieved August 11, 2019.
  15. Couzin-Frankel J (September 2013). "Biomedical publishing. Secretive and subjective, peer review proves resistant to study". Science. 341 (6152): 1331. doi:10.1126/science.341.6152.1331. PMID   24052283.
  16. Rennie, Drummond (July 7, 2016). "Let's make peer review scientific". Nature News. 535 (7610): 31–33. Bibcode:2016Natur.535...31R. doi:10.1038/535031a. PMID   27383970. S2CID   4408375.
  17. Slavov, Nikolai (November 11, 2015). "Making the most of peer review". eLife. 4: e12708. doi:10.7554/eLife.12708. ISSN   2050-084X. PMC   4641509 . PMID   26559758.
  18. Couzin-Frankel, Jennifer (September 18, 2018). "'Journalologists' use scientific methods to study academic publishing. Is their work improving science?". Science | AAAS. Retrieved July 18, 2019.
  19. Cosgrove, Andrew; Cheifet, Barbara (November 27, 2018). "Transparent peer review trial: the results". Genome Biology. 19 (1): 206. doi:10.1186/s13059-018-1584-0. ISSN   1474-760X. PMC   6260718 . PMID   30482224.
  20. Patterson, Mark; Schekman, Randy (June 26, 2018). "A new twist on peer review". eLife. 7: e36545. doi:10.7554/eLife.36545. ISSN   2050-084X. PMC   6019064 . PMID   29944117.
  21. "Mutual Learning Programme - Employment, Social Affairs & Inclusion - European Commission". ec.europa.eu.
  22. "Social Peer to Peer – Online Casino Reviews". www.peer-review-social-inclusion.eu.
  23. "What is Scientific Peer Review?". ceparev.berkeley.edu. Retrieved March 30, 2017.
  24. "REVIEW BY PEERS" (PDF). A Guide for Professional, Clinical and Administrative Processes.
  25. Deyo-Svendsen, Mark E.; Phillips, Michael R.; Albright, Jill K.; Schilling, Keith A.; Palmer, Karl B. (October/December 2016). "A Systematic Approach to Clinical Peer Review in a Critical Access Hospital". Quality Management in Healthcare. 25 (4): 213–218. doi:10.1097/QMH.0000000000000113. ISSN 1063-8628. PMC 5054974. PMID 27749718.
  26. Medschool.ucsf.edu Archived August 14, 2010, at the Wayback Machine
  27. Ludwick R, Dieckman BC, Herdtner S, Dugan M, Roche M (November–December 1998). "Documenting the scholarship of clinical teaching through peer review". Nurse Educator. 23 (6): 17–20. doi:10.1097/00006223-199811000-00008. PMID   9934106.
  28. Haynes RB, Cotoi C, Holland J, et al. (2006). "Second-order peer review of the medical literature for clinical practitioners". JAMA. 295 (15): 1801–8. doi: 10.1001/jama.295.15.1801 . PMID   16622142.
  29. Snelson, Elizabeth A. (2010). Physician's Guide to Medical Staff Organization Bylaws (PDF). ama-assn.org. p. 131. Archived from the original (PDF) on August 6, 2011.
  30. "Medical Peer Review". Ama-assn.org. Archived from the original on March 6, 2010.
  31. "Peer review: What is it and why do we do it?". www.medicalnewstoday.com. March 29, 2019. Retrieved August 6, 2020.
  32. NASA Systems Engineering Handbook (PDF). NASA. 2007. SP-610S.
  33. Couchman, John R. (November 11, 2013). "Peer Review and Reproducibility. Crisis or Time for Course Correction?". Journal of Histochemistry & Cytochemistry. 62 (1): 9–10. doi:10.1369/0022155413513462. PMC   3873808 . PMID   24217925.
  34. "The peer review drugs don't work". Times Higher Education (THE). May 28, 2015. Retrieved October 23, 2018.
  35. "Peer review 'works against' early career researchers". Times Higher Education (THE). July 16, 2018. Retrieved October 23, 2018.
  36. Ioannidis, John P. A. (2005). "Why Most Published Research Findings Are False". PLoS Medicine. 2 (8): e124.
  37. Csiszar, Alex (2016). "Peer Review: Troubled from the Start". Nature. 532 (7599): 306–308. Bibcode:2016Natur.532..306C. doi:10.1038/532306a. PMID 27111616.
  38. Moxham, Noah; Fyfe, Aileen (2018). "The Royal Society and the Prehistory of Peer Review, 1665–1965" (PDF). The Historical Journal. 61 (4): 863–889. doi:10.1017/S0018246X17000334.
  39. Moore, John (2006). "Does Peer Review Mean the Same to the Public as It Does to Scientists?". Nature. doi:10.1038/nature05009.
  40. Ferguson, Cat; Marcus, Adam; Oransky, Ivan (2014). "Publishing: The Peer-Review Scam". Nature. 515 (7528): 480–482. Bibcode:2014Natur.515..480F. doi: 10.1038/515480a . PMID   25428481.
  41. Budd, J. M.; Sievert, M.; Schultz, T. R. (1998). "Phenomena of Retraction: Reasons for Retraction and Citations to the Publications". JAMA. 280 (3): 296–7. doi:10.1001/jama.280.3.296. PMID 9676689.
  42. Smith, Richard (2006). "Peer Review: A Flawed Process at the Heart of Science and Journals". Journal of the Royal Society of Medicine. 99 (4): 178–82. doi:10.1177/014107680609900414. PMC 1420798. PMID 16574968.
  43. Ross-Hellauer, Tony (2017). "What Is Open Peer Review? A Systematic Review". F1000Research. 6: 588. doi:10.12688/f1000research.11369.2. PMC   5437951 . PMID   28580134.
  44. Tennant, Jonathan P.; Dugan, Jonathan M.; Graziotin, Daniel; Jacques, Damien C.; Waldner, François; Mietchen, Daniel; Elkhatib, Yehia; b. Collister, Lauren; Pikas, Christina K.; Crick, Tom; Masuzzo, Paola; Caravaggi, Anthony; Berg, Devin R.; Niemeyer, Kyle E.; Ross-Hellauer, Tony; Mannheimer, Sara; Rigling, Lillian; Katz, Daniel S.; Greshake Tzovaras, Bastian; Pacheco-Mendoza, Josmel; Fatima, Nazeefa; Poblet, Marta; Isaakidis, Marios; Irawan, Dasapta Erwin; Renaut, Sébastien; Madan, Christopher R.; Matthias, Lisa; Nørgaard Kjær, Jesper; O'Donnell, Daniel Paul; et al. (2017). "A Multi-Disciplinary Perspective on Emergent and Future Innovations in Peer Review". F1000Research. 6: 1151. doi:10.12688/f1000research.12037.3. PMC 5686505. PMID 29188015.
  45. Wong, Victoria S. S.; Avalos, Lauro Nathaniel; Callaham, Michael L. (2019). "Industry Payments to Physician Journal Editors". PLOS ONE. 14 (2): e0211495. Bibcode:2019PLoSO..1411495W. doi:10.1371/journal.pone.0211495. PMC   6366761 . PMID   30730904.
  46. Weiss, Glen J.; Davis, Roger B. (2019). "Discordant Financial Conflicts of Interest Disclosures between Clinical Trial Conference Abstract and Subsequent Publication". PeerJ. 7: e6423. doi:10.7717/peerj.6423. PMC   6375255 . PMID   30775185.
  47. Flaherty, D. K. (2013). "Ghost- and Guest-Authored Pharmaceutical Industry–Sponsored Studies: Abuse of Academic Integrity, the Peer Review System, and Public Trust". The Annals of Pharmacotherapy. 47 (7–8): 1081–3. doi:10.1345/aph.1R691. PMID   23585648. S2CID   22513775.
  48. Vanholsbeeck, Marc; Thacker, Paul; Sattler, Susanne; Ross-Hellauer, Tony; Rivera-López, Bárbara S.; Rice, Curt; Nobes, Andy; Masuzzo, Paola; Martin, Ryan; Kramer, Bianca; Havemann, Johanna; Enkhbayar, Asura; Davila, Jacinto; Crick, Tom; Crane, Harry; Tennant, Jonathan P. (March 11, 2019). "Ten Hot Topics around Scholarly Publishing". Publications. 7 (2): 34. doi:10.3390/publications7020034.
  49. Relman, A. S. (1990). "Peer Review in Scientific Journals--What Good Is It?". Western Journal of Medicine. 153 (5): 520–22. PMC   1002603 . PMID   2260288.
  50. Bravo, Giangiacomo; Grimaldo, Francisco; López-Iñesta, Emilia; Mehmani, Bahar; Squazzoni, Flaminio (2019). "The Effect of Publishing Peer Review Reports on Referee Behavior in Five Scholarly Journals". Nature Communications. 10 (1): 322. Bibcode:2019NatCo..10..322B. doi:10.1038/s41467-018-08250-2. PMC   6338763 . PMID   30659186.
  51. Tennant, Jonathan P. (2018). "The State of the Art in Peer Review". FEMS Microbiology Letters. 365 (19). doi:10.1093/femsle/fny204. PMC 6140953. PMID 30137294.
  52. Squazzoni, Flaminio; Grimaldo, Francisco; Marušić, Ana (2017). "Publishing: Journals Could Share Peer-Review Data". Nature. 546 (7658): 352. Bibcode:2017Natur.546Q.352S. doi:10.1038/546352a. PMID   28617464. S2CID   52858966.
  53. Allen, Heidi; Boxer, Emma; Cury, Alexandra; Gaston, Thomas; Graf, Chris; Hogan, Ben; Loh, Stephanie; Wakley, Hannah; Willis, Michael (2018). "What Does Better Peer Review Look like? Definitions, Essential Areas, and Recommendations for Better Practice". doi:10.17605/OSF.IO/4MFK2.
  54. Fang, Ferric C.; Casadevall, Arturo (2011). "Retracted Science and the Retraction Index". Infection and Immunity. 79 (10): 3855–3859. doi:10.1128/IAI.05661-11. PMC   3187237 . PMID   21825063.
  55. Moylan, Elizabeth C.; Kowalczuk, Maria K. (2016). "Why Articles Are Retracted: A Retrospective Cross-Sectional Study of Retraction Notices at BioMed Central". BMJ Open. 6 (11): e012047. doi:10.1136/bmjopen-2016-012047. PMC   5168538 . PMID   27881524.
  56. Open Science Collaboration (2015). "Estimating the Reproducibility of Psychological Science". Science. 349 (6251): aac4716. doi:10.1126/science.aac4716. hdl: 10722/230596 . PMID   26315443. S2CID   218065162.
  57. Munafò, Marcus R.; Nosek, Brian A.; Bishop, Dorothy V. M.; Button, Katherine S.; Chambers, Christopher D.; Percie Du Sert, Nathalie; Simonsohn, Uri; Wagenmakers, Eric-Jan; Ware, Jennifer J.; Ioannidis, John P. A. (2017). "A Manifesto for Reproducible Science". Nature Human Behaviour. 1. doi:10.1038/s41562-016-0021.
  58. Fanelli, Daniele (2018). "Opinion: Is Science Really Facing a Reproducibility Crisis, and Do We Need It To?". Proceedings of the National Academy of Sciences. 115 (11): 2628–2631. doi:10.1073/pnas.1708272114. PMC   5856498 . PMID   29531051.
  59. Goodman, Steven N. (1994). "Manuscript Quality before and after Peer Review and Editing at Annals of Internal Medicine". Annals of Internal Medicine. 121 (1): 11–21. doi:10.7326/0003-4819-121-1-199407010-00003. PMID   8198342. S2CID   5716602.
  60. Pierson, Charon A. (2018). "Peer review and journal quality". Journal of the American Association of Nurse Practitioners. 30 (1): 1–2. doi:10.1097/JXX.0000000000000018. PMID   29757914.
  61. Caputo, Richard K. (2019). "Peer Review: A Vital Gatekeeping Function and Obligation of Professional Scholarly Practice". Families in Society: The Journal of Contemporary Social Services. 100: 6–16. doi: 10.1177/1044389418808155 .
  62. Siler, Kyle; Lee, Kirby; Bero, Lisa (2015). "Measuring the effectiveness of scientific gatekeeping". Proceedings of the National Academy of Sciences. 112 (2): 360–365. Bibcode:2015PNAS..112..360S. doi:10.1073/pnas.1418218112. PMC   4299220 . PMID   25535380.
  63. Resnik, David B.; Elmore, Susan A. (2016). "Ensuring the Quality, Fairness, and Integrity of Journal Peer Review: A Possible Role of Editors". Science and Engineering Ethics. 22 (1): 169–188. doi:10.1007/s11948-015-9625-5. PMID   25633924. S2CID   3641934.
  64. Bornmann, Lutz (2011). "Scientific Peer Review". Annual Review of Information Science and Technology. 45: 197–245. doi:10.1002/aris.2011.1440450112.
  65. "Cargo Cult Science". Caltech Magazine. 1974. Archived from the original on August 24, 2019.
  66. 1 2 "Untangling Academic Publishing. A History of the Relationship between Commercial Interests, Academic Prestige and the Circulation of Research". 26.
  67. Priem, Jason; Hemminger, Bradley M. (2012). "Decoupling the Scholarly Journal". Frontiers in Computational Neuroscience. 6: 19. doi:10.3389/fncom.2012.00019. PMC   3319915 . PMID   22493574.
  68. Bowman, Nicholas David; Keene, Justin Robert (2018). "A Layered Framework for Considering Open Science Practices". Communication Research Reports. 35 (4): 363–372. doi: 10.1080/08824096.2018.1513273 .
  69. McKiernan, E. C.; Bourne, P. E.; Brown, C. T.; Buck, S.; Kenall, A.; Lin, J.; McDougall, D.; Nosek, B. A.; Ram, K.; Soderberg, C. K.; Spies, J. R.; Thaney, K.; Updegrove, A.; Woo, K. H.; Yarkoni, T. (2016). "Point of View: How Open Science Helps Researchers Succeed". eLife. 5. doi:10.7554/eLife.16800. PMC   4973366 . PMID   27387362.
  70. 1 2 "In Peer Review We (Don't) Trust: How Peer Review's Filtering Poses a Systemic Risk to Science".
  71. Brembs, Björn (2019). "Reliable Novelty: New Should Not Trump True". PLOS Biology. 17 (2): e3000117. doi:10.1371/journal.pbio.3000117. PMC   6372144 . PMID   30753184.
  72. Stern, Bodo M.; o'Shea, Erin K. (2019). "A Proposal for the Future of Scientific Publishing in the Life Sciences". PLOS Biology. 17 (2): e3000116. doi:10.1371/journal.pbio.3000116. PMC   6372143 . PMID   30753179.
  73. "The RESEARCHERS.ONE Mission".

Further reading