Reproducibility Project

The Reproducibility Project is a series of crowdsourced collaborations that attempt to reproduce published scientific studies and have found high rates of results that could not be replicated. It has produced two major initiatives, focusing on the fields of psychology [1] and cancer biology. [2] The project has drawn attention to the replication crisis and has contributed to shifts in scientific culture and publishing practices intended to address it. [3]

The project was led by the Center for Open Science and its co-founder, Brian Nosek, who launched it in November 2011. [4]

Results

Brian Nosek of the University of Virginia and colleagues set out to replicate 100 studies, all published in 2008 in one of three journals: Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory, and Cognition, to see if they could obtain the same results as the initial findings. [5] In their initial publications, 97 of these 100 studies reported statistically significant results, but only 36% of the replications did. Even in cases where these effects were replicated, they were often smaller than those in the original papers. The authors emphasized that the findings reflect a problem affecting all of science, not just psychology, and that there is room to improve reproducibility in psychology.

In 2021, the project reported that of 193 experiments from 53 top cancer papers published between 2010 and 2012, only 50 experiments from 23 papers could be replicated. Moreover, the effect sizes of the experiments that did replicate were on average 85% smaller than the original findings. None of the papers fully described its experimental protocols, and 70% of the experiments required asking the original authors for key reagents. [6] [7]
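As a rough illustration of these figures, the replication fractions and the implied effect-size shrinkage can be computed directly. The snippet below is only an arithmetic sketch; the variable names are illustrative, and the counts are taken from the text above.

```python
# Counts reported for the cancer biology initiative (2021), as cited above.
experiments_total = 193
experiments_replicated = 50
papers_total = 53
papers_replicated = 23

# Fraction of experiments and of papers that replicated.
experiment_rate = experiments_replicated / experiments_total
paper_rate = papers_replicated / papers_total

# "85% smaller on average" means a replicated effect was, on average,
# about 15% of the size of the original effect.
relative_effect_size = 1 - 0.85

print(f"Experiments replicated: {experiment_rate:.0%}")          # 26%
print(f"Papers with replicated experiments: {paper_rate:.0%}")   # 43%
print(f"Replicated effect size vs original: {relative_effect_size:.0%}")  # 15%
```

In other words, roughly one in four experiments, and fewer than half of the papers, yielded any replicated result at all.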

Impact

The project, along with broader responses to the replication crisis, has helped spur changes in scientific culture and publishing practices. [3] [4] The results of the Reproducibility Project may also affect public trust in psychology. [8] [9] Laypeople who learned about the low replication rate found in the Reproducibility Project subsequently reported lower trust in psychology than people who were told that most of the studies had replicated. [10] [8]

Related Research Articles

Psychology is the study of mind and behavior in humans and non-humans. Psychology includes the study of conscious and unconscious phenomena, including feelings and thoughts. It is an academic discipline of immense scope, crossing the boundaries between the natural and social sciences. Psychologists seek an understanding of the emergent properties of brains, linking the discipline to neuroscience. As social scientists, psychologists aim to understand the behavior of individuals and groups.

Social psychology is the scientific study of how thoughts, feelings, and behaviors are influenced by the real or imagined presence of other people or by social norms. Social psychologists typically explain human behavior as a result of the relationship between mental states and social situations, studying the social conditions under which thoughts, feelings, and behaviors occur, and how these variables influence social interactions.

Reproducibility, closely related to replicability and repeatability, is a major principle underpinning the scientific method. For the findings of a study to be reproducible means that results obtained by an experiment or an observational study or in a statistical analysis of a data set should be achieved again with a high degree of reliability when the study is replicated. There are different kinds of replication but typically replication studies involve different researchers using the same methodology. Only after one or several such successful replications should a result be recognized as scientific knowledge.

Stanford prison experiment

The Stanford prison experiment (SPE) was a psychological experiment conducted in August 1971. It was a two-week simulation of a prison environment that examined the effects of situational variables on participants' reactions and behaviors. Stanford University psychology professor Philip Zimbardo led the research team who administered the study.

Daryl Bem

Daryl J. Bem is a social psychologist and professor emeritus at Cornell University. He is the originator of the self-perception theory of attitude formation and change. He has also researched psi phenomena, group decision making, handwriting analysis, sexual orientation, and personality theory and assessment.

In published academic research, publication bias occurs when the outcome of an experiment or research study biases the decision to publish or otherwise distribute it. Publishing only results that show a significant finding disturbs the balance of findings in favor of positive results. The study of publication bias is an important topic in metascience.

Robert Bolesław Zajonc was a Polish-born American social psychologist known for his decades of work on a wide range of social and cognitive processes. One of his most important contributions to social psychology is the mere-exposure effect. Zajonc also conducted research on social facilitation and on theories of emotion, such as the affective neuroscience hypothesis. He also made contributions to comparative psychology, arguing that studying the social behavior of humans alongside that of other species is essential to understanding the general laws of social behavior. An example of this viewpoint is his work with cockroaches demonstrating social facilitation, evidence that the phenomenon is displayed regardless of species. A Review of General Psychology survey, published in 2002, ranked Zajonc as the 35th most cited psychologist of the 20th century. He died of pancreatic cancer on December 3, 2008, in Palo Alto, California.

Evolution and Human Behavior

Evolution and Human Behavior is a bimonthly peer-reviewed academic journal covering research in which evolutionary perspectives are brought to bear on the study of human behavior, ranging from evolutionary psychology to evolutionary anthropology and cultural evolution. It is primarily a scientific journal, but articles from scholars in the humanities are also published. Papers reporting on theoretical and empirical work on other species may be included if their relevance to the human animal is apparent. The journal was established in 1980, and beginning with Volume 18 in 1997 has been published by Elsevier on behalf of the Human Behavior and Evolution Society. The editor-in-chief is Debra Lieberman.

Journal of Personality and Social Psychology

The Journal of Personality and Social Psychology is a monthly peer-reviewed scientific journal published by the American Psychological Association that was established in 1965. It covers the fields of social and personality psychology. The editors-in-chief are Shinobu Kitayama, Colin Wayne Leach, and Richard E. Lucas.

Why Most Published Research Findings Are False

"Why Most Published Research Findings Are False" is a 2005 essay written by John Ioannidis, a professor at the Stanford School of Medicine, and published in PLOS Medicine. It is considered foundational to the field of metascience.

The decline effect may occur when scientific claims receive decreasing support over time. The term was first used by parapsychologist Joseph Banks Rhine in the 1930s to describe the apparent fading of extrasensory perception (ESP) in his psychic experiments over the course of a study. In a more general sense, Cronbach, in his review article "Beyond the two disciplines of scientific psychology", referred to the phenomenon as "generalizations decay". The term was used again in a 2010 article by Jonah Lehrer published in The New Yorker.

Amy Cuddy

Amy Joy Casselberry Cuddy is an American social psychologist, author and speaker. She is a proponent of "power posing", a self-improvement technique whose scientific validity has been questioned. She has served as a faculty member at Rutgers University, Kellogg School of Management and Harvard Business School. Cuddy's most cited academic work involves using the stereotype content model that she helped develop to better understand the way people think about stereotyped people and groups. Though Cuddy left her tenure-track position at Harvard Business School in the spring of 2017, she continues to contribute to its executive education programs.

Center for Open Science

The Center for Open Science is a non-profit technology organization based in Charlottesville, Virginia with a mission to "increase the openness, integrity, and reproducibility of scientific research." Brian Nosek and Jeffrey Spies founded the organization in January 2013, funded mainly by the Laura and John Arnold Foundation and others.

Invalid science consists of scientific claims based on experiments that cannot be reproduced or that are contradicted by experiments that can be reproduced. Recent analyses indicate that the proportion of retracted claims in the scientific literature is steadily increasing. The number of retractions has grown tenfold over the past decade, but they still make up approximately 0.2% of the 1.4 million papers published annually in scholarly journals.

Replication crisis

The replication crisis is an ongoing methodological crisis in which the results of many scientific studies are difficult or impossible to reproduce. Because the reproducibility of empirical results is an essential part of the scientific method, such failures undermine the credibility of theories building on them and potentially call into question substantial parts of scientific knowledge.

Brian Arthur Nosek is an American social-cognitive psychologist, professor of psychology at the University of Virginia, and the co-founder and director of the Center for Open Science. He also co-founded the Society for the Improvement of Psychological Science and Project Implicit. He has been on the faculty of the University of Virginia since 2002.

Metascience is the use of scientific methodology to study science itself. Metascience seeks to increase the quality of scientific research while reducing inefficiency. It is also known as "research on research" and "the science of science", as it uses research methods to study how research is done and find where improvements can be made. Metascience concerns itself with all fields of research and has been described as "a bird's eye view of science". In the words of John Ioannidis, "Science is the best thing that has happened to human beings ... but we can do it better."

Preregistration

Preregistration is the practice of registering the hypotheses, methods, and/or analyses of a scientific study before it is conducted. Clinical trial registration is similar, although it may not require the registration of a study's analysis protocol. Registered reports go further, including peer review and in-principle acceptance of a study protocol prior to data collection.

Fiona Fidler

Fiona Fidler is an Australian professor and lecturer with interests in meta-research, reproducibility, open science, reasoning and decision making and statistical practice. She has held research positions at several universities and across disciplines in conjunction with Australian Research Council (ARC) Centres of Excellence.

Crowdsourced science refers to the collaborative contributions of a large group of people to the different steps of the research process in science. In psychology, the nature and scope of such collaborations can vary in their application and in the benefits they offer.

References

  1. Yong, Ed (27 August 2015). "How Reliable Are Psychology Studies?". The Atlantic. Retrieved 7 November 2023.
  2. Nelson, Bryn; Wiles, Austin (15 September 2022). "A troubling lack of replicability for cancer biology studies: After an ambitious project struggled to replicate high-profile studies, researchers are calling for a new focus on protocol and data sharing as essential steps for building confidence in the field". Cancer Cytopathology. 130 (9): 656–657. doi:10.1002/cncy.22639. ISSN 1934-662X.
  3. Loken, Eric (8 April 2019). "The replication crisis is good for science". The Conversation. Retrieved 7 November 2023.
  4. Apple, Sam (22 January 2017). "The Young Billionaire Behind the War on Bad Science". Wired.
  5. Weir, Kristen. "A reproducibility crisis?". American Psychological Association. Retrieved 24 November 2016.
  6. "Dozens of major cancer studies can't be replicated". Science News. 7 December 2021. Retrieved 19 January 2022.
  7. "Reproducibility Project: Cancer Biology". www.cos.io. Center for Open Science. Retrieved 19 January 2022.
  8. Wingen, Tobias; Berkessel, Jana B.; Englich, Birte (24 October 2019). "No Replication, No Trust? How Low Replicability Influences Trust in Psychology". Social Psychological and Personality Science. 11 (4): 454–463. doi:10.1177/1948550619877412. ISSN 1948-5506. S2CID 210383335.
  9. Anvari, Farid; Lakens, Daniël (19 November 2019). "The replicability crisis and public trust in psychological science". Comprehensive Results in Social Psychology. 3 (3): 266–286. doi:10.1080/23743603.2019.1684822. ISSN 2374-3603.
  10. "The Replication Crisis Lowers The Public's Trust In Psychology — But Can That Trust Be Built Back Up?". Research Digest. 31 October 2019. Retrieved 30 November 2019.