In science, adversarial collaboration is a mode of collaboration in which scientists holding opposing views work together to jointly advance knowledge of the area under dispute. It can take the form of an experiment conducted by two groups of experimenters with competing hypotheses, with the aim of constructing and implementing an experimental design that satisfies both groups that it contains no obvious biases or weaknesses.[1] Adversarial collaboration can involve a neutral moderator[2] and lead to a co-designed experiment and joint publication of the findings in order to resolve differences.[3] With its emphasis on transparency throughout the research process, adversarial collaboration has been described as sitting within the open science framework.[4]
One of the earliest modern examples of adversarial collaboration was a 1988 collaboration between Miriam Erez and Gary Latham, with Edwin Locke acting as a neutral third party. The collaboration arose from a disagreement between Erez and Latham over an aspect of goal-setting research: the effect of participation in goal setting on goal commitment and performance. Latham and Erez designed four experiments that explained the discrepancies between their individual findings, but they did not coin the term adversarial collaboration.[2] Independently of Erez, Locke, and Latham, whose work he was unaware of,[5] Daniel Kahneman developed a similar protocol for adversarial collaboration around ten years later and may have been the first to use the term.[6] More recently, Clark and Tetlock have proposed adversarial collaboration as a vehicle for improving science's capacity to self-correct by exploring rival hypotheses and thereby exposing false claims.[7] Their work led the University of Pennsylvania School of Arts & Sciences to create the Adversarial Collaboration Project,[8] which seeks to encourage the use of adversarial collaboration as a research approach for addressing a variety of research questions.[9]
Adversarial collaboration has been recommended by Daniel Kahneman[10] and others as a way of reducing the distorting impact of cognitive-motivational biases on human reasoning[11] and resolving contentious issues in fringe science.[12] It has also been recommended as a potential solution for improving academic commentaries.[13]
Philip Tetlock and Gregory Mitchell have discussed adversarial collaboration in several articles. They argue:
Adversarial collaboration is most feasible when least needed: when the clashing camps have advanced testable theories, subscribe to common canons for testing those theories, and disagreements are robust but respectful. And adversarial collaboration is least feasible when most needed: when the scientific community lacks clear criteria for falsifying points of view, disagrees on key methodological issues, relies on second- or third-best substitute methods for testing causality, and is fractured into opposing camps that engage in ad hominem posturing and that have intimate ties to political actors who see any concession as weakness.[14]
Controversy is a state of prolonged public dispute or debate, usually concerning a matter of conflicting opinion or point of view. The word derives from the Latin controversia, a composite of controversus – "turned in an opposite direction".
A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own "subjective reality" from their perception of the input. An individual's construction of reality, not the objective input, may dictate their behavior in the world. Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, and irrationality.
Confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one's prior beliefs or values. People display this bias when they select information that supports their views while ignoring contrary information, or when they interpret ambiguous evidence as supporting their existing attitudes. The effect is strongest for desired outcomes, for emotionally charged issues, and for deeply entrenched beliefs. Confirmation bias cannot be eliminated entirely, but it can be managed, for example, through education and training in critical thinking skills.
A heuristic, or heuristic technique, is any approach to problem solving or self-discovery that employs a practical method that is not guaranteed to be optimal, perfect, or rational, but is nevertheless sufficient for reaching an immediate, short-term goal or approximation. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision.
Daniel Kahneman is an Israeli-American author, psychologist, and economist notable for his work on hedonic psychology and the psychology of judgment and decision-making. He is also known for his work in behavioral economics, for which he was awarded the 2002 Nobel Memorial Prize in Economic Sciences. His empirical findings challenge the assumption of human rationality prevailing in modern economic theory.
Amos Nathan Tversky was an Israeli cognitive and mathematical psychologist and a key figure in the discovery of systematic human cognitive bias and handling of risk.
The peak–end rule is a psychological heuristic in which people judge an experience largely based on how they felt at its peak (its most intense point) and at its end, rather than based on the total sum or average of every moment of the experience. The effect occurs regardless of whether the experience is pleasant or unpleasant. According to the heuristic, information other than that of the peak and the end of the experience is not lost, but it is not used; this includes net pleasantness or unpleasantness and how long the experience lasted. The peak–end rule is thus a specific form of the more general extension neglect and duration neglect.
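One simple operationalization, offered here as a toy sketch rather than a canonical formula from the literature, averages the most intense moment and the final moment of an experience. The Python example below (with invented intensity ratings) contrasts that retrospective summary with the moment-by-moment average:

```python
def peak_end_summary(affect):
    """Peak-end summary: mean of the most intense moment and the last moment.

    `affect` is a list of signed intensity ratings over time
    (negative = unpleasant). A toy operationalization for illustration.
    """
    peak = max(affect, key=abs)   # most intense moment, pleasant or not
    end = affect[-1]              # final moment
    return (peak + end) / 2

def running_average(affect):
    """Moment-by-moment average, which the peak-end rule predicts
    people do NOT rely on when judging an experience in retrospect."""
    return sum(affect) / len(affect)

# A longer episode that ends on a milder note can be remembered more
# favorably than a shorter one, despite more total discomfort
# (duration neglect).
short = [-2, -7, -8]             # ends at its worst moment
long_ = [-2, -7, -8, -4, -1]     # same peak, milder end, longer duration

print(peak_end_summary(short))   # -8.0
print(peak_end_summary(long_))   # -4.5: remembered as less unpleasant
print(running_average(short), running_average(long_))
```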
The conjunction fallacy is an inference that a conjoint set of two or more specific conclusions is likelier than any single member of that same set, in violation of the laws of probability. It is a type of formal fallacy.
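Formally, the laws of probability require that a conjunction be no more probable than either of its conjuncts, since the joint event is a subset of each; the classic demonstration is Tversky and Kahneman's "Linda problem", in which many respondents rate "Linda is a bank teller and is active in the feminist movement" as more probable than "Linda is a bank teller" alone. In symbols:

```latex
% Conjunction rule: a joint event cannot exceed either component in probability,
% because A \cap B \subseteq A and A \cap B \subseteq B (monotonicity of P).
P(A \cap B) \le \min\bigl(P(A),\, P(B)\bigr)
```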
The anchoring effect is a psychological phenomenon in which an individual's judgments or decisions are influenced by a reference point or "anchor", which can be completely irrelevant. Both numeric and non-numeric anchoring have been reported in research. In numeric anchoring, once the value of the anchor is set, subsequent arguments, estimates, and so on made by an individual may differ from what they would have been without the anchor. For example, an individual may be more likely to purchase a car if it is placed alongside a more expensive model. Prices discussed in negotiations that are lower than the anchor may seem reasonable, perhaps even cheap, to the buyer, even if those prices are still higher than the car's actual market value. In another example, when estimating the orbit of Mars, one might start with the Earth's orbit and adjust upward until reaching a value that seems reasonable.
The planning fallacy is a phenomenon in which predictions about how much time will be needed to complete a future task display an optimism bias and underestimate the time needed. This phenomenon sometimes occurs regardless of the individual's knowledge that past tasks of a similar nature have taken longer to complete than generally planned. The bias affects predictions only about one's own tasks. On the other hand, when outside observers predict task completion times, they tend to exhibit a pessimistic bias, overestimating the time needed. The planning fallacy involves estimates of task completion times more optimistic than those encountered in similar projects in the past.
Goal setting involves the development of an action plan designed to motivate and guide a person or group toward a goal. Goals are more deliberate than desires and momentary intentions. Setting a goal therefore means that a person has committed thought, emotion, and behavior toward attaining it. In doing so, the goal setter has established a desired future state that differs from their current state, creating a mismatch that in turn spurs future action. Goal setting can be guided by goal-setting criteria such as the SMART criteria, and it is a major component of the personal-development and management literature. Studies by Edwin A. Locke and his colleagues, most notably Gary Latham, have shown that specific and ambitious goals lead to greater performance improvement than easy or general goals. Goals should be specific, time-constrained, and difficult. Vague goals dissipate limited attentional resources; unrealistically short time limits make a goal more difficult than intended, while disproportionately long time limits are not motivating. Difficult goals should ideally be set at the 90th percentile of performance, assuming that motivation, and not ability, is what limits attainment of that level of performance. As long as the person accepts the goal, has the ability to attain it, and does not have conflicting goals, there is a positive linear relationship between goal difficulty and task performance.
The overconfidence effect is a well-established bias in which a person's subjective confidence in their judgments is reliably greater than the objective accuracy of those judgments, especially when confidence is relatively high. Overconfidence is one example of a miscalibration of subjective probabilities. Throughout the research literature, overconfidence has been defined in three distinct ways: (1) overestimation of one's actual performance; (2) overplacement of one's performance relative to others; and (3) overprecision in expressing unwarranted certainty in the accuracy of one's beliefs.
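As a rough illustration of the third sense, overprecision can be gauged by comparing average stated confidence with the realized hit rate over a set of judgments. The Python sketch below uses invented numbers purely for illustration:

```python
# Toy calibration check for overprecision: compare average stated
# confidence with the actual hit rate. Data are invented for illustration.
confidences = [0.9, 0.8, 0.95, 0.85, 0.9]      # self-reported P(correct)
correct     = [True, False, True, False, True] # whether each answer was right

mean_confidence = sum(confidences) / len(confidences)
hit_rate = sum(correct) / len(correct)

# A positive gap means subjective confidence exceeds objective accuracy,
# i.e. the judge is overconfident in the "overprecision" sense.
print(f"mean confidence: {mean_confidence:.2f}")              # 0.88
print(f"hit rate:        {hit_rate:.2f}")                     # 0.60
print(f"overconfidence:  {mean_confidence - hit_rate:+.2f}")  # +0.28
```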
A goal or objective is an idea of the future or desired result that a person or a group of people envision, plan, and commit to achieve. People endeavor to reach goals within a finite time by setting deadlines.
Edwin A. Locke is an American psychologist and a pioneer in goal-setting theory. He is a retired Dean's Professor of Motivation and Leadership at the Robert H. Smith School of Business at the University of Maryland, College Park. He was also affiliated with the Department of Psychology. As stated by the Association for Psychological Science, "Locke is the most published organizational psychologist in the history of the field. His pioneering research has advanced and enriched our understanding of work motivation and job satisfaction. The theory that is synonymous with his name—goal-setting theory—is perhaps the most widely-respected theory in industrial-organizational psychology. His 1976 chapter on job satisfaction continues to be one of the most highly-cited pieces of work in the field."
Philip E. Tetlock is a Canadian-American political science writer and is currently the Annenberg University Professor at the University of Pennsylvania, where he is cross-appointed at the Wharton School and the School of Arts and Sciences. He was elected a member of the American Philosophical Society in 2019.
In psychology, heuristics are the mental shortcuts by which humans arrive at decisions. They are simple strategies that humans, animals, organizations, and even machines use to quickly form judgments, make decisions, and find solutions to complex problems. Often this involves focusing on the most relevant aspects of a problem or situation to formulate a solution. While heuristic processes are used to find the answers and solutions that are most likely to work or be correct, they are not always right or the most accurate. Judgments and decisions based on heuristics are simply good enough to satisfy a pressing need in situations of uncertainty, where information is incomplete. In that sense they can differ from answers given by logic and probability.
Thinking, Fast and Slow is a 2011 popular science book by psychologist Daniel Kahneman. The book's main thesis is a differentiation between two modes of thought: "System 1" is fast, instinctive and emotional; "System 2" is slower, more deliberative, and more logical.
The Good Judgment Project (GJP) is an organization dedicated to "harnessing the wisdom of the crowd to forecast world events". It was co-created by Philip E. Tetlock and decision scientist Barbara Mellers, both professors at the University of Pennsylvania, and by Don Moore, a professor at the University of California, Berkeley.
Debiasing is the reduction of bias, particularly with respect to judgment and decision making. Biased judgment and decision making is that which systematically deviates from the prescriptions of objective standards such as facts, logic, and rational behavior or prescriptive norms. Biased judgment and decision making exists in consequential domains such as medicine, law, policy, and business, as well as in everyday life. Investors, for example, tend to hold onto falling stocks too long and to sell rising stocks too quickly. Employers exhibit considerable discrimination in hiring and employment practices, and some parents continue to believe that vaccinations cause autism even though the claimed link is known to rest on falsified evidence. At an individual level, people who exhibit less decision bias have more intact social environments, reduced risk of alcohol and drug use, lower childhood delinquency rates, and superior planning and problem-solving abilities.