Conjunction fallacy

The conjunction fallacy (also known as the Linda problem) is an inference that a conjoint set of two or more specific conclusions is likelier than any single member of that same set, in violation of the laws of probability. It is a type of formal fallacy.

Definition and basic example

I am particularly fond of this example [the Linda problem] because I know that the [conjoint] statement is least probable, yet a little homunculus in my head continues to jump up and down, shouting at me—"but she can't just be a bank teller; read the description."

Stephen J. Gould [1]

The most often-cited example of this fallacy originated with Amos Tversky and Daniel Kahneman. [2] [3] [4]

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

  1. Linda is a bank teller.
  2. Linda is a bank teller and is active in the feminist movement.

The majority of those asked chose option 2. However, the probability of two events occurring together (that is, in conjunction) is always less than or equal to the probability of either one occurring alone; formally, for two events A and B this inequality can be written as Pr(A ∧ B) ≤ Pr(A) and Pr(A ∧ B) ≤ Pr(B).

For example, even choosing a very low probability of Linda's being a bank teller, say Pr(Linda is a bank teller) = 0.05 and a high probability that she would be a feminist, say Pr(Linda is a feminist) = 0.95, then, assuming these two facts are independent of each other, Pr(Linda is a bank teller and Linda is a feminist) = 0.05 × 0.95 or 0.0475, lower than Pr(Linda is a bank teller).
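The same arithmetic can be checked in a few lines of code. The following is a minimal Python sketch using the illustrative numbers above; the variable names are invented here, and independence is assumed only because the example assumes it:

```python
# Minimal sketch of the Linda arithmetic, using the illustrative numbers
# from the example above and assuming the two events are independent.
p_teller = 0.05    # Pr(Linda is a bank teller)
p_feminist = 0.95  # Pr(Linda is a feminist)

p_both = p_teller * p_feminist   # Pr(teller and feminist), under independence
print(p_both)                    # ~0.0475, lower than Pr(teller) = 0.05

# Even without independence, a conjunction is bounded by each conjunct:
# Pr(A and B) <= min(Pr(A), Pr(B)).
assert p_both <= min(p_teller, p_feminist)
```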

Tversky and Kahneman argue that most people get this problem wrong because they use a heuristic (an easily calculated procedure) called representativeness to make this kind of judgment: option 2 seems more "representative" of Linda from the description of her, even though it is clearly mathematically less likely. [4]

In other demonstrations, they argued that adding representative detail made a specific scenario seem more likely, even though each added detail actually made the scenario less probable. In this way the fallacy resembles the misleading vividness and slippery slope fallacies. Kahneman has since argued that the conjunction fallacy is a type of extension neglect. [5]

Joint versus separate evaluation

In some experimental demonstrations, the conjoint option is evaluated separately from its basic option. In other words, one group of participants is asked to rank-order the likelihood that Linda is a bank teller, a high school teacher, and several other options, while another group ranks the option that Linda is a bank teller and active in the feminist movement against the same set of options (without "Linda is a bank teller" as an option). In this type of demonstration, different groups of subjects still rank Linda as a bank teller and active in the feminist movement more highly than Linda as a bank teller. [4]

Separate evaluation experiments preceded the earliest joint evaluation experiments, and Kahneman and Tversky were surprised when the effect was observed even under joint evaluation. [6]

In separate evaluation, the term conjunction effect may be preferred. [4]

Other examples

While the Linda problem is the best-known example, researchers have developed dozens of problems that reliably elicit the conjunction fallacy.

Tversky & Kahneman (1981)

The original report by Tversky & Kahneman [2] (later republished as a book chapter [3] ) described four problems that elicited the conjunction fallacy, including the Linda problem. There was also a similar problem about a man named Bill (a good fit for the stereotype of an accountant — "intelligent, but unimaginative, compulsive, and generally lifeless" — but not a good fit for the stereotype of a jazz player), and two problems where participants were asked to make predictions for events that could occur in 1981.

Policy experts were asked to rate the probability that the Soviet Union would invade Poland and the United States would break off diplomatic relations, all in the following year. They rated it on average as having a 4% probability of occurring. Another group of experts was asked to rate the probability simply that the United States would break off relations with the Soviet Union in the following year. They gave it an average probability of only 1%. Since a break in relations that follows an invasion is only one of the ways relations could break off, the conjunction cannot be more probable than the single event, yet it was rated four times as likely.

In an experiment conducted in 1980, respondents were asked the following:

Suppose Björn Borg reaches the Wimbledon finals in 1981. Please rank order the following outcomes from most to least likely.

  • Borg will win the match
  • Borg will lose the first set
  • Borg will lose the first set but win the match
  • Borg will win the first set but lose the match

On average, participants rated "Borg will lose the first set but win the match" as more likely than "Borg will lose the first set". However, every scenario in which Borg loses the first set but wins the match is also a scenario in which he loses the first set (and one in which he wins the match), so the third outcome can never be more probable than either the first or the second. Ranking the conjunction above one of its components is the conjunction error; the same logic applies to the fourth outcome relative to its own components ("Borg will win the first set" and "Borg will lose the match").
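For illustration, assume (purely hypothetically) Pr(Borg loses the first set) = 0.3 and Pr(Borg wins the match given that he loses the first set) = 0.6. Then Pr(Borg loses the first set and wins the match) = 0.3 × 0.6 = 0.18, necessarily below the 0.3 probability of losing the first set alone: multiplying by a conditional probability of at most 1 can only shrink the number.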

Tversky & Kahneman (1983)

Tversky and Kahneman followed up their original findings with a 1983 paper [4] that looked at dozens of new problems, most of these with multiple variations. The following are a couple of examples.

Consider a regular six-sided die with four green faces and two red faces. The die will be rolled 20 times and the sequence of greens (G) and reds (R) will be recorded. You are asked to select one sequence, from a set of three, and you will win $25 if the sequence you choose appears on successive rolls of the die.

  1. RGRRR
  2. GRGRRR
  3. GRRRRR

65% of participants chose the second sequence, even though option 1 is contained within it and is shorter than the other options. In a version where the $25 bet was only hypothetical, the results did not differ significantly. Tversky and Kahneman argued that sequence 2 appears "representative" of a chance sequence [4] (compare to the clustering illusion).
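Because the containment guarantees the ordering, a quick simulation recovers it. Below is a Monte Carlo sketch in Python; the trial count is arbitrary, and the resulting frequencies are estimates for illustration, not figures from the study:

```python
import random

# Monte Carlo sketch of the die problem: estimate how often each candidate
# string appears somewhere in 20 rolls of a die with four green (G) and
# two red (R) faces. Since any 20-roll record containing GRGRRR also
# contains RGRRR, the shorter sequence can never be the worse bet.
FACES = "GGGGRR"  # 4 green faces, 2 red faces
SEQUENCES = ["RGRRR", "GRGRRR", "GRRRRR"]
TRIALS = 200_000

hits = {s: 0 for s in SEQUENCES}
for _ in range(TRIALS):
    rolls = "".join(random.choice(FACES) for _ in range(20))
    for s in SEQUENCES:
        if s in rolls:
            hits[s] += 1

for s in SEQUENCES:
    print(s, hits[s] / TRIALS)  # RGRRR comes out highest
```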

A health survey was conducted in a representative sample of adult males in British Columbia of all ages and occupations.

Mr. F. was included in the sample. He was selected by chance from the list of participants.

Which of the following statements is more probable? (check one)

  1. Mr. F. has had one or more heart attacks.
  2. Mr. F. has had one or more heart attacks and he is over 55 years old.

The probability of a conjunction is never greater than that of either of its conjuncts. Therefore, the first choice is more probable.

Criticism

Critics such as Gerd Gigerenzer and Ralph Hertwig have criticized the Linda problem on grounds such as its wording and framing. The wording of the problem may violate conversational maxims, in that people assume the question obeys the maxim of relevance. Gigerenzer argues that some of the terms used are polysemous, and that the alternative meanings are more "natural". He argues that one meaning of probable ("what happens frequently") corresponds to the mathematical probability people are supposed to be tested on, but that other meanings ("what is plausible" and "whether there is evidence") do not. [7] [8] Even the term "and" has been argued to have relevant polysemous meanings. [9] Many techniques have been developed to control for this possible misinterpretation, but none of them has dissipated the effect. [10] [11]

Tversky and Kahneman studied many variations in the wording of the Linda problem. [4] If the first option is reworded to obey conversational relevance, i.e., "Linda is a bank teller whether or not she is active in the feminist movement", the effect is decreased, but the majority (57%) of respondents still commit the conjunction error. If the question is posed in a frequency format (see the debiasing section below), the effect is reduced or eliminated. However, some studies have found indistinguishable conjunction fallacy rates with stimuli framed in terms of probabilities versus frequencies. [12]

The wording criticisms may be less applicable to the conjunction effect in separate evaluation. [7] The "Linda problem" has been studied and criticized more than other demonstrations of the effect (some described above). [6] [9] [13]

An incentivized experimental study showed that the conjunction fallacy decreased in those with greater cognitive ability, though it did not disappear. [14] It has also been shown that the conjunction fallacy becomes less prevalent when subjects are allowed to consult with other subjects. [15]

Still, the conjunction fallacy occurs even when people are asked to make bets with real money, [16] and when they solve intuitive physics problems of various designs. [17]

Debiasing

Drawing attention to set relationships, using frequencies instead of probabilities, or prompting diagrammatic thinking can sharply reduce the error in some forms of the conjunction fallacy. [4] [8] [9] [18]

In one experiment the question of the Linda problem was reformulated as follows:

There are 100 persons who fit the description above (that is, Linda's). How many of them are:

  1. bank tellers?
  2. bank tellers and active in the feminist movement?

Whereas previously 85% of participants gave the wrong answer (bank teller and active in the feminist movement), in experiments using this frequency formulation none of the participants gave a wrong answer. [18] Participants were forced to use a mathematical approach and thus recognized the difference more easily.
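The set logic behind the frequency reformulation can be sketched in a few lines of Python. The membership sets below are made up for illustration and do not come from the study; whatever counts one assumes, the feminist bank tellers form an intersection of two sets and can never outnumber the bank tellers:

```python
# Toy sketch of the frequency format: out of 100 hypothetical persons
# fitting Linda's description, the feminist bank tellers are the
# intersection of two sets, so they can never outnumber the bank tellers.
bank_tellers = {3, 17, 42, 58, 71}          # assumed IDs of bank tellers
feminists = {1, 3, 17, 29, 42, 58, 64, 88}  # assumed IDs of feminists

feminist_bank_tellers = bank_tellers & feminists  # {3, 17, 42, 58}
assert len(feminist_bank_tellers) <= len(bank_tellers)  # always holds
print(len(bank_tellers), len(feminist_bank_tellers))    # prints: 5 4
```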

However, in some tasks that were based only on frequencies, not on stories, and that used clear logical formulations, conjunction fallacies still occurred dominantly, with few exceptions, when the observed pattern of frequencies resembled a conjunction. [19]


References

  1. Gould, Stephen J. (1988). "The Streak of Streaks". The New York Review of Books.
  2. Tversky, Amos; Kahneman, Daniel (1981). Judgments of and by representativeness (Report). Stanford University.
  3. Tversky, A.; Kahneman, D. (1982). "Judgments of and by representativeness". In Kahneman, D.; Slovic, P.; Tversky, A. (eds.). Judgment under uncertainty: Heuristics and biases. Cambridge, UK: Cambridge University Press. ISBN 0-521-28414-7.
  4. Tversky, Amos; Kahneman, Daniel (October 1983). "Extension versus intuitive reasoning: The conjunction fallacy in probability judgment". Psychological Review. 90 (4): 293–315. doi:10.1037/0033-295X.90.4.293.
  5. Kahneman, Daniel (2000). "Evaluation by moments, past and future". In Kahneman, Daniel; Tversky, Amos (eds.). Choices, Values and Frames. Cambridge University Press. ISBN 0-521-62749-4.
  6. Kahneman, Daniel (2011). "Linda: Less is More". Thinking, Fast and Slow. New York: Farrar, Straus and Giroux. pp. 156–165.
  7. Gigerenzer, Gerd (1996). "On narrow norms and vague heuristics: A reply to Kahneman and Tversky". Psychological Review. 103 (3): 592–596. doi:10.1037/0033-295X.103.3.592.
  8. Hertwig, Ralph; Gigerenzer, Gerd (1999). "The 'Conjunction Fallacy' Revisited: How Intelligent Inferences Look Like Reasoning Errors". Journal of Behavioral Decision Making. 12 (4): 275–305. doi:10.1002/(sici)1099-0771(199912)12:4<275::aid-bdm323>3.3.co;2-d.
  9. Mellers, B.; Hertwig, R.; Kahneman, D. (2001). "Do frequency representations eliminate conjunction effects? An exercise in adversarial collaboration". Psychological Science. 12 (4): 269–275. doi:10.1111/1467-9280.00350. PMID 11476091.
  10. Moro, Rodrigo (2009). "On the nature of the conjunction fallacy". Synthese. 171 (1): 1–24. doi:10.1007/s11229-008-9377-8.
  11. Tentori, Katya; Crupi, Vincenzo (2012). "On the conjunction fallacy and the meaning of and, yet again: A reply to Hertwig, Benz, and Krauss". Cognition. 122 (2): 123–134. doi:10.1016/j.cognition.2011.09.002. PMID 22079517.
  12. See, for example: Tentori, Katya; Bonini, Nicolao; Osherson, Daniel (2004). "The conjunction fallacy: a misunderstanding about conjunction?". Cognitive Science. 28 (3): 467–477. doi:10.1207/s15516709cog2803_8. Or: Wedell, Douglas H.; Moro, Rodrigo (2008). "Testing boundary conditions for the conjunction fallacy: Effects of response mode, conceptual focus, and problem type". Cognition. 107 (1): 105–136. doi:10.1016/j.cognition.2007.08.003. PMID 17927971.
  13. Kahneman, Daniel; Tversky, Amos (1996). "On the reality of cognitive illusions". Psychological Review. 103 (3): 582–591. doi:10.1037/0033-295X.103.3.582. PMID 8759048.
  14. Oechssler, Jörg; Roider, Andreas; Schmitz, Patrick W. (2009). "Cognitive abilities and behavioral biases". Journal of Economic Behavior & Organization. 72 (1): 147–152. doi:10.1016/j.jebo.2009.04.018.
  15. Charness, Gary; Karni, Edi; Levin, Dan (2010). "On the conjunction fallacy in probability judgment: New experimental evidence regarding Linda". Games and Economic Behavior. 68 (2): 551–556. doi:10.1016/j.geb.2009.09.003.
  16. Sides, Ashley; Osherson, Daniel; Bonini, Nicolao; Viale, Riccardo (2002). "On the reality of the conjunction fallacy". Memory & Cognition. 30 (2): 191–198. doi:10.3758/BF03195280. PMID 12035881.
  17. Ludwin-Peery, Ethan; Bramley, Neil; Davis, Ernest; Gureckis, Todd (2020). "Broken Physics: A Conjunction-Fallacy Effect in Intuitive Physical Reasoning". Psychological Science. 31 (12): 1602–1611. doi:10.1177/0956797620957610. PMID 33137265.
  18. Gigerenzer, G. (1991). "How to make cognitive illusions disappear: Beyond 'heuristics and biases'". European Review of Social Psychology. 2 (1): 83–115. doi:10.1080/14792779143000033.
  19. von Sydow, M. (2011). "The Bayesian Logic of Frequency-Based Conjunction Fallacies". Journal of Mathematical Psychology. 55 (2): 119–139. doi:10.1016/j.jmp.2010.12.001.