Representativeness heuristic


The representativeness heuristic is used when making judgments about the probability of an event by assessing how closely it resembles, in character and essence, a known prototypical event. [1] It is one of a group of heuristics (simple rules governing judgment or decision-making) proposed by psychologists Amos Tversky and Daniel Kahneman in the early 1970s, who defined representativeness as "the degree to which [an event] (i) is similar in essential characteristics to its parent population, and (ii) reflects the salient features of the process by which it is generated". [1] The representativeness heuristic works by comparing an event to a prototype or stereotype that we already have in mind. For example, if we see a person who is dressed in eccentric clothes and reading a poetry book, we might be more likely to think that they are a poet than an accountant. This is because the person's appearance and behavior are more representative of the stereotype of a poet than of an accountant.


The representativeness heuristic can be a useful shortcut in some cases, but it can also lead to errors in judgment. For example, if we only see a small sample of people from a particular group, we might overestimate the degree to which they are representative of the entire group. Heuristics are described as "judgmental shortcuts that generally get us where we need to go – and quickly – but at the cost of occasionally sending us off course." [2] Heuristics are useful because they reduce effort and simplify decision-making. [3]

When people rely on representativeness to make judgments, they are likely to judge wrongly, because the fact that something is more representative does not actually make it more likely. [4] The representativeness heuristic is simply described as assessing the similarity of objects and organizing them around a category prototype (e.g., like goes with like, and causes and effects should resemble each other). [2] This heuristic is used because it is an easy computation. [4] The problem is that people overestimate its ability to accurately predict the likelihood of an event. [5] Thus, it can result in neglect of relevant base rates and other cognitive biases. [6] [7]

Determinants of representativeness

The representativeness heuristic is more likely to be used when the judgement or decision to be made has certain characteristics, described below.

Similarity

Snap judgement of whether a novel object fits an existing category

When judging the representativeness of a new stimulus/event, people usually pay attention to the degree of similarity between the stimulus/event and a standard/process. [1] It is also important that those features be salient. [1] Nilsson, Juslin, and Olsson (2008) found this judgment to be consistent with the exemplar account of memory (concrete examples of a category are stored in memory), so that new instances were classified as representative if they were highly similar to a category as well as if they were frequently encountered. [8] Several examples of similarity have been described in the representativeness heuristic literature, much of it focused on medical beliefs. [2] People often believe that medical symptoms should resemble their causes or treatments. For example, people long believed that ulcers were caused by stress, due to the representativeness heuristic, when in fact bacteria cause ulcers. [2] In a similar line of thinking, in some alternative medicine beliefs patients have been encouraged to eat organ meat that corresponds to their medical disorder. Use of the representativeness heuristic can be seen in even simpler beliefs, such as the belief that eating fatty foods makes one fat. [2] Even physicians may be swayed by the representativeness heuristic when judging similarity, in diagnoses, for example. [9] Garb (1996) found that clinicians use the representativeness heuristic in making diagnoses by judging how similar patients are to the stereotypical or prototypical patient with that disorder. [9]

Randomness

Irregularity and local representativeness affect judgments of randomness. Things that do not appear to have any logical sequence are regarded as representative of randomness and thus more likely to occur. For example, THTHTH as a series of coin tosses would not be considered representative of randomly generated coin tosses, as it appears too well ordered, even though any specific sequence of six tosses is exactly as probable as any other. [1]

Local representativeness is an assumption wherein people rely on the law of small numbers, whereby small samples are perceived to represent their population to the same extent as large samples (Tversky & Kahneman 1971). A small sample which appears randomly distributed would reinforce the belief, under the assumption of local representativeness, that the population is randomly distributed. Conversely, a small sample with a skewed distribution would weaken this belief. If a coin toss is repeated several times and the majority of the results consists of "heads", the assumption of local representativeness will cause the observer to believe the coin is biased toward "heads". [1]

Tversky and Kahneman's classic studies

Tom W.

In a study done in 1973, [10] Kahneman and Tversky gave participants a brief personality sketch of "Tom W.", a fictitious graduate student, and divided them into three groups:

  • a base-rate group, who estimated the percentage of graduate students enrolled in each of several areas of specialization;
  • a similarity group, who judged how similar Tom W. was to the typical graduate student in each area;
  • a prediction group, who judged the likelihood that Tom W. was a graduate student in each area.

The likelihood judgments were much closer to the similarity judgments than to the estimated base rates. The findings supported the authors' prediction that people make predictions based on how representative (similar) something is, rather than based on relative base rate information. For example, more than 95% of the participants said that Tom would be more likely to study computer science than education or humanities, even though the base rate estimates for education and humanities were much higher than for computer science. [10]

The taxicab problem

In another study done by Tversky and Kahneman, subjects were given the following problem: [4]

A cab was involved in a hit and run accident at night. Two cab companies, the Green and the Blue, operate in the city. 85% of the cabs in the city are Green and 15% are Blue. [4]

A witness identified the cab as Blue. The court tested the reliability of the witness under the same circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colours 80% of the time and failed 20% of the time. [4]

What is the probability that the cab involved in the accident was Blue rather than Green knowing that this witness identified it as Blue? [4]

Most subjects gave probabilities over 50%, and some gave answers over 80%. The correct answer, found using Bayes' theorem, is lower than these estimates: there is only about a 41% chance that the cab was Blue. [4]


This result can be obtained from Bayes' theorem, which states:

P(B | I) = P(I | B) · P(B) / [ P(I | B) · P(B) + P(I | G) · P(G) ]

where:

P(B) - the prior probability that the cab was Blue (15%),

P(G) - the prior probability that the cab was Green (85%),

P(I | B) - the probability that the witness identifies the cab as Blue given that it was Blue (80%),

P(I | G) - the probability that the witness identifies the cab as Blue given that it was Green (20%),

P(B | I) - the probability that the cab was Blue given that the witness identified it as Blue.

Substituting the values from the problem gives P(B | I) = (0.80 × 0.15) / (0.80 × 0.15 + 0.20 × 0.85) = 0.12 / 0.29 ≈ 0.41.
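The same calculation can be written as a minimal Python sketch (the probabilities below are taken directly from the problem statement; this is an illustration, not part of the original study):

```python
# Taxicab problem: posterior probability that the cab was Blue, via Bayes' theorem.
p_blue = 0.15                 # base rate of Blue cabs
p_green = 0.85                # base rate of Green cabs
p_id_blue_given_blue = 0.80   # witness correctly identifies a Blue cab as Blue
p_id_blue_given_green = 0.20  # witness misidentifies a Green cab as Blue

# P(B | I) = P(I | B) P(B) / [ P(I | B) P(B) + P(I | G) P(G) ]
numerator = p_id_blue_given_blue * p_blue
denominator = numerator + p_id_blue_given_green * p_green
p_blue_given_id = numerator / denominator

print(f"P(cab was Blue | identified as Blue) = {p_blue_given_id:.2f}")  # ~0.41
```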


Representativeness is also cited as contributing to related effects such as the gambler's fallacy, the regression fallacy and the conjunction fallacy. [4]

Biases attributed to the representativeness heuristic

Base rate neglect and base rate fallacy

The use of the representativeness heuristic will likely lead to violations of Bayes' theorem, which states: [11]

P(H | D) = P(D | H) · P(H) / P(D)

However, judgments by representativeness only look at the resemblance between the hypothesis H and the data D; thus the inverse probabilities are equated: [11]

P(H | D) = P(D | H)
As can be seen, the base rate P(H) is ignored in this equation, leading to the base rate fallacy. A base rate is a phenomenon's basic rate of incidence. The base rate fallacy describes how people do not take the base rate of an event into account when solving probability problems. [12] This was explicitly tested by Dawes, Mirels, Gold and Donahue (1993), who had people judge both the base rate of people who had a particular personality trait and the probability that a person who had one personality trait also had another. For example, participants were asked how many people out of 100 answered true to the question "I am a conscientious person" and also, given that a person answered true to this question, how many would answer true to a different personality question. They found that participants equated the inverse probabilities (e.g., treating P(trait A | trait B) as equal to P(trait B | trait A)) even when it was obvious that they were not the same (the two questions were answered immediately after each other). [11]

A medical example is described by Axelsson. Say a doctor performs a test that is 99% accurate, and the patient tests positive for the disease. However, the incidence of the disease is 1/10,000. The patient's actual risk of having the disease is only about 1%, because the population of healthy people is so much larger than the population with the disease. This statistic often surprises people, due to the base rate fallacy, as many people do not take the basic incidence into account when judging probability. [12] Research by Maya Bar-Hillel (1980) suggests that the perceived relevance of information is vital to base-rate neglect: base rates are only included in judgments if they seem equally relevant to the other information. [13]
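A short Python sketch of this kind of calculation (assuming, for illustration, that the 99% accuracy figure applies to both sensitivity and specificity, which the description leaves implicit) shows why the posterior probability comes out near 1%:

```python
# Rare-disease example: P(disease | positive test) via Bayes' theorem.
prevalence = 1 / 10_000      # base rate of the disease
sensitivity = 0.99           # P(positive | disease)
false_positive_rate = 0.01   # P(positive | healthy), assuming 99% specificity

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.3%}")  # ~0.98%
```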

Some research has explored base rate neglect in children, as there was a lack of understanding about how these judgment heuristics develop. [14] [15] The authors of one such study wanted to understand the development of the heuristic, whether it differs between social judgments and other judgments, and whether children use base rates when they are not using the representativeness heuristic. The authors found that use of the representativeness heuristic as a strategy begins early on and is consistent. They also found that children initially use idiosyncratic strategies to make social judgments and use base rates more as they get older, but that use of the representativeness heuristic in the social arena also increases with age. Among the children surveyed, base rates were more readily used in judgments about objects than in social judgments. [15] After that research was conducted, Davidson (1995) explored how the representativeness heuristic and the conjunction fallacy in children related to children's stereotyping. Consistent with previous research, children based their responses to problems on base rates when the problems contained nonstereotypic information or when the children were older. There was also evidence that children commit the conjunction fallacy. Finally, as children got older, they used the representativeness heuristic on stereotyped problems and so made judgments consistent with stereotypes. Overall, there is evidence that even children use the representativeness heuristic, commit the conjunction fallacy, and disregard base rates. [14]

Research suggests that use or neglect of base rates can be influenced by how the problem is presented, which reminds us that the representativeness heuristic is not a "general, all purpose heuristic", but may have many contributing factors. [16] Base rates may be neglected more often when the information presented is not causal. [17] Base rates are used less if there is relevant individuating information. [18] Groups have been found to neglect base rate more than individuals do. [19] Use of base rates differs based on context. [20] Research on use of base rates has been inconsistent, with some authors suggesting a new model is necessary. [21]

Conjunction fallacy

A group of undergraduates were provided with a description of Linda, modelled to be representative of an active feminist. Participants were then asked to evaluate the probability of her being a feminist, the probability of her being a bank teller, and the probability of her being both a bank teller and a feminist. Probability theory dictates that the probability of being both a bank teller and a feminist (the conjunction of two sets) must be less than or equal to the probability of being either a feminist or a bank teller alone: a conjunction cannot be more probable than either of its constituents. Nevertheless, participants judged the conjunction (bank teller and feminist) as more probable than being a bank teller alone. [22] Some research suggests that the conjunction error may partially be due to subtle linguistic factors, such as inexplicit wording or the semantic interpretation of "probability". [23] [24] The authors argue that both logic and language use may relate to the error, and that it should be more fully investigated. [24]
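The rule the participants violated can be made concrete with a small Python sketch. The numbers here are purely illustrative (the Linda problem assigns no actual probabilities); the point is only that a conjunction can never exceed either of its constituents:

```python
# Conjunction rule: P(bank teller AND feminist) can never exceed P(bank teller).
# Hypothetical, illustrative probabilities; not taken from the original study.
p_bank_teller = 0.05
p_feminist_given_bank_teller = 0.30

p_both = p_bank_teller * p_feminist_given_bank_teller  # 0.015
assert p_both <= p_bank_teller  # the conjunction is necessarily no more probable

print(f"P(bank teller) = {p_bank_teller}, P(bank teller and feminist) = {p_both}")
```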

Disjunction fallacy

From probability theory, the disjunction of two events is at least as likely as either of the events individually. For example, the probability of being either a physics or a biology major is at least as high as the probability of being a physics major, if not higher. However, when a personality description (the data) seems to be very representative of a physics major (e.g., having a pocket protector) rather than a biology major, people judge that it is more likely for this person to be a physics major than a natural sciences major (which is a superset of physics). [22]

Evidence that the representativeness heuristic may cause the disjunction fallacy comes from Bar-Hillel and Neter (1993). They found that people judge a person who is highly representative of being a statistics major (e.g., highly intelligent, does math competitions) as being more likely to be a statistics major than a social sciences major (superset of statistics), but they do not think that he is more likely to be a Hebrew language major than a humanities major (superset of Hebrew language). Thus, only when the person seems highly representative of a category is that category judged as more probable than its superordinate category. These incorrect appraisals remained even in the face of losing real money in bets on probabilities. [22]

Insensitivity to sample size

The representativeness heuristic is also employed when subjects estimate the probability of a specific parameter of a sample. If the parameter closely resembles the population parameter, it is often given a high probability. This estimation process usually ignores the impact of the sample size.

A problem posed by Tversky and Kahneman, involving two hospitals of differing size, provides an example of this bias. [25]

Approximately 45 babies are born each day in the large hospital, while about 15 babies are born each day in the small hospital. Half (50%) of all babies born are boys, but the exact percentage varies from day to day. For a one-year period, each hospital recorded the days on which more than 60% of the babies born were boys. The question posed is: Which hospital do you think recorded more such days?

  • The larger hospital (21)
  • The smaller hospital (21)
  • About the same (that is, within 5% of each other) (53)

The values shown in parentheses are the number of students choosing each answer. [25]

The results show that more than half the respondents selected the wrong answer (third option). This is due to the respondents ignoring the effect of sample size. The respondents likely selected the third option because the same statistic (50% boys) describes both the large and the small hospital. According to sampling theory, a statistic computed from a small sample deviates from the population value considerably more than one computed from a large sample. [25] Therefore, the large hospital is more likely to stay close to the nominal value of 50%, and the small hospital is more likely to record days on which more than 60% of the babies are boys.
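A short Python sketch (standard library only; the per-day birth counts are taken from the problem, everything else is illustrative) makes the sample-size effect concrete: under a fair 50% rate, the small hospital should see days with more than 60% boys roughly twice as often as the large one.

```python
from math import comb

def prob_more_than_60_percent_boys(n_births: int, p_boy: float = 0.5) -> float:
    """Probability that strictly more than 60% of n_births babies are boys."""
    threshold = int(0.6 * n_births)  # need a boy count strictly above this
    return sum(comb(n_births, k) * p_boy**k * (1 - p_boy)**(n_births - k)
               for k in range(threshold + 1, n_births + 1))

print(f"Small hospital (15 births/day): {prob_more_than_60_percent_boys(15):.3f}")  # ~0.15
print(f"Large hospital (45 births/day): {prob_more_than_60_percent_boys(45):.3f}")  # ~0.07
```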

Misconceptions of chance and gambler's fallacy

The gambler's fallacy, also known as the Monte Carlo fallacy or the fallacy of the maturity of chances, is the belief that, if an event (whose occurrences are independent and identically distributed) has occurred less frequently than expected, it is more likely to happen again in the future (or vice versa). The fallacy is commonly associated with gambling, where it may be believed, for example, that the next dice roll is more than usually likely to be six because there have recently been fewer than the expected number of sixes.

The term "Monte Carlo fallacy" originates from an example of the phenomenon, in which the roulette wheel spun black 26 times in succession at the Monte Carlo Casino in 1913. [26]

Regression fallacy

The regression (or regressive) fallacy is an informal fallacy. It assumes that something has returned to normal because of corrective actions taken while it was abnormal, failing to account for natural fluctuations (regression toward the mean). It is frequently a special case of the post hoc fallacy.
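A brief Python simulation (with arbitrary, illustrative parameters) shows how regression to the mean can masquerade as the effect of a "corrective action": the worst performers on a first noisy measurement look better on a second measurement even though nothing was done to them.

```python
import random

random.seed(1)
N = 10_000
skill = [random.gauss(0, 1) for _ in range(N)]      # stable underlying ability
first = [s + random.gauss(0, 1) for s in skill]     # noisy first measurement
second = [s + random.gauss(0, 1) for s in skill]    # noisy second measurement, no intervention

# Select the worst 10% on the first measurement, as if singled out for "correction".
cutoff = sorted(first)[N // 10]
selected = [i for i in range(N) if first[i] <= cutoff]

avg_first = sum(first[i] for i in selected) / len(selected)
avg_second = sum(second[i] for i in selected) / len(selected)
print(f"Selected group, first measurement:  {avg_first:.2f}")   # well below average
print(f"Selected group, second measurement: {avg_second:.2f}")  # closer to the mean, with no intervention
```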


References

  1. Kahneman & Tversky 1972.
  2. Gilovich, Thomas; Savitsky, Kenneth (1996). "Like Goes with Like: The Role of Representativeness in Erroneous and Pseudo-Scientific Beliefs" (PDF). Skeptical Inquirer. 20 (2): 34–40. doi:10.1017/CBO9780511808098.036. Archived from the original (PDF) on 2014-11-04.
  3. Shah, Anuj K.; Oppenheimer, Daniel M. (2008). "Heuristics made easy: An effort-reduction framework". Psychological Bulletin. 134 (2): 207–222. doi:10.1037/0033-2909.134.2.207. PMID 18298269.
  4. Tversky & Kahneman 1982.
  5. Fortune, Erica E.; Goodie, Adam S. (2012). "Cognitive distortions as a component and treatment focus of pathological gambling: A review". Psychology of Addictive Behaviors. 26 (2): 298–310. doi:10.1037/a0026422. PMID 22121918.
  6. Tversky & Kahneman 1974.
  7. Nisbett, Richard E.; Ross, Lee (1980). Human Inference: Strategies and Shortcomings of Social Judgment. Prentice-Hall. pp. 115–118. ISBN 978-0-13-445073-5.
  8. Nilsson, Håkan; Juslin, Peter; Olsson, Henrik (2008). "Exemplars in the mist: The cognitive substrate of the representativeness heuristic". Scandinavian Journal of Psychology. 49 (3): 201–212. doi:10.1111/j.1467-9450.2008.00646.x. PMID 18419587.
  9. Garb, Howard N. (1996). "The representativeness and past-behavior heuristics in clinical judgment". Professional Psychology: Research and Practice. 27 (3): 272–277. doi:10.1037/0735-7028.27.3.272.
  10. Kahneman & Tversky 1973.
  11. Dawes, Robyn M.; Mirels, Herbert L.; Gold, Eric; Donahue, Eileen (1993). "Equating inverse probabilities in implicit personality judgments". Psychological Science. 4 (6): 396–400. doi:10.1111/j.1467-9280.1993.tb00588.x. S2CID 143928040.
  12. Axelsson, Stefan (2000). "The base-rate fallacy and the difficulty of intrusion detection". ACM Transactions on Information and System Security. 3 (3): 186–205. CiteSeerX 10.1.1.133.3797. doi:10.1145/357830.357849. S2CID 11421548.
  13. Bar-Hillel, Maya (1980). "The base-rate fallacy in probability judgments" (PDF). Acta Psychologica. 44 (3): 211–233. doi:10.1016/0001-6918(80)90046-3.
  14. Davidson, Denise (1995). "The representativeness heuristic and the conjunction fallacy effect in children's decision making". Merrill-Palmer Quarterly. 41 (3): 328–346. JSTOR 23087893.
  15. Jacobs, Janis E.; Potenza, Maria (1991). "The Use of Judgement Heuristics to Make Social and Object Decisions: A Developmental Perspective". Child Development. 62 (1): 166–178. doi:10.1111/j.1467-8624.1991.tb01522.x.
  16. Gigerenzer, Gerd; Hell, Wolfgang; Blank, Hartmut (1988). "Presentation and content: The use of base rates as a continuous variable". Journal of Experimental Psychology: Human Perception and Performance. 14 (3): 513–525. CiteSeerX 10.1.1.318.6320. doi:10.1037/0096-1523.14.3.513.
  17. Ajzen, Icek (1977). "Intuitive theories of events and the effects of base-rate information on prediction". Journal of Personality and Social Psychology. 35 (5): 303–314. doi:10.1037/0022-3514.35.5.303.
  18. Koehler, Jonathan J. (1996). "The base rate fallacy reconsidered: Descriptive, normative, and methodological challenges". Behavioral and Brain Sciences. 19 (1): 1–17. doi:10.1017/S0140525X00041157. S2CID 53343238.
  19. Argote, Linda; Seabright, Mark A.; Dyer, Linda (1986). "Individual versus group use of base-rate and individuating information". Organizational Behavior and Human Decision Processes. 38 (1): 65–75. doi:10.1016/0749-5978(86)90026-9.
  20. Zukier, Henri; Pepitone, Albert (1984). "Social roles and strategies in prediction: Some determinants of the use of base-rate information". Journal of Personality and Social Psychology. 47 (2): 349–360. doi:10.1037/0022-3514.47.2.349.
  21. Medin, Douglas L.; Edelson, Stephen M. (1988). "Problem structure and the use of base-rate information from experience". Journal of Experimental Psychology: General. 117 (1): 68–85. doi:10.1037/0096-3445.117.1.68. PMID 2966231.
  22. Tversky & Kahneman 1983.
  23. Fiedler, Klaus (1988). "The dependence of the conjunction fallacy on subtle linguistic factors". Psychological Research. 50 (2): 123–129. doi:10.1007/BF00309212. S2CID 144369350.
  24. Politzer, Guy; Noveck, Ira A. (1991). "Are conjunction rule violations the result of conversational rule violations?". Journal of Psycholinguistic Research. 20 (2): 83–103. doi:10.1007/BF01067877. S2CID 143726019.
  25. AlKhars, Mohammed; Evangelopoulos, Nicholas; Pavur, Robert; Kulkarni, Shailesh (2019-04-10). "Cognitive biases resulting from the representativeness heuristic in operations management: an experimental investigation". Psychology Research and Behavior Management. Retrieved 2021-04-28.
  26. "Why we gamble like monkeys". BBC.com. 2015-01-02.

Works by Kahneman and Tversky

  • Kahneman, Daniel; Tversky, Amos (1972). "Subjective probability: A judgment of representativeness". Cognitive Psychology. 3 (3): 430–454.
  • Kahneman, Daniel; Tversky, Amos (1973). "On the psychology of prediction". Psychological Review. 80 (4): 237–251.
  • Tversky, Amos; Kahneman, Daniel (1971). "Belief in the law of small numbers". Psychological Bulletin. 76 (2): 105–110.
  • Tversky, Amos; Kahneman, Daniel (1974). "Judgment under uncertainty: Heuristics and biases". Science. 185 (4157): 1124–1131.
  • Tversky, Amos; Kahneman, Daniel (1982). "Judgments of and by representativeness". In Kahneman, Daniel; Slovic, Paul; Tversky, Amos (eds.). Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press.
  • Tversky, Amos; Kahneman, Daniel (1983). "Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment". Psychological Review. 90 (4): 293–315.
