Base rate fallacy

The base rate fallacy, also called base rate neglect [1] or base rate bias, is a type of fallacy in which, when presented with related base rate information (i.e., general information on prevalence) and specific information (i.e., information pertaining only to a specific case), people tend to ignore the base rate in favor of the individuating information, rather than correctly integrating the two. [2]

Base rate neglect is a specific form of the more general extension neglect.

False positive paradox

An example of the base rate fallacy is the false positive paradox. This paradox describes situations where there are more false positive test results than true positives. For example, 50 of 1,000 people test positive for an infection, but only 10 have the infection, meaning 40 tests were false positives. The probability of a positive test result is determined not only by the accuracy of the test but also by the characteristics of the sampled population. [3] When the prevalence, the proportion of those who have a given condition, is lower than the test's false positive rate, even tests that have a very low chance of giving a false positive in an individual case will give more false than true positives overall. [4] The paradox surprises most people. [5]

It is especially counter-intuitive when interpreting a positive result in a test on a low-prevalence population after having dealt with positive results drawn from a high-prevalence population. [4] If the false positive rate of the test is higher than the proportion of the new population with the condition, then a test administrator whose experience has been drawn from testing in a high-prevalence population may conclude from experience that a positive test result usually indicates a positive subject, when in fact a false positive is far more likely to have occurred.
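
The arithmetic behind the paradox is easy to check directly. The following is a minimal sketch in Python (the function and the numbers are illustrative, not taken from the cited sources): a perfectly sensitive test whose false positive rate exceeds the prevalence still yields more false positives than true positives.

    # Minimal sketch (illustrative numbers): expected outcomes of a test with
    # perfect sensitivity, applied where prevalence is below the false positive rate.
    def expected_outcomes(n, prevalence, sensitivity, false_positive_rate):
        infected = n * prevalence
        uninfected = n - infected
        true_positives = infected * sensitivity
        false_positives = uninfected * false_positive_rate
        return true_positives, false_positives

    tp, fp = expected_outcomes(1000, 0.01, 1.0, 0.05)   # 1% prevalence, 5% FP rate
    print(tp, fp)          # 10.0 true positives vs. 49.5 false positives
    print(tp / (tp + fp))  # ~0.17: only about a sixth of positives are genuine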

Examples

Example 1: Disease

High-incidence population

Number of people    Infected               Uninfected             Total
Test positive       400 (true positive)    30 (false positive)    430
Test negative       0 (false negative)     570 (true negative)    570
Total               400                    600                    1000

Imagine running an infectious disease test on a population A of 1000 persons, in which 40% are infected. The test has a false positive rate of 5% (0.05) and a false negative rate of zero. The expected outcome of the 1000 tests on population A would be:

Infected and test indicates disease (true positive)
1000 × 40/100 = 400 people would receive a true positive
Uninfected and test indicates disease (false positive)
1000 × (100 − 40)/100 × 0.05 = 30 people would receive a false positive
The remaining 570 tests are correctly negative.

So, in population A, a person receiving a positive test could be over 93% confident (400/(30 + 400) ≈ 93%) that it correctly indicates infection.
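
The population A numbers can be reproduced with a few lines of Python (a sketch; the variable names are ours):

    # Population A: 1000 people, 40% infected, 5% false positive rate,
    # no false negatives.
    n, prevalence, fp_rate = 1000, 0.40, 0.05
    infected = n * prevalence                # 400
    uninfected = n - infected                # 600
    true_pos = infected                      # 400 (every infection is detected)
    false_pos = uninfected * fp_rate         # 30
    ppv = true_pos / (true_pos + false_pos)  # 400 / 430
    print(ppv)                               # ~0.93, i.e. over 93% confidence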

Low-incidence population

Number of people    Infected               Uninfected             Total
Test positive       20 (true positive)     49 (false positive)    69
Test negative       0 (false negative)     931 (true negative)    931
Total               20                     980                    1000

Now consider the same test applied to population B, in which only 2% is infected. The expected outcome of 1000 tests on population B would be:

Infected and test indicates disease (true positive)
1000 × 2/100 = 20 people would receive a true positive
Uninfected and test indicates disease (false positive)
1000 × (100 − 2)/100 × 0.05 = 49 people would receive a false positive
The remaining 931 (= 1000 - (49 + 20)) tests are correctly negative.

In population B, only 20 of the 69 total people with a positive test result are actually infected. So, the probability of actually being infected after one is told that one is infected is only 29% (20/(20 + 49) ≈ 29%) for a test that otherwise appears to be "95% accurate".
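
Rerunning the same sketch with population B's prevalence shows how sharply the positive predictive value drops even though the test itself is unchanged:

    # Population B: 1000 people, 2% infected, same 5% false positive rate.
    n, prevalence, fp_rate = 1000, 0.02, 0.05
    true_pos = n * prevalence                    # 20
    false_pos = (n - n * prevalence) * fp_rate   # 980 * 0.05 = 49
    ppv = true_pos / (true_pos + false_pos)      # 20 / 69
    print(round(ppv, 2))                         # 0.29, despite the "95% accurate" test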

A tester with experience of group A might find it a paradox that in group B, a result that had usually correctly indicated infection is now usually a false positive. The confusion of the posterior probability of infection with the prior probability of receiving a false positive is a natural error after receiving a health-threatening test result.

Example 2: Drunk drivers

A group of police officers have breathalyzers displaying false drunkenness in 5% of the cases in which the driver is sober. However, the breathalyzers never fail to detect a truly drunk person. One in a thousand drivers is driving drunk. Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. We assume you do not know anything else about them. How high is the probability they really are drunk?

Many would answer as high as 95%, but the correct probability is about 2%.

An explanation for this is as follows: on average, for every 1,000 drivers tested,

- 1 driver is drunk, and it is 100% certain that for that driver there is a true positive test result, so there is 1 true positive test result;
- 999 drivers are not drunk, and among those drivers there are 5% false positive test results, so there are 49.95 false positive test results.

Therefore, the probability that one of the drivers among the 1 + 49.95 = 50.95 positive test results really is drunk is 1/50.95 ≈ 0.019627, or about 2%.
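
The same per-1,000 breakdown can be written out as a short calculation (a sketch of the arithmetic above):

    # For every 1,000 drivers stopped at random:
    drunk = 1000 * (1 / 1000)       # 1 drunk driver, always detected
    sober = 1000 - drunk            # 999 sober drivers
    false_pos = sober * 0.05        # 49.95 false positives
    positives = drunk + false_pos   # 50.95 positive tests in total
    print(drunk / positives)        # ~0.0196, i.e. about 2%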

The validity of this result does, however, hinge on the validity of the initial assumption that the police officer stopped the driver truly at random, and not because of bad driving. If that or another non-arbitrary reason for stopping the driver was present, then the calculation also involves the probability of a drunk driver driving competently and a non-drunk driver driving (in-)competently.

More formally, the same probability of roughly 0.02 can be established using Bayes's theorem. The goal is to find the probability that the driver is drunk given that the breathalyzer indicated they are drunk, which can be represented as

p(drunk | D)

where D means that the breathalyzer indicates that the driver is drunk. Bayes's theorem tells us that

p(drunk | D) = p(D | drunk) × p(drunk) / p(D)

We were told the following in the first paragraph:

p(drunk) = 0.001
p(sober) = 0.999
p(D | drunk) = 1.00

and

p(D | sober) = 0.05

As you can see from the formula, one needs p(D) for Bayes' theorem, which one can compute from the preceding values using the law of total probability:

p(D) = p(D | drunk) × p(drunk) + p(D | sober) × p(sober)

which gives

p(D) = (1.00 × 0.001) + (0.05 × 0.999) = 0.05095

Plugging these numbers into Bayes' theorem, one finds that

p(drunk | D) = (1.00 × 0.001) / 0.05095 ≈ 0.019627
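
The derivation can be mirrored in a few lines of Python (a sketch using the values stated above):

    # Bayes' theorem with the stated values.
    p_drunk = 0.001                # prior: 1 in 1,000 drivers is drunk
    p_sober = 1 - p_drunk          # 0.999
    p_D_given_drunk = 1.0          # a drunk driver is always detected
    p_D_given_sober = 0.05         # 5% false positive rate for sober drivers
    # Law of total probability:
    p_D = p_D_given_drunk * p_drunk + p_D_given_sober * p_sober   # 0.05095
    # Posterior probability of being drunk given a positive test:
    print(p_D_given_drunk * p_drunk / p_D)                        # ~0.019627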

Example 3: Terrorist identification

In a city of 1 million inhabitants let there be 100 terrorists and 999,900 non-terrorists. To simplify the example, it is assumed that all people present in the city are inhabitants. Thus, the base rate probability of a randomly selected inhabitant of the city being a terrorist is 0.0001, and the base rate probability of that same inhabitant being a non-terrorist is 0.9999. In an attempt to catch the terrorists, the city installs an alarm system with a surveillance camera and automatic facial recognition software.

The software has two failure rates of 1%:

- The false negative rate: if the camera scans a terrorist, the bell will ring 99% of the time, and it will fail to ring 1% of the time.
- The false positive rate: if the camera scans a non-terrorist, the bell will not ring 99% of the time, but it will ring 1% of the time.

Suppose now that an inhabitant triggers the alarm. What is the chance that the person is a terrorist? In other words, what is P(T | B), the probability that a terrorist has been detected given the ringing of the bell? Someone making the 'base rate fallacy' would infer that there is a 99% chance that the detected person is a terrorist. Although the inference seems to make sense, it is actually bad reasoning, and a calculation below will show that the chances they are a terrorist are actually near 1%, not near 99%.

The fallacy arises from confusing the natures of two different failure rates. The 'number of non-bells per 100 terrorists' and the 'number of non-terrorists per 100 bells' are unrelated quantities. One does not necessarily equal the other, and they don't even have to be almost equal. To show this, consider what happens if an identical alarm system were set up in a second city with no terrorists at all. As in the first city, the alarm sounds for 1 out of every 100 non-terrorist inhabitants detected, but unlike in the first city, the alarm never sounds for a terrorist. Therefore, 100% of all occasions of the alarm sounding are for non-terrorists, but a false negative rate cannot even be calculated. The 'number of non-terrorists per 100 bells' in that city is 100, yet P(T | B) = 0%. There is zero chance that a terrorist has been detected given the ringing of the bell.

Imagine that the first city's entire population of one million people pass in front of the camera. About 99 of the 100 terrorists will trigger the alarm—and so will about 9,999 of the 999,900 non-terrorists. Therefore, about 10,098 people will trigger the alarm, among which about 99 will be terrorists. So, the probability that a person triggering the alarm actually is a terrorist, is only about 99 in 10,098, which is less than 1%, and very, very far below our initial guess of 99%.
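
The screening arithmetic can be verified directly (a minimal sketch):

    # City-wide screening: 100 terrorists, 999,900 non-terrorists, 1% error rates.
    true_alarms = 100 * 0.99                    # ~99 terrorists trigger the alarm
    false_alarms = 999_900 * 0.01               # ~9,999 non-terrorists trigger it
    total_alarms = true_alarms + false_alarms   # ~10,098 alarms in total
    print(true_alarms / total_alarms)           # ~0.0098, i.e. just under 1%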

The base rate fallacy is so misleading in this example because there are many more non-terrorists than terrorists, and the number of false positives (non-terrorists scanned as terrorists) is so much larger than the true positives (terrorists scanned as terrorists).

Findings in psychology

In experiments, people have been found to prefer individuating information over general information when the former is available. [6] [7] [8]

In some experiments, students were asked to estimate the grade point averages (GPAs) of hypothetical students. When given relevant statistics about GPA distribution, students tended to ignore them if given descriptive information about the particular student even if the new descriptive information was obviously of little or no relevance to school performance. [7] This finding has been used to argue that interviews are an unnecessary part of the college admissions process because interviewers are unable to pick successful candidates better than basic statistics.

Psychologists Daniel Kahneman and Amos Tversky attempted to explain this finding in terms of a simple rule or "heuristic" called representativeness. They argued that many judgments relating to likelihood, or to cause and effect, are based on how representative one thing is of another, or of a category. [7] Kahneman considers base rate neglect to be a specific form of extension neglect. [9] Richard Nisbett has argued that some attributional biases like the fundamental attribution error are instances of the base rate fallacy: people do not use the "consensus information" (the "base rate") about how others behaved in similar situations and instead prefer simpler dispositional attributions. [10]

There is considerable debate in psychology on the conditions under which people do or do not appreciate base rate information. [11] [12] Researchers in the heuristics-and-biases program have stressed empirical findings showing that people tend to ignore base rates and make inferences that violate certain norms of probabilistic reasoning, such as Bayes' theorem. The conclusion drawn from this line of research was that human probabilistic thinking is fundamentally flawed and error-prone. [13] Other researchers have emphasized the link between cognitive processes and information formats, arguing that such conclusions are not generally warranted. [14] [15]

Consider again Example 2 from above. The required inference is to estimate the (posterior) probability that a (randomly picked) driver is drunk, given that the breathalyzer test is positive. Formally, this probability can be calculated using Bayes' theorem, as shown above. However, there are different ways of presenting the relevant information. Consider the following, formally equivalent variant of the problem:

 1 out of 1000 drivers are driving drunk. The breathalyzers never fail to detect a truly drunk person. For 50 out of the 999 drivers who are not drunk the breathalyzer falsely displays drunkenness. Suppose the policemen then stop a driver at random, and force them to take a breathalyzer test. It indicates that they are drunk. We assume you don't know anything else about them. How high is the probability they really are drunk?

In this case, the relevant numerical information—p(drunk), p(D | drunk), p(D | sober)—is presented in terms of natural frequencies with respect to a certain reference class (see reference class problem). Empirical studies show that people's inferences correspond more closely to Bayes' rule when information is presented this way, helping to overcome base-rate neglect in laypeople [15] and experts. [16] As a consequence, organizations like the Cochrane Collaboration recommend using this kind of format for communicating health statistics. [17] Teaching people to translate these kinds of Bayesian reasoning problems into natural frequency formats is more effective than merely teaching them to plug probabilities (or percentages) into Bayes' theorem. [18] It has also been shown that graphical representations of natural frequencies (e.g., icon arrays) help people to make better inferences. [18] [19] [20]
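
Translating probabilities into natural frequencies amounts to scaling them onto a concrete reference sample. The following is a minimal sketch; the helper function and the rounding to whole people are ours:

    # Convert (base rate, hit rate, false alarm rate) into counts for a sample.
    def to_natural_frequencies(sample_size, base_rate, hit_rate, false_alarm_rate):
        with_condition = round(sample_size * base_rate)
        without_condition = sample_size - with_condition
        true_pos = round(with_condition * hit_rate)
        false_pos = round(without_condition * false_alarm_rate)
        return with_condition, without_condition, true_pos, false_pos

    # The drunk-driving problem for a reference sample of 1,000 drivers:
    # 1 driver is drunk and tests positive; ~50 of the 999 sober drivers also
    # test positive, so p(drunk | positive) ≈ 1 / (1 + 50).
    print(to_natural_frequencies(1000, 0.001, 1.0, 0.05))   # (1, 999, 1, 50)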

Why are natural frequency formats helpful? One important reason is that this information format facilitates the required inference because it simplifies the necessary calculations. This can be seen when using an alternative way of computing the required probability p(drunk|D):

p(drunk | D) = N(drunk ∩ D) / N(D) = 1 / 51 ≈ 0.0196

where N(drunk ∩ D) denotes the number of drivers that are drunk and get a positive breathalyzer result, and N(D) denotes the total number of cases with a positive breathalyzer result. The equivalence of this equation to the one above follows from the axioms of probability theory, according to which N(drunk ∩ D) = N × p(D | drunk) × p(drunk). Importantly, although this equation is formally equivalent to Bayes' rule, it is not psychologically equivalent. Using natural frequencies simplifies the inference because the required mathematical operation can be performed on natural numbers instead of normalized fractions (i.e., probabilities), because it makes the high number of false positives more transparent, and because natural frequencies exhibit a "nested-set structure". [21] [22]

Not every frequency format facilitates Bayesian reasoning. [22] [23] Natural frequencies refer to frequency information that results from natural sampling, [24] which preserves base rate information (e.g., the number of drunk drivers when taking a random sample of drivers). This is different from systematic sampling, in which base rates are fixed a priori (e.g., in scientific experiments). In the latter case it is not possible to infer the posterior probability p(drunk | positive test) by comparing the number of drivers who are drunk and test positive with the total number of people who get a positive breathalyzer result, because base rate information is not preserved and must be explicitly re-introduced using Bayes' theorem.
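
The difference can be illustrated with a small simulation: under natural sampling, drivers are drawn at random, so the base rate is carried along in the raw counts and the posterior can be read off from them. This is an illustrative sketch, not taken from the cited studies:

    # Natural sampling: random draws preserve the base rate, so the ratio of
    # "drunk and positive" to "positive" approximates p(drunk | positive).
    import random

    random.seed(0)
    drunk_and_positive = positives = 0
    for _ in range(1_000_000):
        drunk = random.random() < 0.001              # base rate enters via sampling
        positive = drunk or random.random() < 0.05   # no misses, 5% false positives
        if positive:
            positives += 1
            drunk_and_positive += drunk
    print(drunk_and_positive / positives)            # close to 0.02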

See also

Gambler's fallacy
Entropy (information theory)
Simpson's paradox
Cognitive bias
Bayes' theorem
Prosecutor's fallacy
Sorites paradox
Availability heuristic
Decision theory
Representativeness heuristic
Conjunction fallacy
Classification rule
False discovery rate
Confusion of the inverse
False positive rate
Conditional probability
Heuristic
Jurimetrics
Intuitive statistics
Maya Bar-Hillel

References

  1. Welsh, Matthew B.; Navarro, Daniel J. (2012). "Seeing is believing: Priors, trust, and base rate neglect". Organizational Behavior and Human Decision Processes. 119 (1): 1–14. doi:10.1016/j.obhdp.2012.04.001. hdl: 2440/41190 . ISSN   0749-5978.
  2. "Logical Fallacy: The Base Rate Fallacy". Fallacyfiles.org. Retrieved 2013-06-15.
  3. Rheinfurth, M. H.; Howell, L. W. (March 1998). Probability and Statistics in Aerospace Engineering (PDF). NASA. p. 16. MESSAGE: False positive tests are more probable than true positive tests when the overall population has a low prevalence of the disease. This is called the false-positive paradox.
  4. Vacher, H. L. (May 2003). "Quantitative literacy - drug testing, cancer screening, and the identification of igneous rocks". Journal of Geoscience Education: 2. At first glance, this seems perverse: the less the students as a whole use steroids, the more likely a student identified as a user will be a non-user. This has been called the False Positive Paradox - Citing: Gonick, L.; Smith, W. (1993). The cartoon guide to statistics. New York: Harper Collins. p. 49.
  5. Madison, B. L. (August 2007). "Mathematical Proficiency for Citizenship". In Schoenfeld, A. H. (ed.). Assessing Mathematical Proficiency. Mathematical Sciences Research Institute Publications (New ed.). Cambridge University Press. p. 122. ISBN   978-0-521-69766-8. The correct [probability estimate...] is surprising to many; hence, the term paradox.
  6. Bar-Hillel, Maya (1980). "The base-rate fallacy in probability judgments" (PDF). Acta Psychologica. 44 (3): 211–233. doi:10.1016/0001-6918(80)90046-3.
  7. Kahneman, Daniel; Amos Tversky (1973). "On the psychology of prediction". Psychological Review. 80 (4): 237–251. doi:10.1037/h0034747. S2CID 17786757.
  8. Kahneman, Daniel; Amos Tversky (1985). "Evidential impact of base rates". In Daniel Kahneman, Paul Slovic & Amos Tversky (ed.). Judgment under uncertainty: Heuristics and biases. Science. 185. pp. 153–160. doi:10.1126/science.185.4157.1124. PMID   17835457. S2CID   143452957.
  9. Kahneman, Daniel (2000). "Evaluation by moments, past and future". In Daniel Kahneman and Amos Tversky (ed.). Choices, Values and Frames.
  10. Nisbett, Richard E.; E. Borgida; R. Crandall; H. Reed (1976). "Popular induction: Information is not always informative". In J. S. Carroll & J. W. Payne (ed.). Cognition and social behavior. 2. pp. 227–236.
  11. Koehler, J. J. (2010). "The base rate fallacy reconsidered: Descriptive, normative, and methodological challenges". Behavioral and Brain Sciences. 19: 1–17. doi:10.1017/S0140525X00041157. S2CID   53343238.
  12. Barbey, A. K.; Sloman, S. A. (2007). "Base-rate respect: From ecological rationality to dual processes". Behavioral and Brain Sciences. 30 (3): 241–254, discussion 255–297. doi:10.1017/S0140525X07001653. PMID   17963533. S2CID   31741077.
  13. Tversky, A.; Kahneman, D. (1974). "Judgment under Uncertainty: Heuristics and Biases". Science. 185 (4157): 1124–1131. Bibcode:1974Sci...185.1124T. doi:10.1126/science.185.4157.1124. PMID   17835457. S2CID   143452957.
  14. Cosmides, Leda; John Tooby (1996). "Are humans good intuitive statisticians after all? Rethinking some conclusions of the literature on judgment under uncertainty". Cognition. 58: 1–73. CiteSeerX   10.1.1.131.8290 . doi:10.1016/0010-0277(95)00664-8. S2CID   18631755.
  15. Gigerenzer, G.; Hoffrage, U. (1995). "How to improve Bayesian reasoning without instruction: Frequency formats". Psychological Review. 102 (4): 684. CiteSeerX 10.1.1.128.3201. doi:10.1037/0033-295X.102.4.684.
  16. Hoffrage, U.; Lindsey, S.; Hertwig, R.; Gigerenzer, G. (2000). "Medicine: Communicating Statistical Information". Science. 290 (5500): 2261–2262. doi:10.1126/science.290.5500.2261. PMID   11188724. S2CID   33050943.
  17. Akl, E. A.; Oxman, A. D.; Herrin, J.; Vist, G. E.; Terrenato, I.; Sperati, F.; Costiniuk, C.; Blank, D.; Schünemann, H. (2011). Schünemann, Holger (ed.). "Using alternative statistical formats for presenting risks and risk reductions". The Cochrane Database of Systematic Reviews (3): CD006776. doi:10.1002/14651858.CD006776.pub2. PMC   6464912 . PMID   21412897.
  18. Sedlmeier, P.; Gigerenzer, G. (2001). "Teaching Bayesian reasoning in less than two hours". Journal of Experimental Psychology: General. 130 (3): 380. doi:10.1037/0096-3445.130.3.380. hdl:11858/00-001M-0000-0025-9504-E.
  19. Brase, G. L. (2009). "Pictorial representations in statistical reasoning". Applied Cognitive Psychology. 23 (3): 369–381. doi:10.1002/acp.1460. S2CID   18817707.
  20. Edwards, A.; Elwyn, G.; Mulley, A. (2002). "Explaining risks: Turning numerical data into meaningful pictures". BMJ. 324 (7341): 827–830. doi:10.1136/bmj.324.7341.827. PMC   1122766 . PMID   11934777.
  21. Girotto, V.; Gonzalez, M. (2001). "Solving probabilistic and statistical problems: A matter of information structure and question form". Cognition. 78 (3): 247–276. doi:10.1016/S0010-0277(00)00133-5. PMID   11124351. S2CID   8588451.
  22. Hoffrage, U.; Gigerenzer, G.; Krauss, S.; Martignon, L. (2002). "Representation facilitates reasoning: What natural frequencies are and what they are not". Cognition. 84 (3): 343–352. doi:10.1016/S0010-0277(02)00050-1. PMID 12044739. S2CID 9595672.
  23. Gigerenzer, G.; Hoffrage, U. (1999). "Overcoming difficulties in Bayesian reasoning: A reply to Lewis and Keren (1999) and Mellers and McGraw (1999)". Psychological Review. 106 (2): 425. doi:10.1037/0033-295X.106.2.425. hdl: 11858/00-001M-0000-0025-9CB4-8 .
  24. Kleiter, G. D. (1994). "Natural Sampling: Rationality without Base Rates". Contributions to Mathematical Psychology, Psychometrics, and Methodology. Recent Research in Psychology. pp. 375–388. doi:10.1007/978-1-4612-4308-3_27. ISBN   978-0-387-94169-1.