Texas sharpshooter fallacy

The Texas sharpshooter fallacy is an informal fallacy committed when differences in data are ignored while similarities are overemphasized, so that a false conclusion is inferred. [1] This fallacy is the philosophical or rhetorical application of the multiple comparisons problem (in statistics) and apophenia (in cognitive psychology). It is related to the clustering illusion, which is the tendency in human cognition to interpret patterns where none actually exist.

The name comes from a metaphor about a person from Texas who fires a gun at the side of a barn, then paints a shooting target centered on the tightest cluster of shots and claims to be a sharpshooter. [2] [3] [4]

Structure

A set of 100 randomly generated points displayed on a scatter graph. Examining the points, it is easy to identify apparent patterns. In particular, rather than spreading out evenly, it is not uncommon for random data points to form clusters, giving the (false) impression of "hot spots" created by some underlying cause.
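
As a minimal illustration of how such apparent "hot spots" arise, the sketch below (not the source of the figure above; the grid size and random seed are arbitrary choices) scatters 100 uniformly random points over a unit square and counts how many fall into each cell of a 5 × 5 grid. Although the average cell contains 4 points, the most crowded cell typically contains roughly twice that many, purely by chance.

    # Illustrative simulation: uniform random points still produce apparent clusters.
    import random

    random.seed(42)  # arbitrary seed, for reproducibility only
    points = [(random.random(), random.random()) for _ in range(100)]

    counts = {}
    for x, y in points:
        cell = (int(x * 5), int(y * 5))        # which of the 25 grid cells the point falls in
        counts[cell] = counts.get(cell, 0) + 1

    print("average points per cell:", 100 / 25)            # 4.0
    print("most crowded cell holds:", max(counts.values()))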

The Texas sharpshooter fallacy often arises when a person has a large amount of data at their disposal but focuses only on a small subset of that data. Some factor other than the one attributed may give all the elements in that subset some kind of common property (or pair of common properties, when arguing for correlation). If the person attempts to explain the likelihood of finding some subset of the large data set with a common property by appealing to a factor other than its actual cause, then that person is likely committing a Texas sharpshooter fallacy.

The fallacy is characterized by a lack of a specific hypothesis prior to the gathering of data, or the formulation of a hypothesis only after data have already been gathered and examined. [5] Thus, it typically does not apply if one had an ex ante, or prior, expectation of the particular relationship in question before examining the data. For example, one might, prior to examining the information, have in mind a specific physical mechanism implying the particular relationship. One could then use the information to give support or cast doubt on the presence of that mechanism. Alternatively, if a second set of additional information can be generated using the same process as the original information, one can use the first (original) set of information to construct a hypothesis, and then test the hypothesis on the second (new) set of information. (See hypothesis testing.) However, after constructing a hypothesis on a set of data, one would be committing the Texas sharpshooter fallacy if they then tested that hypothesis on the same data (see hypotheses suggested by the data).
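
The following sketch illustrates the out-of-sample safeguard described above (the twenty noise predictors and all variable names are hypothetical, and every variable is generated independently, so any "pattern" found is spurious). Searching one dataset for the strongest-looking correlation plays the role of painting the target around the bullet holes; re-testing that same predictor on a second dataset generated by the same process typically shows the apparent relationship vanishing.

    # Illustrative sketch: a hypothesis suggested by one dataset is re-tested on fresh data.
    import random

    def pearson(xs, ys):
        # sample Pearson correlation coefficient
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    def noise_dataset(n=50, n_predictors=20):
        # outcome and predictors are independent noise: no real relationship exists
        outcome = [random.gauss(0, 1) for _ in range(n)]
        predictors = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n_predictors)]
        return outcome, predictors

    random.seed(1)  # arbitrary seed, for reproducibility only

    # Step 1: "discover" the strongest-looking correlation by searching the first dataset.
    y1, xs1 = noise_dataset()
    best = max(range(len(xs1)), key=lambda i: abs(pearson(xs1[i], y1)))
    print("correlation found by searching:", round(pearson(xs1[best], y1), 2))

    # Step 2: test that same predictor on a second dataset from the same process.
    y2, xs2 = noise_dataset()
    print("same predictor on fresh data:  ", round(pearson(xs2[best], y2), 2))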

Examples

A Swedish study in 1992 tried to determine whether power lines caused some kind of poor health effects. [6] The researchers surveyed people living within 300 meters of high-voltage power lines over 25 years and looked for statistically significant increases in rates of over 800 ailments. The study found that the incidence of childhood leukemia was four times higher among those who lived closest to the power lines, and it spurred calls to action by the Swedish government. [7] The problem with the conclusion, however, was that the number of potential ailments, i.e., over 800, was so large that it created a high probability that at least one ailment would exhibit the appearance of a statistically significant difference by chance alone, a situation known as the multiple comparisons problem. Subsequent studies failed to show any association between power lines and childhood leukemia. [8]
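
A back-of-the-envelope calculation shows why. Treating the outcomes, for simplicity, as 800 independent tests each conducted at the conventional 5% significance level (an idealization; the actual ailments were neither fully independent nor tested identically), the chance that none of them appears significant by chance alone is vanishingly small:

    # Idealized multiple-comparisons arithmetic: 800 independent tests at the 5% level.
    alpha, tests = 0.05, 800
    p_none_significant = (1 - alpha) ** tests     # on the order of 10**-18
    p_at_least_one = 1 - p_none_significant       # essentially 1
    print(p_none_significant, p_at_least_one)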

The fallacy is often found in modern-day interpretations of the quatrains of Nostradamus. Nostradamus's quatrains are often liberally translated from their original (archaic) French versions, in which their historical context is often lost, and then applied to support the erroneous conclusion that Nostradamus predicted a given modern-day event after the event actually occurred. [9]

Related Research Articles

The phrase "correlation does not imply causation" refers to the inability to legitimately deduce a cause-and-effect relationship between two events or variables solely on the basis of an observed association or correlation between them. The idea that "correlation implies causation" is an example of a questionable-cause logical fallacy, in which two events occurring together are taken to have established a cause-and-effect relationship. This fallacy is also known by the Latin phrase cum hoc ergo propter hoc. This differs from the fallacy known as post hoc ergo propter hoc, in which an event following another is seen as a necessary consequence of the former event, and from conflation, the errant merging of two events, ideas, databases, etc., into one.

A fallacy is the use of invalid or otherwise faulty reasoning in the construction of an argument that may appear to be well-reasoned if the flaw goes unnoticed. The term was introduced in the Western intellectual tradition by the Aristotelian De Sophisticis Elenchis.

Post hoc ergo propter hoc is an informal fallacy that states: "Since event Y followed event X, event Y must have been caused by event X." It is often shortened simply to post hoc fallacy. A logical fallacy of the questionable cause variety, it is subtly different from the fallacy cum hoc ergo propter hoc, in which two events occur simultaneously or the chronological ordering is insignificant or unknown. Post hoc is a logical fallacy in which one event seems to be the cause of a later event because it occurred earlier.

A faulty generalization is an informal fallacy wherein a conclusion is drawn about all or many instances of a phenomenon on the basis of one or a few instances of that phenomenon. It is similar to a proof by example in mathematics. It is an example of jumping to conclusions. For example, one may generalize about all people or all members of a group from what one knows about just one or a few people.

The questionable cause—also known as causal fallacy, false cause, or non causa pro causa—is a category of informal fallacies in which a cause is incorrectly identified.

Selection bias is the bias introduced by the selection of individuals, groups, or data for analysis in such a way that proper randomization is not achieved, thereby failing to ensure that the sample obtained is representative of the population intended to be analyzed. It is sometimes referred to as the selection effect. The phrase "selection bias" most often refers to the distortion of a statistical analysis, resulting from the method of collecting samples. If the selection bias is not taken into account, then some conclusions of the study may be false.

In statistics, hypotheses suggested by a given dataset, when tested with the same dataset that suggested them, are likely to be accepted even when they are not true. This is because circular reasoning would be involved: something seems true in the limited data set; therefore we hypothesize that it is true in general; therefore we wrongly test it on the same, limited data set, which seems to confirm that it is true. Generating hypotheses based on data already observed, in the absence of testing them on new data, is referred to as post hoc theorizing.

Anecdotal evidence is evidence based only on personal observation, collected in a casual or non-systematic manner.

Apophenia is the tendency to perceive meaningful connections between unrelated things. The term was coined by psychiatrist Klaus Conrad in his 1958 publication on the beginning stages of schizophrenia. He defined it as "unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness". He described the early stages of delusional thought as self-referential over-interpretations of actual sensory perceptions, as opposed to hallucinations. Apophenia has also come to describe a human propensity to unreasonably seek definite patterns in random information, such as can occur in gambling.

Statistics, when used in a misleading fashion, can trick the casual observer into believing something other than what the data shows. That is, a misuse of statistics occurs when a statistical argument asserts a falsehood. In some cases, the misuse may be accidental. In others, it is purposeful and for the gain of the perpetrator. When the statistical reason involved is false or misapplied, this constitutes a statistical fallacy.

Data dredging is the misuse of data analysis to find patterns in data that can be presented as statistically significant, thus dramatically increasing the risk of false positives while understating it. This is done by performing many statistical tests on the data and only reporting those that come back with significant results.
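
A small simulation makes the mechanism concrete (the coin-flip setup, sample sizes, and 5% threshold are illustrative assumptions, not taken from any particular study): if 100 fair coins are each "tested" for bias, about five of them will cross the significance threshold by chance, and reporting only those makes perfectly fair coins look biased.

    # Illustrative data dredging: run many tests on pure noise, report only the "hits".
    import random

    random.seed(7)  # arbitrary seed, for reproducibility only
    flips_per_coin, coins = 100, 100
    looks_biased = []
    for coin in range(coins):
        heads = sum(random.random() < 0.5 for _ in range(flips_per_coin))
        expected = flips_per_coin * 0.5
        se = (flips_per_coin * 0.25) ** 0.5        # standard error of the head count
        z = (heads - expected) / se
        if abs(z) > 1.96:                          # conventional two-sided 5% threshold
            looks_biased.append(coin)

    print(len(looks_biased), "of", coins, "fair coins look 'biased':", looks_biased)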

Postdiction involves explanation after the fact. In skepticism, it is considered an effect of hindsight bias that explains claimed predictions of significant events such as plane crashes and natural disasters. In religious contexts, theologians frequently refer to postdiction using the Latin term vaticinium ex eventu. Through this term, skeptics postulate that many biblical prophecies appearing to have come true may have been written after the events supposedly predicted, or that the text or interpretation may have been modified after the event to fit the facts as they occurred.

Survivorship bias or survival bias is the logical error of concentrating on entities that passed a selection process while overlooking those that did not. This can lead to incorrect conclusions because of incomplete data.

Precision bias, also known as numeracy bias, is a form of cognitive bias in which an evaluator of information commits a logical fallacy as the result of confusing accuracy and precision. More particularly, in assessing the merits of an argument, a measurement, or a report, an observer or assessor falls prey to precision bias when they believe that greater precision implies greater accuracy; the observer or assessor is then said to provide false precision.

Jumping to conclusions is a psychological term referring to a communication obstacle where one "judge[s] or decide[s] something without having all the facts; to reach unwarranted conclusions". In other words, "when I fail to distinguish between what I observed first hand from what I have only inferred or assumed". Because it involves making decisions without having enough information to be sure that one is right, this can give rise to poor or rash decisions that often cause more harm to something than good.

The "hot hand" is a phenomenon, previously considered a cognitive social bias, that a person who experiences a successful outcome has a greater chance of success in further attempts. The concept is often applied to sports and skill-based tasks in general and originates from basketball, where a shooter is more likely to score if their previous attempts were successful; i.e., while having the "hot hand." While previous success at a task can indeed change the psychological attitude and subsequent success rate of a player, researchers for many years did not find evidence for a "hot hand" in practice, dismissing it as fallacious. However, later research questioned whether the belief is indeed a fallacy. Some recent studies using modern statistical analysis have observed evidence for the "hot hand" in some sporting activities; however, other recent studies have not observed evidence of the "hot hand". Moreover, evidence suggests that only a small subset of players may show a "hot hand" and, among those who do, the magnitude of the "hot hand" tends to be small.

The look-elsewhere effect is a phenomenon in the statistical analysis of scientific experiments where an apparently statistically significant observation may have actually arisen by chance because of the sheer size of the parameter space to be searched.

HARKing is an acronym coined by social psychologist Norbert Kerr that refers to the questionable research practice of "presenting a post hoc hypothesis in the introduction of a research report as if it were an a priori hypothesis". Hence, a key characteristic of HARKing is that post hoc hypothesizing is falsely portrayed as a priori hypothesizing. HARKing may occur when a researcher tests an a priori hypothesis but then omits that hypothesis from their research report after they find out the results of their test; inappropriate forms of post hoc analysis or post hoc theorizing then may lead to a post hoc hypothesis.

References

  1. Bennett, Bo, "Texas sharpshooter fallacy", Logically Fallacious, Retrieved 21 October 2014, description: ignoring the difference while focusing on the similarities, thus coming to an inaccurate conclusion
  2. Barry Popik (2013-03-09). "Texas Sharpshooter Fallacy". barrypopik.com. Retrieved 2015-11-10.
  3. Atul Gawande (1999-08-02). "The cancer-cluster myth" (PDF). The New Yorker. Retrieved 2009-10-10.
  4. Carroll, Robert Todd (2003). The Skeptic's Dictionary: a collection of strange beliefs, amusing deceptions, and dangerous delusions. John Wiley & Sons. p. 375. ISBN 0-471-27242-6. Retrieved 2012-03-25. The term refers to the story of the Texan who shoots holes in the side of a barn and then draws a bull's-eye around the bullet holes
  5. Thompson, William C. (July 18, 2009). "Painting the target around the matching profile: the Texas sharpshooter fallacy in forensic DNA interpretation". Law, Probability and Risk. 8 (3): 257–258. doi:10.1093/lpr/mgp013. Retrieved 2012-03-25. Texas sharpshooter fallacy...this article demonstrates how post hoc target shifting occurs and how it can distort the frequency and likelihood ratio statistics used to characterize DNA matches, making matches appear more probative than they actually are.
  6. Feychting, M.; Ahlbom, A. (1993-10-01). "Magnetic fields and cancer in children residing near Swedish high-voltage power lines". American Journal of Epidemiology. 138 (7): 467–481. doi:10.1093/oxfordjournals.aje.a116881. ISSN 0002-9262. PMID 8213751.
  7. Coghlan, Andy. "Swedish studies pinpoint power line cancer link". New Scientist.
  8. "Frontline: previous reports: transcripts: currents of fear". PBS. 1995-06-13. Archived from the original on 2016-02-03. Retrieved 2012-07-03.
  9. "Nostradamus Predicted 9/11?". snopes.com. 12 September 2001. Retrieved 2012-07-03.