In communication studies, science communication, psycholinguistics and choice theory, anecdotal value refers to the primarily social and political value of an anecdote or anecdotal evidence in promoting understanding of a social, cultural, or economic phenomenon. While anecdotal evidence is typically unscientific, in the last several decades the evaluation of anecdotes has received sustained academic scrutiny from economists and scholars such as Felix Salmon, [1] S. G. Checkland (on David Ricardo), Steven Novella, R. Charleton, Hollis Robbins, [2] Kwamena Kwansah-Aidoo, and others. These academics seek to quantify the value of anecdotes, e.g. in promoting public awareness of a disease. More recently, economists studying choice models have begun assessing anecdotal value in the context of framing; Daniel Kahneman and Amos Tversky suggest that choice models may be contingent on stories or anecdotes that frame or influence choice. [3] As an example, consider the quotation widely misattributed to Joseph Stalin: "The death of one man is a tragedy; the death of millions is a statistic." [4] [5]
The term Homo economicus, or economic man, is the portrayal of humans as agents who are consistently rational and narrowly self-interested, and who pursue their subjectively defined ends optimally. It is a wordplay on Homo sapiens, used in some economic theories and in pedagogy.
Daniel Kahneman was an Israeli-American psychologist best known for his work on the psychology of judgment and decision-making as well as behavioral economics, for which he was awarded the 2002 Nobel Memorial Prize in Economic Sciences together with Vernon L. Smith. Kahneman's published empirical findings challenge the assumption of human rationality prevailing in modern economic theory. Kahneman became known as the "grandfather of behavioral economics."
Amos Nathan Tversky was an Israeli cognitive and mathematical psychologist and a key figure in the discovery of systematic human cognitive bias and handling of risk.
Prospect theory is a theory of behavioral economics, judgment and decision making that was developed by Daniel Kahneman and Amos Tversky in 1979. The theory was cited in the decision to award Kahneman the 2002 Nobel Memorial Prize in Economics.
Anecdotal evidence is evidence based on descriptions and reports of individual, personal experiences or observations, collected in a non-systematic manner.
The conjunction fallacy is an inference that a conjoint set of two or more specific conclusions is likelier than any single member of that same set, in violation of the laws of probability. It is a type of formal fallacy.
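The violation of the laws of probability can be made concrete with a short sketch. The numbers below are invented for illustration (loosely following the classic "Linda problem" setup), not drawn from any study; the point is only that a conjunction of events can never be more probable than either event alone.

```python
# Illustrative probabilities (hypothetical, not empirical data):
# A = "Linda is a bank teller", B = "Linda is active in the feminist movement".
p_teller = 0.05                    # P(A)
p_feminist_given_teller = 0.90     # P(B | A)
p_both = p_teller * p_feminist_given_teller  # P(A and B) = P(A) * P(B|A)

# For any events A and B, every outcome in (A and B) is also in A,
# so P(A and B) <= P(A). Judging the conjunction as likelier is the fallacy.
assert p_both <= p_teller
print(p_both, p_teller)  # 0.045 0.05
```

However high P(B | A) is, the conjunction's probability is at most P(A), which is why rating the conjoint description as more likely violates probability theory.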
The simulation heuristic is a psychological heuristic, or simplified mental strategy, according to which people determine the likelihood of an event based on how easy it is to picture the event mentally. Partially as a result, people experience more regret over outcomes that are easier to imagine, such as "near misses". The simulation heuristic was first theorized by psychologists Daniel Kahneman and Amos Tversky as a specialized adaptation of the availability heuristic to explain counterfactual thinking and regret. However, it is not the same as the availability heuristic. Specifically, the simulation heuristic is defined as "how perceivers tend to substitute normal antecedent events for exceptional ones in psychologically 'undoing' this specific outcome."
The Allais paradox is a choice problem designed by Maurice Allais to show an inconsistency between actual observed choices and the predictions of expected utility theory. The paradox demonstrates that individuals' choices are often inconsistent with expected utility theory, particularly when decisions must be made immediately. The observed pattern of choices violates the independence axiom of expected utility theory, which requires that an individual's preference between two lotteries should not change when both are mixed with a common third lottery in the same proportion.
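The inconsistency can be shown with the classic Allais lotteries (payoffs in millions of dollars). Under expected utility with any utility function u, the difference EU(1A) − EU(1B) is algebraically identical to EU(2A) − EU(2B), namely 0.11·u(1) − 0.10·u(5) − 0.01·u(0), so anyone preferring 1A over 1B should also prefer 2A over 2B; yet many people choose 1A and 2B. The square-root utility below is an arbitrary illustrative choice.

```python
# Classic Allais lotteries as (probability, payoff-in-$M) pairs.
def eu(lottery, u):
    """Expected utility of a lottery under utility function u."""
    return sum(p * u(x) for p, x in lottery)

g1a = [(1.00, 1)]                          # 1A: $1M for certain
g1b = [(0.10, 5), (0.89, 1), (0.01, 0)]    # 1B: small chance of nothing
g2a = [(0.11, 1), (0.89, 0)]               # 2A: 11% chance of $1M
g2b = [(0.10, 5), (0.90, 0)]               # 2B: 10% chance of $5M

u = lambda x: x ** 0.5  # any increasing utility works; sqrt is illustrative
d1 = eu(g1a, u) - eu(g1b, u)
d2 = eu(g2a, u) - eu(g2b, u)

# The two preference gaps are identical for every utility function,
# so choosing 1A and 2B together cannot be rationalized by any u.
assert abs(d1 - d2) < 1e-12
```

Because the gap is the same in both problems, the common "1A and 2B" pattern cannot be explained by any expected-utility maximizer, which is exactly the independence-axiom violation the paradox exhibits.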
The disposition effect is an anomaly discovered in behavioral finance. It relates to the tendency of investors to sell assets that have increased in value, while keeping assets that have dropped in value.
The rank-dependent expected utility model is a generalized expected utility model of choice under uncertainty, designed to explain the behaviour observed in the Allais paradox, as well as for the observation that many people both purchase lottery tickets and insure against losses.
In psychology, the human mind is considered to be a cognitive miser due to the tendency of humans to think and solve problems in simpler and less effortful ways rather than in more sophisticated and effortful ways, regardless of intelligence. Just as a miser seeks to avoid spending money, the human mind often seeks to avoid spending cognitive effort. The cognitive miser theory is an umbrella theory of cognition that brings together previous research on heuristics and attributional biases to explain when and why people are cognitive misers.
Reference class forecasting or comparison class forecasting is a method of predicting the future by looking at similar past situations and their outcomes. The theories behind reference class forecasting were developed by Daniel Kahneman and Amos Tversky. The theoretical work helped Kahneman win the Nobel Prize in Economics.
Barbara Tversky is an American psychologist. She is a professor emerita of psychology at Stanford University and a professor of psychology and education at Teachers College, Columbia University. Tversky specializes in cognitive psychology.
Attribute substitution is a psychological process thought to underlie a number of cognitive biases and perceptual illusions. It occurs when an individual has to make a judgment that is computationally complex, and instead substitutes a more easily calculated heuristic attribute. This substitution is thought of as taking place in the automatic intuitive judgment system, rather than the more self-aware reflective system. Hence, when someone tries to answer a difficult question, they may actually answer a related but different question, without realizing that a substitution has taken place. This explains why individuals can be unaware of their own biases, and why biases persist even when the subject is made aware of them. It also explains why human judgments often fail to show regression toward the mean.
Heuristics are the processes by which humans use mental shortcuts to arrive at decisions. They are simple strategies that humans, animals, organizations, and even machines use to quickly form judgments, make decisions, and find solutions to complex problems. Often this involves focusing on the most relevant aspects of a problem or situation to formulate a solution. While heuristic processes are used to find the answers and solutions that are most likely to work or be correct, they are not always right or the most accurate. Judgments and decisions based on heuristics are simply good enough to satisfy a pressing need in situations of uncertainty, where information is incomplete. In that sense they can differ from answers given by logic and probability.
The certainty effect is the disproportionate psychological impact of reducing a probability from certain to merely probable, compared with an equivalent reduction between two uncertain outcomes. It is an idea introduced in prospect theory.
Thinking, Fast and Slow is a 2011 popular science book by psychologist Daniel Kahneman. The book's main thesis is a differentiation between two modes of thought: "System 1" is fast, instinctive and emotional; "System 2" is slower, more deliberative, and more logical.
Insensitivity to sample size is a cognitive bias that occurs when people judge the probability of obtaining a sample statistic without respect to the sample size. For example, in one study, subjects assigned the same probability to the likelihood of obtaining a mean height of above six feet [183 cm] in samples of 10, 100, and 1,000 men. In other words, sample statistics vary more in smaller samples, making extreme values more likely there, but people may not expect this.
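A quick simulation makes the point concrete. The population parameters below (mean 178 cm, standard deviation 7 cm) are invented for illustration, not taken from the cited study; the qualitative result is what matters: the chance of a sample mean above 183 cm shrinks sharply as the sample grows.

```python
import random

random.seed(0)

def prob_mean_above(n, threshold=183.0, mu=178.0, sigma=7.0, trials=5000):
    """Estimate P(sample mean > threshold) for samples of size n,
    drawing heights from an illustrative normal distribution."""
    hits = 0
    for _ in range(trials):
        mean = sum(random.gauss(mu, sigma) for _ in range(n)) / n
        if mean > threshold:
            hits += 1
    return hits / trials

p10 = prob_mean_above(10)
p100 = prob_mean_above(100)

# The sample mean's spread falls like sigma/sqrt(n), so an extreme
# mean is far likelier in the small sample than in the large one.
assert p10 > p100
```

Subjects in the study assigned equal probabilities across sample sizes, whereas the simulation shows those probabilities are in fact very different.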
Extension neglect is a type of cognitive bias which occurs when the sample size is ignored when its determination is relevant. For instance, when reading an article about a scientific study, extension neglect occurs when the reader ignores the number of people involved in the study but still makes inferences about a population based on the sample. In reality, if the sample size is too small, the results risk errors in statistical hypothesis testing. A study based on only a few people may draw invalid conclusions because a single person with exceptionally high or low scores (an outlier) can skew the result, and there are not enough observations for such outliers to average out. But often the sample size is not prominently displayed in science articles, and the reader in this case might still believe the article's conclusion due to extension neglect.
Eldar Shafir is an American behavioral scientist, and the co-author of Scarcity: Why Having Too Little Means So Much. He is the Class of 1987 Professor in Behavioral Science and Public Policy; Professor of Psychology and Public Affairs at Princeton University Department of Psychology and the Princeton School of Public and International Affairs, and Inaugural Director of Princeton’s Kahneman-Treisman Center for Behavioral Science and Public Policy.