The availability heuristic, also known as availability bias, is a mental shortcut that relies on immediate examples that come to a given person's mind when evaluating a specific topic, concept, method, or decision. This heuristic operates on the notion that if something can be recalled, it must be important, or at least more important than alternatives that are not as readily recalled, [1] and it is therefore inherently biased toward recently acquired information. [2] [3]
The mental availability of an action's consequences is positively related to those consequences' perceived magnitude. In other words, the easier it is to recall the consequences of something, the greater those consequences are often perceived to be. Most notably, people often rely on the content of their recall if its implications are not called into question by the difficulty they have in recalling it. [4]
In the late 1960s and early 1970s, Amos Tversky and Daniel Kahneman began work on a series of papers examining "heuristics and biases" used in judgment under uncertainty. Before that, the predominant view in the field of human judgment was that humans are rational actors. Kahneman and Tversky explained that judgment under uncertainty often relies on a limited number of simplifying heuristics rather than extensive algorithmic processing. This idea soon spread beyond academic psychology into law, medicine, and political science. The research questioned the descriptive adequacy of idealized models of judgment and offered insights into the cognitive processes that explain human error without invoking motivated irrationality. [5] One simplifying strategy people may rely on is the tendency to judge the frequency of an event by how many similar instances come to mind. In 1973, Tversky and Kahneman first studied this phenomenon and labeled it the "availability heuristic": a mental shortcut that relies on immediate examples that come to mind when evaluating a specific topic, concept, method, or decision. It follows that people tend to base beliefs about a comparatively distant concept on whatever facts are readily available. Much research has been done on this heuristic, but questions remain about the underlying process. Studies show that manipulations intended to increase the subjective experience of ease of recall are also likely to affect the amount of recall, which makes it difficult to determine whether the obtained estimates of frequency, likelihood, or typicality are based on participants' phenomenal experiences or on a biased sample of recalled information. [5]
However, some textbooks have chosen the latter interpretation, introducing the availability heuristic as the idea that "one's judgments are always based on what comes to mind". [citation needed] For example, if a person is asked whether there are more words in the English language that start with a k or that have k as the third letter, the person will probably be able to think of more words that begin with the letter k and conclude, incorrectly, that k is more frequent as the first letter than as the third. [6] In this Wikipedia article itself, for example, there are multiple instances of words such as "likely", "make", "take", "ask" and indeed "Wikipedia", but (aside from names) only a couple of initial K's: "know" and "key".
Chapman (1967) described a bias in judgments of the frequency with which two events co-occur: the co-occurrence of paired stimuli led participants to overestimate the frequency of the pairings. [7] To test this idea, participants were given information about several hypothetical mental patients. The data for each patient consisted of a clinical diagnosis and a drawing made by the patient. Later, participants estimated the frequency with which each diagnosis had been accompanied by various features of the drawing. The subjects vastly overestimated the frequency of certain co-occurrences (such as suspiciousness and peculiar eyes). This effect was labeled the illusory correlation. Tversky and Kahneman suggested that availability provides a natural account for the illusory-correlation effect: the strength of the association between two events can serve as the basis for judging how frequently they co-occur, so strongly associated events are thought of as having occurred together frequently. [8]
In Tversky and Kahneman's first examination of the availability heuristic, subjects were asked, "If a random word is taken from an English text, is it more likely that the word starts with a K, or that K is the third letter?" They argue that English-speaking people would immediately think of many words that begin with the letter "K" (kangaroo, kitchen, kale), but that it would take a more concentrated effort to think of words in which "K" is the third letter (acknowledge, ask). Results indicated that participants overestimated the number of words that began with the letter "K" and underestimated the number of words that had "K" as the third letter. Tversky and Kahneman concluded that people answer such questions by comparing the availability of the two categories, that is, by assessing how easily they can recall instances of each. Because it is easier to think of words that begin with "K" than of words with "K" as the third letter, people judge words beginning with "K" to be more common. In reality, however, a typical text contains twice as many words that have "K" as the third letter as words that have "K" as the first letter. [8]
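The letter-frequency claim can be checked directly by counting. Below is a minimal Python sketch, assuming a plain-text English corpus is available at a hypothetical path (corpus.txt); running text, rather than a dictionary word list, matches the "random word taken from an English text" framing, and on typical prose the third-letter count should come out higher, consistent with the ratio cited above.

```python
# Minimal sketch: count words with "k" as the first versus third letter.
# corpus.txt is a hypothetical path to a file of ordinary English prose.
import re
from collections import Counter

with open("corpus.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z]+", f.read().lower())

counts = Counter()
for w in words:
    if w.startswith("k"):
        counts["k first"] += 1
    if len(w) >= 3 and w[2] == "k":
        counts["k third"] += 1

print(counts["k first"], "words with initial k")
print(counts["k third"], "words with k as the third letter")
```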
In their seminal paper, Tversky and Kahneman include findings from several other studies which, apart from the "K" study, also show support for the availability heuristic.
Many researchers have attempted to identify the psychological process which creates the availability heuristic.
Tversky and Kahneman argue that the number of examples recalled from memory is used to infer the frequency with which such instances occur. In an experiment to test this explanation, participants listened to lists of names containing either 19 famous women and 20 less famous men, or 19 famous men and 20 less famous women. Subsequently, some participants were asked to recall as many names as possible, whereas others were asked to estimate whether male or female names were more frequent on the list. The famous names were recalled more frequently than the less famous ones, and the majority of participants incorrectly judged that the gender associated with the more famous names had been presented more often than the gender associated with the less famous names. Tversky and Kahneman argue that although the availability heuristic is an effective strategy in many situations when judging probability, its use can lead to predictable patterns of errors. [8]
Schwarz and his colleagues, on the other hand, proposed the ease-of-retrieval explanation, wherein the ease with which examples come to mind, not the number of examples, is used to infer the frequency of a given class. In a study by Schwarz and colleagues to test this explanation, participants were asked to recall either six or twelve examples of their assertive or very unassertive behavior, and were later asked to rate their own assertiveness. Pretesting had indicated that although most participants were capable of generating twelve examples, this was a difficult task. The results indicated that participants rated themselves as more assertive after describing six examples of assertive rather than unassertive behavior, but as less assertive after describing twelve examples of assertive rather than unassertive behavior. The study showed that the extent to which recalled content influenced judgment was determined by the ease with which the content could be brought to mind (it was easier to recall six examples than twelve), rather than by the amount of content brought to mind. [4]
Research by Vaughn (1999) examined the effects of uncertainty on the use of the availability heuristic. College students were asked to list either three or eight different study methods they could use to get an A on their final exams. The researchers also manipulated the time during the semester at which students completed the questionnaire: approximately half of the participants were asked for their study methods during the third week of classes, and the other half were asked on the last day of classes. Participants were then asked to rate how likely they would be to get an A in their easiest and hardest classes, and to rate the difficulty they had experienced in recalling the examples they had listed. The researchers hypothesized that students would use the availability heuristic, based on the study methods they listed, to predict their grade only when asked at the beginning of the semester and only about their hardest final; students were not expected to use it when predicting grades at the end of the semester or for their easiest final, because by then they would be less uncertain about their performance. The results indicated that students did use the availability heuristic, based on the ease of recalling the study methods they listed, when asked at the beginning of the semester about their hardest final: students who listed only three study methods (an easy recall task) predicted a higher grade on their hardest final, whereas students who listed eight study methods had a harder time recalling them and predicted a lower grade. These effects did not appear in the easiest-final condition, because students were certain they would get an A regardless of study method. The results supported the hypothesis and provided evidence that levels of uncertainty affect the use of the availability heuristic. [9]
After seeing news stories about child abductions, people may judge that the likelihood of this event is greater. Media coverage can help fuel this bias through widespread and extensive coverage of unusual events, such as homicides or airline accidents, and less coverage of more routine, less sensational events, such as common diseases or car accidents. For example, when asked to rate the probability of a variety of causes of death, people tend to rate "newsworthy" events as more likely because they can more readily recall an example from memory. [10] Moreover, unusual and vivid events like homicides, shark attacks, or lightning strikes are more often reported in mass media than common, unsensational causes of death like common diseases. [11]
For example, many people think that the likelihood of dying from a shark attack is greater than that of dying from being hit by falling airplane parts, even though more people actually die from falling airplane parts. [12] When a shark attack occurs, the deaths are widely reported in the media, whereas deaths from being hit by falling airplane parts are rarely reported. [13]
In a 2010 study exploring how vivid television portrayals are used when forming social reality judgments, people watching vivid violent media gave higher estimates of the prevalence of crime and police immorality in the real world than those not exposed to vivid television. These results suggest that television violence has a direct causal impact on participants' social reality beliefs: repeated exposure to vivid violence increases people's risk estimates about the prevalence of crime and violence in the real world. [14] Counter to these findings, researchers in a similar study argued that such effects may instead reflect the impact of new information. They tested this by showing movies depicting dramatic risk events and measuring participants' risk assessments afterwards. Contrary to previous research, there were no long-term effects on risk perception from exposure to dramatic movies. However, the study did find evidence of idiosyncratic effects of the movies: people reacted immediately after the movies with enhanced or diminished risk beliefs, which faded after a period of 10 days. [15]
Another measurable effect is the inaccurate estimation of the fraction of deaths caused by terrorism compared to homicides with other causes. [16]
Researchers have examined the role of cognitive heuristics in the AIDS risk-assessment process. In one study, 331 physicians reported their worry about on-the-job HIV exposure and their experience with patients who have HIV. By analyzing the questionnaire responses, the researchers concluded that the availability of AIDS information did not relate strongly to perceived risk. [17]
Participants in a 1992 study read case descriptions of hypothetical patients who varied on their sex and sexual preference. These hypothetical patients showed symptoms that could have been caused by five different diseases (AIDS, leukemia, influenza, meningitis, or appendicitis). Participants were instructed to indicate which disease they thought the patient had and then they rated patient responsibility and interaction desirability. Consistent with the availability heuristic, either the more common (influenza) or the more publicized (AIDS) disease was chosen. [18]
One study sought to analyze the role of the availability heuristic in financial markets. The researchers defined and tested two aspects of the availability heuristic. [19]
On days of substantial stock market moves, abnormal stock price reactions to upgrades are weaker than those to downgrades. These availability effects remain significant even after controlling for event-specific and company-specific factors. [19]
Similarly, research has pointed out that under the availability heuristic, people assess probabilities unreliably, giving more weight to current or easily recalled information instead of processing all relevant information. Since information about the current state of the economy is readily available, researchers used the properties of business cycles to predict availability bias in analysts' growth forecasts. They showed that the availability heuristic plays a role in analysts' forecasts and, through them, influences investment decisions. [20]
In effect, investors use the availability heuristic to make decisions and may thereby obstruct their own investment success. An investor's lingering perception of a dire market environment may cause them to view investment opportunities through an overly negative lens, making it less appealing to take on investment risk, no matter how small the returns on perceived "safe" investments. To illustrate, Franklin Templeton's annual Global Investor Sentiment Survey asked individuals how they believed the S&P 500 Index had performed in 2009, 2010, and 2011. 66 percent of respondents stated that they believed the market was either flat or down in 2009, 48 percent said the same about 2010, and 53 percent said the same about 2011. In reality, the S&P 500 returned 26.5 percent in 2009, 15.1 percent in 2010, and 2.1 percent in 2011, meaning that lingering perceptions based on dramatic, painful events continue to influence decision-making even after those events are over. [21]
Additionally, a study by Hayibor and Wasieleski found that the availability of others who believe that a particular act is morally acceptable is positively related to perceptions of the morality of that act. This suggests that the availability heuristic also affects ethical decision making and ethical behavior in organizations. [22]
A study by Craig R. Fox provides an example of how availability heuristics can work in the classroom. In this study, Fox tested whether difficulty of recall influences judgment, specifically in course evaluations among college students. Two groups completed a course evaluation form. The first group was asked to write two recommended improvements for the course (a relatively easy task) and then two positives about the class. The second group was asked to write ten suggestions for how the professor could improve (a relatively difficult task) and then two positive comments about the course. At the end of the evaluation, both groups rated the course on a scale from one to seven. The results showed that students asked to write ten suggestions (the difficult task) rated the course less harshly, because it was more difficult for them to recall the information; most students in that group did not list more than two suggestions, being unable to recall more instances in which they were dissatisfied with the class. Students given the easier evaluation, with only two complaints, had less difficulty in terms of availability of information, so they rated the course more harshly. [23]
Another study by Marie Geurten sought to test the availability heuristic in young children. Children of varying ages (from 4 to 8 years old) were tasked with generating a list of names, with some being asked for a shorter list and some for a longer list. The study then assessed the children's own impressions of their ability to recall names. Those children who were tasked with generating a shorter list had a higher perception of their ability to recall names than those who were tasked with generating a longer list. According to the study, this suggests that the children based their assessment of their recall abilities on their subjective experience of ease of recall. [24]
The media usually focuses on violent or extreme cases, which are more readily available in the public's mind. This may come into play when the judicial system must evaluate and determine the proper punishment for a crime. In one study, respondents rated how much they agreed with hypothetical laws and policies, such as "Would you support a law that required all offenders convicted of unarmed muggings to serve a minimum prison term of two years?" Participants then read cases and rated each case on several questions about punishment. As hypothesized, respondents more easily recalled stories from long-term memory that involved severe harm, which appeared to push their sentencing choices toward harsher punishments. This effect could be eliminated by adding highly concrete or contextually distinct details to the crime stories about less severe injuries. [25]
A similar study asked jurors and college students to choose sentences in four severe criminal cases in which prison was a possible but not an inevitable sentencing outcome. Respondents answering questions about court performance on a public opinion survey formulated a picture of what the courts do and then evaluated the appropriateness of that behavior. Respondents drew on publicly available information about crime and sentencing, which is incomplete because the news media present a highly selective and non-representative selection of crime, focusing on the violent and extreme rather than the ordinary. This leads most people to think that judges are too lenient. But when asked to choose the punishments, the sentences given by students were equal to or less severe than those given by judges. In other words, the availability heuristic made people believe that judges and jurors were too lenient in the courtroom, yet the participants gave similar sentences when placed in the position of the judge, suggesting that the information they had recalled was not representative. [26]
Researchers in 1989 predicted that mock jurors would rate a witness as more deceptive if the witness testified truthfully before lying than if the witness was caught lying first and then told the truth. If the availability heuristic played a role, the more recent lie would remain in jurors' minds, and they would be more likely to remember the witness lying than telling the truth. To test the hypothesis, 312 university students played the roles of mock jurors and watched a videotape of a witness presenting testimony during a trial. The results confirmed the hypothesis: mock jurors were most influenced by the witness's most recent act. [27]
Previous studies have indicated that explaining a hypothetical event makes the event seem more likely through the creation of causal connections. However, such effects could arise through the use of the availability heuristic; that is, subjective likelihood is increased by an event becoming easier to imagine. [28]
One study asked participants to choose between two illnesses, stroke and asthma, and to say which they thought someone was more likely to die from. The researchers concluded that the answer depended on what experiences were available to the participants: if they knew someone, or had heard of someone, who died from one of the diseases, they perceived that disease as carrying the higher risk of death. [29]
Two studies with 108 undergraduates investigated the impact of vivid information on social judgment and the role of the availability heuristic in mediating vividness effects.
In study 1, subjects listened to a tape recording that described a woman who lived with her 7-year-old son. Subjects then heard arguments about the woman's fitness as a parent and were asked to draw their own conclusions regarding her fitness or unfitness. Concrete and colorful language was found to influence judgments about the woman's fitness as a mother.
In study 2, a series of male and female names was presented to subjects; for each name, subjects were told the university affiliation of the individual (Yale or Stanford). When some names were presented, subjects were simultaneously shown a photograph said to portray the named individual. Subsequently, to assess what subjects could remember (as a measure of availability), each name was presented again, along with the appropriate photograph if one had originally accompanied it. The study considered whether the display or non-display of photographs biased subjects' estimates of the percentage of Yale (vs. Stanford) students in the sample of men and women whose names appeared on the original list, and whether these estimated percentages were causally related to the respondents' memory for the college affiliations of the individual students on the list. The presence of photographs affected judgments about the proportion of male and female students at the two universities. Such effects have typically been attributed to the ready accessibility of vividly presented information in memory, that is, to the availability heuristic.
In both studies, vividness affected both availability (ability to recall) and judgments. However, causal modeling results indicated that the availability heuristic did not play a role in the judgment process. [30]
In general, availability is correlated with ecological frequency, but it is also affected by other factors. Consequently, the reliance on the availability heuristic leads to systematic biases. Such biases are demonstrated in the judged frequency of classes of words, of combinatoric outcomes, and of repeated events. The phenomenon of illusory correlation is explained as an availability bias. [8]
In the original Tversky and Kahneman (1973) research, three major factors are discussed: frequency of repetition, frequency of co-occurrence, and illusory correlation. Frequency of repetition aids the retrieval of relevant instances: the more often an instance is repeated within a category or list, the stronger the link between the instance and the category becomes, and individuals then use the strength of that association to judge the instance's frequency. Frequency of co-occurrence is closely related, in that the more often an item pair is repeated, the stronger the association between the two items becomes, leading to bias when estimating the frequency of co-occurrence. Because of this, illusory correlations also often play a significant role. [8]
Another factor that affects the availability heuristic in judgments of frequency and probability is exemplars, the typical instances that stand out during recall. In one demonstration, participants read a list of names of members of a class for 30 seconds and were then asked the male-to-female ratio of the class, that is, to judge the size of each set. Participants based their answers on the ease with which the names they had seen came to mind, so their estimate depended on which names were recalled as exemplars. If a participant recalled seeing more common male names, such as Jack, while the only female names in the class were uncommon names, such as Deepika, the participant would report that there were more men than women; the opposite would hold if there were more common female names and uncommon male names on the list. Because of the availability heuristic, names that are more readily available are more likely to be recalled, and can thus alter judgments of probability. [31]
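The mechanism can be illustrated with a small simulation. The sketch below uses entirely hypothetical data: a class roster in which each name carries an arbitrary recall weight standing in for how readily it comes to mind. A simulated participant who judges the gender ratio only from the handful of names they happen to recall produces an estimate that tracks the recall weights rather than the true composition of the class.

```python
import random

# Hypothetical roster: (name, gender, recall weight). The weights are arbitrary
# and only stand in for how easily each name comes to mind.
roster = [("Jack", "M", 3.0)] * 10 + [("Deepika", "F", 1.0)] * 15

true_share_of_men = sum(1 for _, g, _ in roster if g == "M") / len(roster)

def availability_estimate(roster, n_recalled=6, trials=10_000):
    """Average gender-ratio estimate of a participant who brings only
    n_recalled names to mind, sampled in proportion to recall weight."""
    genders = [g for _, g, _ in roster]
    weights = [w for _, _, w in roster]
    total = 0.0
    for _ in range(trials):
        recalled = random.choices(genders, weights=weights, k=n_recalled)
        total += recalled.count("M") / n_recalled
    return total / trials

print(f"true share of men in the class:  {true_share_of_men:.2f}")
print(f"availability-based estimate:     {availability_estimate(roster):.2f}")
```

Under these made-up weights the simulated estimate overstates the share of men, simply because the more memorable names dominate recall.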
Another example of the availability heuristic and exemplars would be seeing a shark in the ocean. Seeing a shark has a greater impact on an individual's memory than seeing a dolphin. If someone sees both sharks and dolphins in the ocean, they will be less aware of seeing the dolphins, because the dolphins had less of an impact on their memory. Due to the greater impact of seeing a shark, the availability heuristic can influence the probability judgement of the ratio of sharks and dolphins in the water. Thus, an individual who saw both a shark and a dolphin would assume a higher ratio of sharks in the water, even if there are more dolphins in reality. [31]
One of the earliest and most powerful critiques of the original Tversky and Kahneman [32] study on the availability heuristic was the Schwarz et al. [4] study, which found that ease of recall was a key component in determining whether a concept became available. Many studies since have repeated this criticism, to the point that the ease-of-recall factor has become an integral facet of the availability heuristic itself (see Research section).
Much of the criticism of the availability heuristic has claimed that making use of the content that becomes available in our minds is not based on the ease of recall suggested by Schwarz et al. [4] For example, recalling more words that begin with K than words with K as the third letter could arise from how we categorize and process words in memory: if we categorize words by their first letter and recall them through the same process, this would lend more support to the representativeness heuristic than to the availability heuristic. Based on the possibility of explanations such as these, some researchers have claimed that the classic studies on the availability heuristic are too vague, in that they fail to account for people's underlying mental processes. Indeed, a study conducted by Wanke et al. demonstrated that this scenario can occur in situations used to test the availability heuristic. [33]
A second line of research has shown that frequency estimation may not be the only strategy we use when making frequency judgments. Recent work indicates that situational working memory can access long-term memories, and that this retrieval process allows more accurate probability judgments. [34]
A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own "subjective reality" from their perception of the input. An individual's construction of reality, not the objective input, may dictate their behavior in the world. Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, and irrationality.
A heuristic or heuristic technique is any approach to problem solving that employs a pragmatic method that is not fully optimized, perfected, or rationalized, but is nevertheless "good enough" as an approximation or attribute substitution. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision.
Heuristic reasoning is often based on induction, or on analogy ... Induction is the process of discovering general laws ... Induction tries to find regularity and coherence ... Its most conspicuous instruments are generalization, specialization, analogy. [...] Heuristic discusses human behavior in the face of problems [... that have been] preserved in the wisdom of proverbs.
Hindsight bias, also known as the knew-it-all-along phenomenon or creeping determinism, is the common tendency for people to perceive past events as having been more predictable than they were.
The representativeness heuristic is used when making judgments about the probability of an event based on how representative it is, in character and essence, of a known prototypical event. It is one of a group of heuristics proposed by psychologists Amos Tversky and Daniel Kahneman in the early 1970s, defined as "the degree to which [an event] (i) is similar in essential characteristics to its parent population, and (ii) reflects the salient features of the process by which it is generated". The representativeness heuristic works by comparing an event to a prototype or stereotype that we already have in mind. For example, if we see a person who is dressed in eccentric clothes and reading a poetry book, we might be more likely to think that they are a poet than an accountant, because the person's appearance and behavior are more representative of the stereotype of a poet than of an accountant.
Trait ascription bias is the tendency for people to view themselves as relatively variable in terms of personality, behavior and mood while viewing others as much more predictable in their personal traits across different situations. More specifically, it is a tendency to describe one's own behaviour in terms of situational factors while preferring to describe another's behaviour by ascribing fixed dispositions to their personality. This may occur because people's own internal states are more readily observable and available to them than those of others.
The conjunction fallacy is an inference that a conjoint set of two or more specific conclusions is likelier than any single member of that same set, in violation of the laws of probability. It is a type of formal fallacy.
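Formally, for any two events A and B, the laws of probability require Pr(A ∩ B) ≤ min(Pr(A), Pr(B)), so judging a conjoint description to be more probable than one of its individual components violates this inequality.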
The anchoring effect is a psychological phenomenon in which an individual's judgments or decisions are influenced by a reference point or "anchor", which can be completely irrelevant. Both numeric and non-numeric anchoring have been reported in research. In numeric anchoring, once the value of the anchor is set, subsequent arguments, estimates, and so on made by an individual may differ from what they would have been without the anchor. For example, an individual may be more likely to purchase a car if it is placed alongside a more expensive model; prices discussed in negotiations that are lower than the anchor may seem reasonable, perhaps even cheap, to the buyer, even if those prices are still higher than the car's actual market value. In another example, when estimating the orbit of Mars, one might start with the Earth's orbit and then adjust upward until reaching a value that seems reasonable.
The planning fallacy is a phenomenon in which predictions about how much time will be needed to complete a future task display an optimism bias and underestimate the time needed. This phenomenon sometimes occurs regardless of the individual's knowledge that past tasks of a similar nature have taken longer to complete than generally planned. The bias affects predictions only about one's own tasks. On the other hand, when outside observers predict task completion times, they tend to exhibit a pessimistic bias, overestimating the time needed. The planning fallacy involves estimates of task completion times more optimistic than those encountered in similar projects in the past.
The simulation heuristic is a psychological heuristic, or simplified mental strategy, according to which people determine the likelihood of an event based on how easy it is to picture the event mentally. Partially as a result, people experience more regret over outcomes that are easier to imagine, such as "near misses". The simulation heuristic was first theorized by psychologists Daniel Kahneman and Amos Tversky as a specialized adaptation of the availability heuristic to explain counterfactual thinking and regret. However, it is not the same as the availability heuristic. Specifically the simulation heuristic is defined as "how perceivers tend to substitute normal antecedent events for exceptional ones in psychologically 'undoing' this specific outcome."
The affect heuristic is a heuristic, a mental shortcut that allows people to make decisions and solve problems quickly and efficiently, in which current emotion—fear, pleasure, surprise, etc.—influences decisions. In other words, it is a type of heuristic in which emotional response, or "affect" in psychological terms, plays a lead role. It is a subconscious process that shortens the decision-making process and allows people to function without having to complete an extensive search for information. It is shorter in duration than a mood, occurring rapidly and involuntarily in response to a stimulus. Reading the words "lung cancer" usually generates an affect of dread, while reading the words "mother's love" usually generates a feeling of affection and comfort. The affect heuristic is typically used while judging the risks and benefits of something, depending on the positive or negative feelings that people associate with a stimulus. It is the equivalent of "going with your gut". If their feelings towards an activity are positive, then people are more likely to judge the risks as low and the benefits high. On the other hand, if their feelings towards an activity are negative, they are more likely to perceive the risks as high and benefits low.
The overconfidence effect is a well-established bias in which a person's subjective confidence in their judgments is reliably greater than the objective accuracy of those judgments, especially when confidence is relatively high. Overconfidence is one example of a miscalibration of subjective probabilities. Throughout the research literature, overconfidence has been defined in three distinct ways: (1) overestimation of one's actual performance; (2) overplacement of one's performance relative to others; and (3) overprecision in expressing unwarranted certainty in the accuracy of one's beliefs.
Optimism bias or optimistic bias is a cognitive bias that causes someone to believe that they themselves are less likely to experience a negative event. It is also known as unrealistic optimism or comparative optimism. It is common and transcends gender, ethnicity, nationality, and age. Autistic people are less susceptible to this kind of bias. It has also been reported in other animals, such as rats and birds.
In psychology, a heuristic is an easy-to-compute procedure or rule of thumb that people use when forming beliefs, judgments or decisions. The familiarity heuristic was developed based on the discovery of the availability heuristic by psychologists Amos Tversky and Daniel Kahneman; it happens when the familiar is favored over novel places, people, or things. The familiarity heuristic can be applied to various situations that individuals experience in day-to-day life. When these situations appear similar to previous situations, especially if the individuals are experiencing a high cognitive load, they may regress to the state of mind in which they have felt or behaved before. This heuristic is useful in most situations and can be applied to many fields of knowledge; however, there are both positives and negatives to this heuristic as well.
In psychology, the human mind is considered to be a cognitive miser due to the tendency of humans to think and solve problems in simpler and less effortful ways rather than in more sophisticated and effortful ways, regardless of intelligence. Just as a miser seeks to avoid spending money, the human mind often seeks to avoid spending cognitive effort. The cognitive miser theory is an umbrella theory of cognition that brings together previous research on heuristics and attributional biases to explain when and why people are cognitive misers.
Attribute substitution is a psychological process thought to underlie a number of cognitive biases and perceptual illusions. It occurs when an individual has to make a judgment that is computationally complex, and instead substitutes a more easily calculated heuristic attribute. This substitution is thought of as taking place in the automatic intuitive judgment system, rather than the more self-aware reflective system. Hence, when someone tries to answer a difficult question, they may actually answer a related but different question, without realizing that a substitution has taken place. This explains why individuals can be unaware of their own biases, and why biases persist even when the subject is made aware of them. It also explains why human judgments often fail to show regression toward the mean.
Heuristics is the process by which humans use mental shortcuts to arrive at decisions. Heuristics are simple strategies that humans, animals, organizations, and even machines use to quickly form judgments, make decisions, and find solutions to complex problems. Often this involves focusing on the most relevant aspects of a problem or situation to formulate a solution. While heuristic processes are used to find the answers and solutions that are most likely to work or be correct, they are not always right or the most accurate. Judgments and decisions based on heuristics are simply good enough to satisfy a pressing need in situations of uncertainty, where information is incomplete. In that sense they can differ from answers given by logic and probability.
Thinking, Fast and Slow is a 2011 popular science book by psychologist Daniel Kahneman. The book's main thesis is a differentiation between two modes of thought: "System 1" is fast, instinctive and emotional; "System 2" is slower, more deliberative, and more logical.
Insensitivity to sample size is a cognitive bias that occurs when people judge the probability of obtaining a sample statistic without respect to the sample size. For example, in one study, subjects assigned the same probability to the likelihood of obtaining a mean height of above six feet [183 cm] in samples of 10, 100, and 1,000 men. In other words, variation is more likely in smaller samples, but people may not expect this.
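As a rough illustration of why sample size matters here, the following sketch assumes a hypothetical height distribution (mean 70 inches, standard deviation 3 inches; both numbers are illustrative, not values from the study) and computes the probability that the sample mean exceeds six feet for different sample sizes, using the fact that the standard error of the mean shrinks as 1/√n.

```python
# Sketch: P(sample mean height > 72 in) under an assumed Normal(70, 3) population.
# The population parameters are illustrative assumptions, not values from the study.
from math import sqrt
from scipy.stats import norm

MU, SIGMA, SIX_FEET = 70.0, 3.0, 72.0

for n in (10, 100, 1000):
    standard_error = SIGMA / sqrt(n)                      # spread of the sampling distribution
    p = norm.sf(SIX_FEET, loc=MU, scale=standard_error)   # upper-tail probability
    print(f"n = {n:4d}: P(mean > 6 ft) ≈ {p:.2e}")
```

Under these assumptions the probability falls from roughly 2 percent at n = 10 to essentially zero at n = 1,000, which is exactly the variation that subjects in the study failed to anticipate.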
Illusion of validity is a cognitive bias in which a person overestimates their ability to interpret and predict accurately the outcome when analyzing a set of data, in particular when the data analyzed show a very consistent pattern—that is, when the data "tell" a coherent story.
Intuition in the context of decision-making is defined as a "non-sequential information-processing mode." It is distinct from insight and can be contrasted with the deliberative style of decision-making. Intuition can influence judgment through either emotion or cognition, and there has been some suggestion that it may be a means of bridging the two. Individuals use intuition and more deliberative decision-making styles interchangeably, but there has been some evidence that people tend to gravitate to one or the other style more naturally. People in a good mood gravitate toward intuitive styles, while people in a bad mood tend to become more deliberative. The specific ways in which intuition actually influences decisions remain poorly understood.