Action bias is the psychological tendency to favor action over inaction, even when there is no indication that acting will lead to a better result. It is an automatic response, similar to a reflex or an impulse, and is not based on rational thinking. One of the first appearances of the term "action bias" in scientific journals was in a 2000 paper by Patt and Zeckhauser titled "Action Bias and Environmental Decisions", which examined its relevance to environmental policy. [1]
People tend to have a preference for well-justified actions. The term "action bias" refers to the subset of voluntary actions that one takes even when there is no explicitly good reason to do so. [2] When a decision has both positive and negative potential outcomes, people act in pursuit of the apparently advantageous final result rather than remain inactive; losses or adverse redistributions of resources that accompany the gains tend to be neglected in the decision-making process. [2] Its opposite is the omission bias. [3]
Multiple theories as to why people prefer action over inaction have been suggested. Humans may be naturally inclined to act because action is perceived as the most beneficial option, even though acting can occasionally worsen the outcome. [2] Inaction may be perceived as an inferior alternative to action. This view can be explained from an evolutionary perspective: early action proved adaptive for survival and so became a reinforced behavioral pattern. Even though living circumstances have changed beyond the need to prefer action over inaction to ensure survival, the bias persists in modern society because actions produce visible positive outcomes more often than omissions do, a link which becomes reinforced. [1]
There is a general tendency to reward action and punish inaction. [1] As shown by operant conditioning, rewards are more effective at increasing a behavior than punishments are at decreasing one, which leads humans to choose action over inaction. Engaging in action can also serve as a means of signalling and emphasizing one's productivity to others, which is rewarded by societal praise more than positive results originating from inactivity. Action also gives the doer the impression of having control over a situation, which creates a feeling of personal security. [1] Inaction, in contrast, is more readily linked with feelings of regret in the face of the absence of praise and even possible punishment. [4] The outcome associated with each action or inaction also affects future decisions, since the link is inevitably and immediately reinforced or punished each time a behavior is carried out; only a neutral outcome does not contribute to learning. [1] [2]
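The reinforcement dynamic described above can be sketched as a toy model. Everything in it is illustrative: the behaviors, reward values, and learning rate are hypothetical choices, not figures from the cited studies.

```python
# Toy model of the reinforcement loop (all numbers hypothetical): a rewarded
# behavior gains value with each repetition, while a neutral outcome leaves
# a behavior's value unchanged, so no learning occurs for it.

def reinforce(values, behavior, reward, alpha=0.2):
    """Nudge the learned value of `behavior` toward the received `reward`."""
    values[behavior] += alpha * (reward - values[behavior])

values = {"act": 0.0, "wait": 0.0}

# Suppose acting draws social praise (reward 1.0) while inaction yields a
# neutral outcome (reward 0.0).
for _ in range(20):
    reinforce(values, "act", reward=1.0)
    reinforce(values, "wait", reward=0.0)

preferred = max(values, key=values.get)
print(values, preferred)  # "act" rises toward 1.0; "wait" stays at 0.0
```

The asymmetry the paragraph describes falls out directly: the behavior that ever earns a reward accumulates value and is chosen thereafter, while the neutrally rewarded one never does.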
Another reason for the existence of the bias might be that people develop the decision heuristic of taking action but then transfer it to an inappropriate context, resulting in action bias. [2]
In politics, action bias manifests when politicians wish to appear active on issues such as global warming without taking substantive action: they make statements, which are not themselves actions, and offer relatively ineffective proposals and implementations. Actions and promises of future action are taken not primarily to bring about impactful change, but to signal that one is working on the problem and making progress. [1] [2] The symbolic power and external image of the action often outweigh its actual benefit for change. [1] [2]
In medicine, action bias can occur in diagnosis and subsequent treatment, a problem caused in part by rigid diagnostic criteria. If a patient does not meet enough criteria, or only barely meets them, a premature diagnosis or misdiagnosis may result, [3] leaving the patient without satisfactory or needed treatment. One way to counteract the action bias is to use a broader range of tests or to obtain a second opinion from colleagues and technical experts in relevant fields before making a final diagnosis. [3] In medical decision-making, professionals are predisposed to intervene even when not intervening would be the better option. Here the action bias takes the name of intervention bias, and its existence has been documented by many studies in the medical community. [5]
Action bias occurs among patients as well. When a physician presents taking medicine and simply resting as equivalent options, most patients strongly prefer taking the medicine. This preference prevails even when patients are warned that the medicine could cause side effects, or when they are explicitly told that taking it would have no effect. [6]
The causes of intervention bias in medicine are most likely an interplay of two other biases researched in humans: self-interest bias and confirmation bias. [5] Another reason for intervention bias is fear of malpractice litigation, as charges can be pressed against a physician who fails to act. [5]
The self-interest bias occurs when a person shows self-serving behaviors and justifies them in favor of their own interests. Medical intervention is partly guided by the financial self-interest of practitioners and the health-care industry. Industry-sponsored studies and analyses can lead to conflicts of interest and biased interpretations of the results; specialists then make questionable decisions and defend already biased information. [5] Doctors also seem to be more satisfied when they are more involved in their patients' treatment, which links the amount of intervention to career happiness and personal gratification. [5] Confirmation bias influences decision-making because sources that confirm one's pre-existing hypotheses are incorporated more readily than challenging ideas: studies and assessments that justify and promote medical intervention are given more emphasis, while data that contradict the practitioner's assumptions are either ignored or discounted in favor of the practitioner's own experience and evaluation. [5]
Due to the action bias, medical intervention becomes less objective: the physician's primary focus may no longer be the best possible therapy for the patient, and therapies may be implemented without proper, tailored testing. [5] [7] Other consequences include incorrect and biased medical advice, physical harm to the patient, and the collapse of health care systems. [5] Although physicians could also wait to see whether the symptoms subside or intensify and then perform a follow-up check, which would be temporary inaction, direct testing and prescription of medication are instead the norm. [3]
According to some psychologists, the goalkeeper shows an action bias in over 90 percent of the penalty kicks in soccer by diving to either the left or the right. These theorists assert that it is more effective to stand still, or to wait and see which direction the ball is kicked before moving, because guessing wrong will almost guarantee giving up a goal. [8] Researchers surmise that goalkeepers take the risk of guessing because "action" is preferred by their teammates, and success will bring social recognition and other rewards. [8]
However, this analysis ignores game theory and the dynamics of the sport. Because the penalty spot is only 12 yards away, the goalposts are 24 feet apart, the crossbar is 8 feet high, and the ball will be struck with great force, the goalkeeper cannot stand still and wait for the ball to be struck, because they will not have time to reach it. The ball could go to any of the four corners. To have a chance to make a save, the optimum strategy is to guess the target location and begin moving before the opponent's foot touches the ball. This context negates the claim of bias, because the goalkeeper is expected to put in the visible effort to make a save and actively prevent a goal, rather than arrive too late by waiting for directional certainty. Some penalty takers counter this strategy by rolling or chipping the ball down the middle, which is called a Panenka penalty, after a Czech player who made it famous at the UEFA Euro 1976 final.
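The disagreement between the two positions above ultimately turns on assumed probabilities. A minimal expected-value sketch, with entirely hypothetical numbers (real penalty statistics differ), shows how the comparison works:

```python
# Hypothetical numbers only: which goalkeeper strategy wins depends entirely
# on the assumed shot distribution and save probabilities, which is exactly
# what the two positions above dispute.

shot_dist = {"left": 0.4, "middle": 0.2, "right": 0.4}  # assumed kick directions

# Assumed P(save | strategy, shot direction):
save_prob = {
    "dive_left":  {"left": 0.5, "middle": 0.0, "right": 0.0},
    "stay":       {"left": 0.0, "middle": 0.6, "right": 0.0},
    "dive_right": {"left": 0.0, "middle": 0.0, "right": 0.5},
}

def expected_save_rate(strategy):
    """Average the save probability over where the ball might go."""
    return sum(p * save_prob[strategy][d] for d, p in shot_dist.items())

for strategy in save_prob:
    print(strategy, round(expected_save_rate(strategy), 2))
# With these assumptions diving wins (0.20 vs. 0.12 for staying); shift
# enough probability mass toward "middle" and staying wins instead.
```

Neither side of the debate can be settled by the arithmetic alone; the empirical question is what the true distribution and save probabilities are.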
Action bias is also influenced by previous outcomes. If a team loses a match, the coach is more likely to choose action by changing some of the players, than inaction, even though this might not necessarily lead to a better performance. [4] As expressed by one coach, “Just because I can do something doesn’t mean that I should, or that that activity is relevant.” [9]
Action bias also influences decision-making in economics and management. In an economic downturn, central banks and governments face pressure to take action as they come under increased public scrutiny. Because they are expected to fix the situation, action is seen as more appropriate than inaction; even if the outcome is unsuccessful, taking action lets public figures avoid criticism more easily. [8] In periods of good economic performance, the authorities lean instead toward an omission bias, as they do not wish to be accused of wrong choices that might destroy the current equilibrium. [8] The action/omission bias can be seen in similar scenarios such as investors changing their portfolios, companies switching strategy, applying for a different job, or moving to a different city. At the macroeconomic level, the action/omission bias comes into play in discussions of changes to policy variables such as interest rates, tax rates, and various types of expenditure. [8]
The effect of action bias on environmental policy decisions has been investigated by Anthony Patt and Richard Zeckhauser. They argued that action bias is especially likely to lead to nonrational decision-making in this domain because of uncertainty and the delayed effects of actions, contributions from many parties, the absence of effective markets, unclear objectives, and few strong incentives. [2] The study concluded that the value of a decision is influenced by one's perceived involvement, individual susceptibility to action bias, and framing and context, leading to the occurrence of action bias in environmental policies. [2]
The utility-based action bias is a type of action bias that underlies purposive behavior. It works by predicting and comparing the utility values of the possible effects of different actions and selecting the action with the greatest chance of reward. [10] Its advantage is that it finds the most beneficial option available in the environment; its main disadvantage is that the subject must probe the environment through trial and error to identify the utility value of each action. [10] When unfamiliar with a new environment, a person will often choose an action that proved advantageous in a previous situation. If that action is not suitable in the current scenario, its utility value decreases and the person opts for a different action, even though changing strategy might not be entirely beneficial. [10] The utility-based action bias is the opposite of goal-based action selection, which aims at completing a goal without considering the utility value of the actions performed. Unlike in the utility-based action bias, not all possible actions are compared: once an action that leads to the goal is found, the other options are disregarded, making it a less time-consuming strategy. [10]
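The contrast between the two selection modes can be sketched as follows. The action names, utility values, and goal flags are hypothetical, chosen only to make the mechanism visible.

```python
# Illustrative sketch (action names and utilities hypothetical).
# Utility-based selection compares the predicted utility of every option;
# goal-based selection stops at the first action that reaches the goal.

actions = ["escalate", "negotiate", "wait", "delegate"]
utility = {"escalate": 0.4, "negotiate": 0.7, "wait": 0.1, "delegate": 0.6}
reaches_goal = {"escalate": True, "negotiate": True, "wait": False, "delegate": True}

def utility_based(actions, utility):
    # Compare every action's predicted utility and choose the maximum.
    return max(actions, key=lambda a: utility[a])

def goal_based(actions, reaches_goal):
    # Take the first action that achieves the goal; disregard the rest.
    for a in actions:
        if reaches_goal[a]:
            return a

print(utility_based(actions, utility))    # 'negotiate' (highest utility)
print(goal_based(actions, reaches_goal))  # 'escalate' (first that works)
```

The example mirrors the trade-off in the text: the utility-based chooser inspects all four options before committing, while the goal-based chooser returns as soon as one workable action is found, saving time at the cost of possibly settling for a lower-utility action.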
The term single-action bias was coined by Elke U. Weber after observing farmers' reactions to climate change. Decision-makers tend to take one action to lower a risk they are concerned about but are much less likely to take additional steps that would provide further risk reduction, and the single action taken is not always the most effective one. Although the reason for this phenomenon is not yet fully confirmed, the first action presumably suffices to reduce the feeling of worry, which is why further action is often not taken. [11] For example, Weber found that farmers in the early 1990s who started to worry about the consequences of global warming either changed something in their production practices or pricing, or lobbied for government intervention; they generally did not engage in more than one of those actions. This again suggests that undertaking a single action fulfills one's need to do something and can thereby prevent further action. [11] In the end, the single-action bias improves a person's self-image and eliminates cognitive dissonance by giving the false impression that they have been contributing to the greater good. [12]
Another example of single-action bias involves homeowners living in coastal regions likely to be flooded due to sea level rise (SLR). They can take small actions, such as stockpiling resources or preparing sandbags in case of flooding, or bigger actions, such as taking out flood insurance, elevating their homes, or moving to a region less at risk of flooding. The first, smaller action taken (making sandbags) relieves their anxiety about possible flooding and thereby makes them less likely to take actions with a better long-run outcome, such as moving to another region. [13] One way to eliminate the single-action bias is to hold group discussions in which people suggest different ideas for a solution, giving the individual more alternatives for solving the problem. [14]
Awareness of the action bias makes it possible to think carefully about the consequences of action versus inaction in a given situation. The decision process then becomes less impulsive and more deliberate, which facilitates choosing the most efficient outcome. Inaction, in some situations, can enhance patience and self-control. [1] New contexts that encourage making thoughtful decisions or surveying the full range of possibilities can also be beneficial. [1] [15]
In medical contexts, full disclosure about the effects of action, especially the negative side effects of medication, and of inaction during treatment can weaken the action bias. The percentage of patients choosing medication drops even lower (10%) when a doctor actively discourages its use. [6]