Decision field theory (DFT) is a dynamic-cognitive approach to human decision making. It is a cognitive model that describes how people actually make decisions rather than a rational or normative theory that prescribes what people should or ought to do. It is also a dynamic model of decision-making rather than a static model, because it describes how a person's preferences evolve across time until a decision is reached rather than assuming a fixed state of preference. The preference evolution process is mathematically represented as a stochastic process called a diffusion process. It is used to predict how humans make decisions under uncertainty, how decisions change under time pressure, and how choice context changes preferences. This model can be used to predict not only the choices that are made but also decision or response times.
The paper "Decision Field Theory" was published by Jerome R. Busemeyer and James T. Townsend in 1993. [1] [2] [3] [4] DFT has been shown to account for many puzzling findings regarding human choice behavior, including violations of stochastic dominance, violations of strong stochastic transitivity, [5] [6] [7] violations of independence between alternatives, serial-position effects on preference, speed–accuracy tradeoff effects, the inverse relation between probability and decision time, changes in decisions under time pressure, and preference reversals between choices and prices. DFT also offers a bridge to neuroscience. [8] Recently, the authors of decision field theory have also begun exploring a new theoretical direction called quantum cognition.
The name decision field theory was chosen to reflect the fact that the inspiration for the theory comes from an earlier approach: the avoidance conflict model contained in Kurt Lewin's general psychological theory, which he called field theory. DFT is a member of a general class of sequential sampling models that are commonly used in a variety of fields in cognition. [9] [10] [11] [12] [13] [14] [15]
The basic ideas underlying the decision process for sequential sampling models are illustrated in Figure 1 below. Suppose the decision maker is initially presented with a choice between three risky prospects, A, B, and C, at time t = 0. The horizontal axis on the figure represents deliberation time (in seconds), and the vertical axis represents preference strength. Each trajectory in the figure represents the preference state for one of the risky prospects at each moment in time. [4]
Intuitively, at each moment in time, the decision maker thinks about various payoffs of each prospect, which produces an affective reaction, or valence, to each prospect. These valences are integrated across time to produce the preference state at each moment. In this example, during the early stages of processing (between 200 and 300 ms), attention is focused on advantages favoring prospect C, but later (after 600 ms) attention is shifted toward advantages favoring prospect A. The stopping rule for this process is controlled by a threshold (which is set equal to 1.0 in this example): the first prospect to reach the top threshold is accepted, which in this case is prospect A after about two seconds. Choice probability is determined by the first option to win the race and cross the upper threshold, and decision time is equal to the deliberation time required by one of the prospects to reach this threshold. [4]
The threshold is an important parameter for controlling speed–accuracy tradeoffs. If the threshold were set to a lower value (about .30) in Figure 1, then prospect C would be chosen instead of prospect A (and chosen earlier). Thus decisions can reverse under time pressure. [16] High thresholds require a strong preference state to be reached, which allows more information about the prospects to be sampled, prolonging the deliberation process and increasing accuracy. Low thresholds allow a weak preference state to determine the decision, which cuts off the sampling of information about the prospects, shortening the deliberation process and decreasing accuracy. Under high time pressure, decision makers must use a low threshold; under low time pressure, a higher threshold can be used to increase accuracy. Very careful and deliberative decision makers tend to use a high threshold, whereas impulsive and careless decision makers use a low threshold. [4]

To give a more formal description of the theory, assume that the decision maker faces a choice among three actions, and suppose for simplicity that there are only four possible final outcomes. Each action is then defined by a probability distribution across these four outcomes. The affective value produced by payoff j is represented by mj. At any moment in time, the decision maker anticipates the payoff of each action, which produces a momentary evaluation, Ui(t), for action i. This momentary evaluation is an attention-weighted average of the affective evaluations of the payoffs: Ui(t) = Σj Wij(t)·mj. The attention weight at time t, Wij(t), for payoff j offered by action i, is assumed to fluctuate according to a stationary stochastic process. This reflects the idea that attention shifts from moment to moment, causing changes in the anticipated payoff of each action across time.
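The attention-weighted average Ui(t) = Σj Wij(t)·mj can be sketched in code. This is a minimal illustration, assuming (purely for the example) that at each instant attention rests on a single anticipated payoff of each action, sampled from that action's payoff distribution; the affective values and payoff distributions below are hypothetical.

```python
import random

# Hypothetical affective values m_j for the four possible payoffs.
m = [1.0, 0.5, -0.5, -1.0]

# Each action's probability distribution over the four payoffs (rows: actions).
payoff_probs = [
    [0.5, 0.0, 0.0, 0.5],      # a risky action: extreme payoffs
    [0.0, 0.5, 0.5, 0.0],      # a safer action: moderate payoffs
    [0.25, 0.25, 0.25, 0.25],  # a mixed action
]

def momentary_evaluations():
    """One draw of U_i(t) = sum_j W_ij(t) * m_j.

    Attention W_ij(t) is modeled crudely here: at each instant the decision
    maker attends to one anticipated payoff of each action, sampled from that
    action's payoff distribution (weight 1 on one payoff, 0 on the others).
    """
    U = []
    for probs in payoff_probs:
        j = random.choices(range(len(m)), weights=probs)[0]
        U.append(m[j])
    return U

print(momentary_evaluations())  # output varies run to run, e.g. [1.0, -0.5, 0.5]
```

Repeating this draw at each time step produces the moment-to-moment fluctuation in anticipated payoffs that the theory describes.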
The momentary evaluation of each action is compared with those of the other actions to form a valence for each action at each moment, vi(t) = Ui(t) − U.(t), where U.(t) equals the average of the momentary evaluations across all actions. The valence represents the momentary advantage or disadvantage of each action. The total valence balances out to zero, so all the options cannot become attractive simultaneously. Finally, the valences are the inputs to a dynamic system that integrates them over time to generate the output preference states. The output preference state for action i at time t is symbolized as Pi(t). The dynamic system is described by the following linear stochastic difference equation for a small time step h in the deliberation process: Pi(t+h) = Σj sij·Pj(t) + vi(t+h). The positive self-feedback coefficient, sii = s > 0, controls the memory for past input valences for a preference state. Values of sii < 1 produce decay in the memory, or impact, of previous valences over time, whereas values of sii > 1 produce growth in impact over time (primacy effects). The negative lateral-feedback coefficients, sij = sji < 0 for i ≠ j, produce competition among actions, so that the strong inhibit the weak. In other words, as preference for one action grows stronger, it moderates the preference for the other actions. The magnitudes of the lateral inhibitory coefficients are assumed to be an increasing function of the similarity between choice options. These lateral inhibitory coefficients are important for explaining the context effects on preference described later. Formally, this is a Markov process; matrix formulas have been mathematically derived for computing the choice probabilities and the distribution of choice response times. [4]
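Putting the pieces together, a single deliberation can be simulated as a race to threshold. The sketch below uses illustrative, unfitted parameter values and a crude one-payoff-at-a-time stand-in for the attention process Wij(t); it is a conceptual demonstration of the difference equation Pi(t+h) = Σj sij·Pj(t) + vi(t+h), not the published implementation.

```python
import random

def simulate_dft(m, payoff_probs, S, threshold=1.0, max_steps=10_000, rng=None):
    """Simulate one DFT deliberation: P(t+h) = S @ P(t) + v(t+h).

    m            : affective values of the possible payoffs
    payoff_probs : payoff_probs[i][j] = prob. that action i yields payoff j
                   (used as a simple stand-in for the attention process W_ij(t))
    S            : feedback matrix (positive diagonal, negative off-diagonal)
    Returns (chosen_action, number_of_steps), or (None, max_steps) if no
    preference state reaches the threshold in time.
    """
    rng = rng or random.Random()
    n = len(payoff_probs)
    P = [0.0] * n
    for step in range(1, max_steps + 1):
        # Momentary evaluations U_i(t): attend to one payoff per action.
        U = [m[rng.choices(range(len(m)), weights=p)[0]] for p in payoff_probs]
        mean_U = sum(U) / n
        v = [u - mean_U for u in U]            # valences sum to zero
        # Linear stochastic difference equation with lateral inhibition.
        P = [sum(S[i][k] * P[k] for k in range(n)) + v[i] for i in range(n)]
        winner = max(range(n), key=lambda i: P[i])
        if P[winner] >= threshold:             # first to cross wins the race
            return winner, step
    return None, max_steps

# Illustrative parameters (not fitted to data): mild decay, weak lateral inhibition.
S = [[0.95, -0.02, -0.02],
     [-0.02, 0.95, -0.02],
     [-0.02, -0.02, 0.95]]
m = [1.0, 0.5, -0.5, -1.0]
payoff_probs = [[0.6, 0.0, 0.0, 0.4],
                [0.0, 0.6, 0.4, 0.0],
                [0.3, 0.3, 0.2, 0.2]]
choice, steps = simulate_dft(m, payoff_probs, S, threshold=2.0,
                             rng=random.Random(7))
print(f"chose action {choice} after {steps} steps")
```

Running many such simulations yields both choice probabilities and a distribution of decision times, the two quantities the theory is designed to predict.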
The decision field theory can also be seen as a dynamic and stochastic random walk theory of decision making, presented as a model positioned between lower-level neural activation patterns and more complex notions of decision making found in psychology and economics. [4]
The DFT is capable of explaining context effects that many decision making theories are unable to explain. [17]
Many classic probabilistic models of choice satisfy two rational choice principles. The first principle is called independence of irrelevant alternatives: if the probability of choosing option X exceeds that of option Y when only X and Y are available, then X should remain more likely to be chosen than Y even when a new option Z is added to the choice set. In other words, adding an option should not change the preference relation between the original pair of options. The second principle is called regularity: the probability of choosing option X from a set containing only X and Y should be greater than or equal to the probability of choosing X from a larger set containing X, Y, and a new option Z. In other words, adding an option should only decrease the probability of choosing one of the original pair of options. However, consumer researchers studying human choice behavior have found context effects that systematically violate both of these principles.
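Both principles are easy to state as checks on observed choice probabilities. A minimal sketch (the option labels and the probability values are hypothetical, chosen to show the attraction-effect pattern described below):

```python
def check_choice_principles(p_binary, p_triadic):
    """Check independence of irrelevant alternatives (IIA) and regularity.

    p_binary  : choice probabilities for the set {X, Y}, e.g. {"X": 0.48, "Y": 0.52}
    p_triadic : choice probabilities for the enlarged set {X, Y, new option}.
    Returns flags indicating which principle the data violate.
    """
    # IIA: the preference order between X and Y should survive adding an option.
    iia_violated = ((p_binary["X"] > p_binary["Y"]) !=
                    (p_triadic["X"] > p_triadic["Y"]))
    # Regularity: adding an option should not increase the share of X or Y.
    regularity_violated = (p_triadic["X"] > p_binary["X"] or
                           p_triadic["Y"] > p_binary["Y"])
    return {"iia_violated": iia_violated,
            "regularity_violated": regularity_violated}

# Hypothetical attraction-effect data: adding decoy D reverses the X-Y order
# and raises X's choice share above its binary level, violating both principles.
print(check_choice_principles({"X": 0.48, "Y": 0.52},
                              {"X": 0.58, "Y": 0.38, "D": 0.04}))
```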
The first context effect is the similarity effect. This effect occurs with the introduction of a third option S that is similar to X but is not dominated by X. For example, suppose X is a BMW, Y is a Ford Focus, and S is an Audi. The Audi is similar to the BMW: both are not very economical, but both are high quality and sporty. The Ford Focus differs from the BMW and Audi in being more economical but lower quality. Suppose that in a binary choice, X is chosen more frequently than Y. Next suppose a new choice set is formed by adding the option S, which is similar to X. Because X and S are similar to each other and both are very different from Y, people tend to view X and S as one group and Y as a separate option. Thus the probability of choosing Y remains about the same whether or not S is offered, while the probability of choosing X decreases by approximately half with the introduction of S. This can push the probability of choosing X below that of Y once S is added to the choice set. This violates the independence of irrelevant alternatives property: in a binary choice, X is chosen more frequently than Y, but when S is added, Y is chosen more frequently than X.
The second context effect is the compromise effect. This effect occurs when an option C is added that is a compromise between X and Y. For example, when choosing between C = Honda and X = BMW, the latter is less economical but higher quality. Suppose that in a binary choice, X (BMW) is chosen more often than C (Honda). But when the option Y = Ford Focus is added to the choice set, C (Honda) becomes a compromise between X (BMW) and Y (Ford Focus), and C is then chosen more frequently than X. This is another violation of the independence of irrelevant alternatives property: X is chosen more often than C in a binary choice, but when option Y is added to the choice set, C is chosen more often than X.
The third effect is called the attraction effect. This effect occurs when a third option D is added that is very similar to X but defective compared with X. For example, D may be a new sporty car developed by an unfamiliar manufacturer that is similar to option X = BMW but costs more than the BMW. There is therefore little or no reason to choose D over X, and D is rarely, if ever, chosen over X. However, adding D to the choice set boosts the probability of choosing X. In particular, the probability of choosing X from a set containing X, Y, and D is larger than the probability of choosing X from a set containing only X and Y. The defective option D makes X shine, and this attraction effect violates the principle of regularity, which says that adding another option cannot increase the probability of choosing an option from the original set.
DFT accounts for all three effects using the same principles and same parameters across all three findings. According to DFT, the attention switching mechanism is crucial for producing the similarity effect, but the lateral inhibitory connections are critical for explaining the compromise and attraction effects. If the attention switching process is eliminated, then the similarity effect disappears, and if the lateral connections are all set to zero, then the attraction and compromise effects disappear. This property of the theory entails an interesting prediction about the effects of time pressure on preferences. The contrast effects produced by lateral inhibition require time to build up, which implies that the attraction and compromise effects should become larger under prolonged deliberation (see Roe, Busemeyer & Townsend 2001). Alternatively, if context effects are produced by switching from a weighted average rule under binary choice to a quick heuristic strategy for the triadic choice, then these effects should get larger under time pressure. Empirical tests show that prolonging the decision process increases the effects [18] [19] and time pressure decreases the effects. [20]
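The role of similarity-dependent lateral inhibition can be illustrated by how the feedback matrix S might be constructed from the options' positions in attribute space. The Gaussian similarity kernel below is an illustrative choice of functional form, not the specific one used in published DFT fits, and the coordinates are hypothetical.

```python
import math

def feedback_matrix(positions, self_feedback=0.95, inhib_scale=0.1, width=1.0):
    """Build a DFT-style feedback matrix S whose lateral inhibition grows
    with similarity.

    positions : each option's coordinates in attribute space (e.g. economy,
                quality); nearby options are 'similar' and inhibit each other
                more strongly, distant options barely interact.
    """
    n = len(positions)
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                S[i][j] = self_feedback            # positive self-feedback
            else:
                d2 = sum((a - b) ** 2
                         for a, b in zip(positions[i], positions[j]))
                # Negative lateral feedback, stronger for similar options.
                S[i][j] = -inhib_scale * math.exp(-d2 / (2 * width ** 2))
    return S

# Hypothetical coordinates: BMW and the decoy D are close in attribute space,
# the Ford Focus is distant, so BMW-decoy inhibition is far stronger than
# BMW-Focus inhibition.
S = feedback_matrix([(1.0, 9.0),    # X = BMW (uneconomical, high quality)
                     (9.0, 3.0),    # Y = Ford Focus (economical, lower quality)
                     (1.2, 8.5)],   # D = decoy near the BMW
                    width=2.0)
```

Because the inhibition between the similar pair builds up over deliberation while the distant option is largely unaffected, this construction is what lets the model produce the attraction and compromise effects.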
Decision field theory has demonstrated an ability to account for a wide range of findings from behavioral decision making that the purely algebraic and deterministic models often used in economics and psychology cannot explain. Recent studies recording neural activations in non-human primates during perceptual decision-making tasks have revealed that neural firing rates closely mimic the accumulation of preference theorized by behaviorally derived diffusion models of decision making. [8]
Sensory-motor decisions are beginning to be fairly well understood at both the behavioral and neural levels. Typical findings indicate that neural activation carrying stimulus-movement information accumulates across time up to a threshold, and a behavioral response is made as soon as the activation in the recorded area exceeds the threshold. [21] [22] [23] [24] [25] A conclusion that one can draw is that the neural areas responsible for planning or carrying out certain actions are also responsible for deciding which action to carry out, a decidedly embodied notion. [8]
Mathematically, the spike activation pattern, as well as the choice and response time distributions, can be well described by what are known as diffusion models—especially in two-alternative forced choice tasks. [26] Diffusion models, such as decision field theory, can be viewed as stochastic recurrent neural network models, except that the dynamics are approximated by linear systems. The linear approximation is important for maintaining a mathematically tractable analysis of systems perturbed by noisy inputs. In addition to these neuroscience applications, diffusion models (or their discrete-time random walk analogues) have been used by cognitive scientists to model performance in a variety of tasks, ranging from sensory detection [13] and perceptual discrimination [11] [12] [14] to memory recognition [15] and categorization. [9] [10] Thus, diffusion models provide the potential to form a theoretical bridge between neural models of sensory-motor tasks and behavioral models of complex cognitive tasks. [8]
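For the two-alternative case, the discrete-time random walk analogue of a diffusion model takes only a few lines. A minimal sketch with illustrative drift, noise, and threshold values (none taken from fitted data):

```python
import random

def random_walk_2afc(drift=0.1, noise=1.0, threshold=10.0, rng=None):
    """Discrete-time random walk for a two-alternative forced choice.

    Evidence increments have mean `drift` (favoring alternative A) and
    standard deviation `noise`; the walk stops at +threshold (choose A)
    or -threshold (choose B). Returns (choice, number_of_steps).
    """
    rng = rng or random.Random()
    x, steps = 0.0, 0
    while abs(x) < threshold:
        x += rng.gauss(drift, noise)
        steps += 1
    return ("A" if x > 0 else "B"), steps

# Estimate choice probability and mean decision time from simulated trials.
rng = random.Random(1)
trials = [random_walk_2afc(rng=rng) for _ in range(2000)]
p_a = sum(c == "A" for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(f"P(choose A) ~ {p_a:.2f}, mean steps ~ {mean_rt:.0f}")
```

The same simulated trials yield both the choice probabilities and the response-time distributions, which is precisely what makes this model family a bridge between neural accumulation data and behavioral data.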
Satisficing is a decision-making strategy or cognitive heuristic that entails searching through the available alternatives until an acceptability threshold is met. The term satisficing, a portmanteau of satisfy and suffice, was introduced by Herbert A. Simon in 1956, although the concept was first posited in his 1947 book Administrative Behavior. Simon used satisficing to explain the behavior of decision makers under circumstances in which an optimal solution cannot be determined. He maintained that many natural problems are characterized by computational intractability or a lack of information, both of which preclude the use of mathematical optimization procedures. He observed in his Nobel Prize in Economics speech that "decision makers can satisfice either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world. Neither approach, in general, dominates the other, and both have continued to co-exist in the world of management science".
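The satisficing rule itself is a one-pass search that stops at the first acceptable alternative. A minimal sketch with hypothetical options and scores:

```python
def satisfice(options, utility, aspiration):
    """Satisficing search: return the first option whose utility meets the
    aspiration level, scanning options in the order they are encountered.
    Returns None if no option is acceptable."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return None

# Hypothetical example: take the first apartment whose score reaches the
# aspiration level of 7, even though a better one (score 9) comes later.
scores = {"A": 5, "B": 7, "C": 9}
print(satisfice(["A", "B", "C"], scores.get, 7))  # -> B
```

Note the contrast with optimization: the satisficer never examines option C at all.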
In the field of psychology, cognitive dissonance is the perception of contradictory information and the mental toll it takes. Relevant items of information include a person's actions, feelings, ideas, beliefs, and values, as well as things in the environment. Cognitive dissonance is typically experienced as psychological stress when a person participates in an action that goes against one or more of these. According to the theory, when two actions or ideas are psychologically inconsistent with each other, people do all in their power to change one of them so that they become consistent. The discomfort is triggered by a clash between the person's beliefs and newly perceived information, and the individual tries to find a way to resolve the contradiction in order to reduce the discomfort.
Prospect theory is a theory of behavioral economics, judgment and decision making that was developed by Daniel Kahneman and Amos Tversky in 1979. The theory was cited in the decision to award Kahneman the 2002 Nobel Memorial Prize in Economics.
In psychology, decision-making is regarded as the cognitive process resulting in the selection of a belief or a course of action among several possible alternative options. It could be either rational or irrational. The decision-making process is a reasoning process based on assumptions of values, preferences and beliefs of the decision-maker. Every decision-making process produces a final choice, which may or may not prompt action.
Decision theory is a branch of applied probability theory and analytic philosophy concerned with the theory of making decisions based on assigning probabilities to various factors and assigning numerical consequences to the outcome.
Loss aversion is a psychological and economic concept referring to how outcomes are interpreted as gains and losses, with people more sensitive to losses than to equivalent gains. Tversky and Kahneman (1992) suggested that losses can be twice as powerful psychologically as gains.
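The "twice as powerful" claim can be captured numerically with a piecewise value function and a loss-aversion coefficient of about 2 (the linear form and the exact coefficient are illustrative simplifications):

```python
def value(x, loss_aversion=2.0):
    """Piecewise-linear value function illustrating loss aversion:
    losses loom about twice as large as equivalent gains (lambda ~ 2)."""
    return x if x >= 0 else loss_aversion * x

# A 50/50 gamble to win or lose $100 has negative total subjective value,
# which is why such even-odds gambles are typically rejected.
print(value(100) + value(-100))  # -> -100.0
```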
The expected utility hypothesis is a foundational assumption in mathematical economics concerning decision making under uncertainty. It postulates that rational agents maximize utility, meaning the subjective desirability of their actions. Rational choice theory, a cornerstone of microeconomics, builds on this postulate to model aggregate social behaviour.
Status quo bias is an emotional bias: a preference for maintaining one's current or previous state of affairs, or a preference not to take any action that would change this state. The current baseline is taken as a reference point, and any change from that baseline is perceived as a loss or gain. When compared against alternative options, the current baseline or default option is perceived and evaluated by individuals as a positive.
In economics, hyperbolic discounting is a time-inconsistent model of delay discounting. It is one of the cornerstones of behavioral economics and its brain-basis is actively being studied by neuroeconomics researchers.
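The time inconsistency of hyperbolic discounting can be shown with the standard hyperbolic form V = A / (1 + kD), where A is the reward amount, D the delay, and k the discount rate (the value of k and the amounts below are illustrative):

```python
def hyperbolic_value(amount, delay, k=0.2):
    """Hyperbolic discounting: V = A / (1 + k * D), delay D in days."""
    return amount / (1 + k * delay)

# Preference reversal: viewed up close, the small immediate reward wins;
# viewed a year in advance, the same pair favors the larger-later reward.
now_small   = hyperbolic_value(50, 0)     # $50 today        -> 50.0
now_large   = hyperbolic_value(100, 10)   # $100 in 10 days  -> ~33.3
later_small = hyperbolic_value(50, 365)   # $50 in a year          -> ~0.68
later_large = hyperbolic_value(100, 375)  # $100 in a year + 10 d  -> ~1.32
```

The flip between `now_small > now_large` and `later_large > later_small` is exactly the time-inconsistent pattern that exponential discounting cannot produce.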
In mathematics, a Markov decision process (MDP) is a discrete-time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying optimization problems solved via dynamic programming. MDPs were known at least as early as the 1950s; a core body of research on Markov decision processes resulted from Ronald Howard's 1960 book, Dynamic Programming and Markov Processes. They are used in many disciplines, including robotics, automatic control, economics and manufacturing. The name of MDPs comes from the Russian mathematician Andrey Markov as they are an extension of Markov chains.
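The dynamic-programming solution of an MDP can be sketched with value iteration. The toy two-state MDP below is invented for illustration:

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a finite MDP.

    P[s][a] : list of (probability, next_state) transitions
    R[s][a] : immediate reward for taking action a in state s
    gamma   : discount factor
    Returns the (near-)optimal state values and a greedy policy.
    """
    n_states = len(P)
    V = [0.0] * n_states
    while True:
        # Bellman optimality backup for every state.
        V_new = [max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                     for a in range(len(P[s])))
                 for s in range(n_states)]
        delta = max(abs(a - b) for a, b in zip(V, V_new))
        V = V_new
        if delta < tol:
            break
    policy = [max(range(len(P[s])),
                  key=lambda a: R[s][a] + gamma * sum(p * V[s2]
                                                      for p, s2 in P[s][a]))
              for s in range(n_states)]
    return V, policy

# Toy two-state MDP: action 0 stays put, action 1 moves to the other state;
# moving out of state 0 costs 1, while being in state 1 and staying earns 2.
P = [[[(1.0, 0)], [(1.0, 1)]],
     [[(1.0, 1)], [(1.0, 0)]]]
R = [[0.0, -1.0],
     [2.0, 0.0]]
V, policy = value_iteration(P, R)
print(policy)  # -> [1, 0]: move out of state 0, then stay in state 1
```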
Mathematical psychology is an approach to psychological research that is based on mathematical modeling of perceptual, thought, cognitive and motor processes, and on the establishment of law-like rules that relate quantifiable stimulus characteristics with quantifiable behavior. The mathematical approach is used with the goal of deriving hypotheses that are more exact and thus yield stricter empirical validations. There are five major research areas in mathematical psychology: learning and memory, perception and psychophysics, choice and decision-making, language and thinking, and measurement and scaling.
Pairwise comparison generally is any process of comparing entities in pairs to judge which of each entity is preferred, or has a greater amount of some quantitative property, or whether or not the two entities are identical. The method of pairwise comparison is used in the scientific study of preferences, attitudes, voting systems, social choice, public choice, requirements engineering and multiagent AI systems. In psychology literature, it is often referred to as paired comparison.
Two-alternative forced choice (2AFC) is a method for measuring the sensitivity of a person or animal to some particular sensory input, stimulus, through that observer's pattern of choices and response times to two versions of the sensory input. For example, to determine a person's sensitivity to dim light, the observer would be presented with a series of trials in which a dim light was randomly either in the top or bottom of the display. After each trial, the observer responds "top" or "bottom". The observer is not allowed to say "I do not know", or "I am not sure", or "I did not see anything". In that sense the observer's choice is forced between the two alternatives.
Fuzzy-trace theory (FTT) is a theory of cognition originally proposed by Valerie F. Reyna and Charles Brainerd to explain cognitive phenomena, particularly in memory and reasoning.
In psychology, economics and philosophy, preference is a technical term usually used in relation to choosing between alternatives. For example, someone prefers A over B if they would rather choose A than B. Preferences are central to decision theory because of this relation to behavior. Some methods, such as the Ordinal Priority Approach, use preference relations for decision-making. As conative states, they are closely related to desires. The difference between the two is that desires are directed at one object, while preferences concern a comparison between two alternatives, of which one is preferred to the other.
Quantum cognition is an emerging field which applies the mathematical formalism of quantum theory to model cognitive phenomena such as information processing by the human brain, language, decision making, human memory, concepts and conceptual reasoning, human judgment, and perception. The field clearly distinguishes itself from the quantum mind as it is not reliant on the hypothesis that there is something micro-physical quantum-mechanical about the brain. Quantum cognition is based on the quantum-like paradigm or generalized quantum paradigm or quantum structure paradigm that information processing by complex systems such as the brain, taking into account contextual dependence of information and probabilistic reasoning, can be mathematically described in the framework of quantum information and quantum probability theory.
Ecological rationality is a particular account of practical rationality, which in turn specifies the norms of rational action – what one ought to do in order to act rationally. The presently dominant account of practical rationality in the social and behavioral sciences such as economics and psychology, rational choice theory, maintains that practical rationality consists in making decisions in accordance with some fixed rules, irrespective of context. Ecological rationality, in contrast, claims that the rationality of a decision depends on the circumstances in which it takes place, so as to achieve one's goals in this particular context. What is considered rational under the rational choice account thus might not always be considered rational under the ecological rationality account. Overall, rational choice theory puts a premium on internal logical consistency whereas ecological rationality targets external performance in the world. The term ecologically rational is only etymologically similar to the biological science of ecology.
Stochastic transitivity models are stochastic versions of the transitivity property of binary relations studied in mathematics. Several models of stochastic transitivity exist and have been used to describe the probabilities involved in experiments of paired comparisons, specifically in scenarios where transitivity is expected but empirical observations of the binary relation are probabilistic. For example, players' skills in a sport might be expected to be transitive, i.e. "if player A is better than B and B is better than C, then player A must be better than C"; however, in any given match, a weaker player might still end up winning with a positive probability. Tightly matched players might have a higher chance of such an inversion, while players with large differences in skill might see inversions only seldom. Stochastic transitivity models formalize such relations between the probabilities and the underlying transitive relation.
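The weakest of these models, weak stochastic transitivity, can be checked directly from pairwise choice (or win) probabilities. A sketch with hypothetical probabilities:

```python
from itertools import permutations

def weak_stochastic_transitivity(p):
    """Check weak stochastic transitivity (WST) over a set of items.

    p[(x, y)] is the probability that x beats (is chosen over) y.
    WST: if P(a > b) >= 1/2 and P(b > c) >= 1/2, then P(a > c) >= 1/2.
    Returns True if no ordered triple violates WST.
    """
    items = {x for pair in p for x in pair}
    for a, b, c in permutations(items, 3):
        if p[(a, b)] >= 0.5 and p[(b, c)] >= 0.5 and p[(a, c)] < 0.5:
            return False
    return True

# An intransitive cycle, as in rock-paper-scissors-like preferences:
cycle = {("A", "B"): 0.7, ("B", "A"): 0.3,
         ("B", "C"): 0.7, ("C", "B"): 0.3,
         ("C", "A"): 0.7, ("A", "C"): 0.3}
print(weak_stochastic_transitivity(cycle))  # -> False: A beats B, B beats C, yet C beats A
```

Strong stochastic transitivity adds the further requirement that P(a > c) be at least the larger of P(a > b) and P(b > c); violations of that stronger condition are among the findings DFT accounts for.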
Hillel J. Einhorn was an American psychologist who played a key role in the development of the field of behavioral decision theory. Einhorn earned BA and MA degrees at Brooklyn College, married Susan Michaels in 1966, and received his PhD in psychology from Wayne State University in 1969 under the supervision of Alan Bass. In 1969, Einhorn joined the faculty of the Graduate School of Business at the University of Chicago. He was promoted to professor in 1976 and appointed to the Wallace W. Booth professorship in 1986. In addition to his research contributions, Einhorn restructured the behavioral science curriculum by providing a specific focus on behavioral decision theory, and founded the Center for Decision Research at the University of Chicago in 1977. Einhorn died of Hodgkin's disease on January 8, 1987, at the age of 45.
In economics and psychology, a random utility model, also called stochastic utility model, is a mathematical description of the preferences of a person, whose choices are not deterministic, but depend on a random state variable.