Two-alternative forced choice (2AFC) is a method for measuring the sensitivity of a person or animal to a particular sensory input, or stimulus, through that observer's pattern of choices and response times to two versions of the sensory input. For example, to determine a person's sensitivity to dim light, the observer is presented with a series of trials in which a dim light appears randomly in either the top or the bottom of the display. After each trial, the observer responds "top" or "bottom". The observer is not allowed to say "I do not know", "I am not sure", or "I did not see anything"; in that sense the observer's choice is forced between the two alternatives.
Both options can be presented concurrently (as in the above example) or sequentially in two intervals (also known as two-interval forced choice, 2IFC). For example, to determine sensitivity to a dim light in a two-interval forced-choice procedure, an observer could be presented with a series of trials, each comprising two sub-trials (intervals), in which the dim light is presented randomly in either the first or the second interval. After each trial, the observer responds only "first" or "second".
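The logic of such a task can be illustrated with a short simulation. The sketch below assumes a simple signal-detection observer with Gaussian internal noise who compares the internal responses from the two locations and picks the larger; the parameter values and function names are illustrative, not part of any standard implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_2afc(signal_strength, n_trials=10_000, noise_sd=1.0):
    """Simulate a 2AFC trial loop: on each trial the dim light appears at one of
    two locations, and the observer picks the location with the larger noisy
    internal response. Returns the proportion of correct responses."""
    signal_resp = signal_strength + rng.normal(0.0, noise_sd, n_trials)  # location with the light
    noise_resp = rng.normal(0.0, noise_sd, n_trials)                     # empty location
    return np.mean(signal_resp > noise_resp)

for s in (0.0, 0.5, 1.0, 2.0):
    print(f"signal strength {s:.1f}: proportion correct = {simulate_2afc(s):.3f}")
```

With zero signal the simulated observer is at chance (about 0.5), and performance rises towards 1.0 as the light becomes easier to detect.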
The term 2AFC is sometimes also used to describe a task in which an observer is presented with a single stimulus and must choose between two alternatives. For example, in a lexical decision task a participant observes a string of characters and must respond whether the string is a "word" or a "non-word". Another example is the random dot kinematogram task, in which a participant must decide whether a group of moving dots is predominantly moving "left" or "right". The results of these tasks, sometimes called yes-no tasks, are much more likely to be affected by response biases than 2AFC tasks. For example, with extremely dim lights, a person might respond, completely truthfully, "no" (i.e., "I did not see any light") on every trial, whereas the results of a 2AFC task will show the person can reliably determine the location (top or bottom) of the same, extremely dim light.
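This contrast between a criterion-dependent yes-no task and a criterion-free 2AFC task can also be shown with a small simulation. The sketch below, a toy demonstration only, assumes the same Gaussian internal-noise observer as above, with an illustrative sensitivity (d') and a deliberately conservative yes-no criterion.

```python
import numpy as np

rng = np.random.default_rng(1)
d_prime, criterion, n = 1.0, 3.0, 20_000   # illustrative sensitivity and a very conservative criterion

# Yes-no task: respond "yes" only when the internal response exceeds the criterion
signal_resp = d_prime + rng.normal(size=n)
hit_rate = np.mean(signal_resp > criterion)       # near 0: "I did not see any light"

# 2AFC task with the same sensitivity: compare the two locations directly
noise_resp = rng.normal(size=n)
p_correct = np.mean(signal_resp > noise_resp)     # reliably above 0.5

print(f"yes-no hit rate with a conservative criterion: {hit_rate:.3f}")
print(f"2AFC proportion correct:                       {p_correct:.3f}")
```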
2AFC is a method of psychophysics developed by Gustav Theodor Fechner. [1]
There are various manipulations in the design of the task, engineered to test specific behavioral dynamics of choice. In one well-known attention experiment examining attentional shifts, the Posner cueing task, a 2AFC design is used to present two stimuli representing two given locations. [2] In this design an arrow cues which stimulus (location) to attend to, and the person then has to make a response between the two stimuli (locations) when prompted. In animals, the 2AFC task has been used to test reinforcement probability learning, for example choice behavior in pigeons after reinforcement of trials. [3] A 2AFC task has also been designed to test decision making and the interaction of reward and probability learning in monkeys. [4]
In that study, monkeys were trained to look at a central stimulus and were then presented with two salient stimuli side by side. A response is made in the form of a saccade to the left or to the right stimulus, and a juice reward is administered after each response. The amount of juice reward is varied to modulate choice.
In a different application, the 2AFC task is designed to test discrimination of motion perception. The random dot motion coherence task introduces a random dot kinematogram, with a percentage of net coherent motion distributed across the random dots. [5] [6] The percentage of dots moving together in a given direction determines the coherence of motion in that direction. In most experiments, the participant must make a choice between two directions of motion (e.g. up or down), usually indicated by a motor response such as a saccade or a button press.
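A stimulus of this kind is straightforward to generate. The sketch below is a minimal illustration rather than any standard stimulus library: on each frame a chosen fraction of dots moves in the signal direction and the remaining dots move in random directions.

```python
import numpy as np

rng = np.random.default_rng(2)

def update_dots(xy, coherence=0.1, step=1.0, direction=0.0):
    """One frame of a random dot kinematogram: a proportion `coherence` of dots
    steps in the signal direction (an angle in radians); the rest step in
    random directions."""
    n = xy.shape[0]
    coherent = rng.random(n) < coherence
    angles = np.where(coherent, direction, rng.uniform(0.0, 2.0 * np.pi, n))
    return xy + step * np.column_stack([np.cos(angles), np.sin(angles)])

dots = rng.uniform(-100, 100, size=(200, 2))              # 200 dots in a 200 x 200 field
dots = update_dots(dots, coherence=0.1, direction=0.0)    # 10% net rightward motion
```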
It is possible to introduce biases in decision making in the 2AFC task. For example, if one stimulus occurs with more frequency than the other, then the frequency of exposure to the stimuli may influence the participant's beliefs about the probability of the occurrence of the alternatives. [4] [7] Introducing biases in the 2AFC task is used to modulate decision making and examine the underlying processes.
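One simple way to see how unequal frequencies should affect an ideal decision maker is through the Bayesian decision rule, sketched below with illustrative notation (x for the observation, a and b for the two alternatives): the more frequent alternative requires less sensory evidence before it is chosen.

```latex
% Bayes-optimal choice with unequal prior probabilities P(a) and P(b):
% respond "a" on a trial with observation x if and only if
\frac{p(x \mid a)}{p(x \mid b)} > \frac{P(b)}{P(a)} ,
% so a higher prior probability of "a" lowers the evidence needed to choose it.
```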
The 2AFC task has yielded consistent behavioral results on decision-making, which led to the development of theoretical and computational models of the dynamics and outcomes of decision-making. [8] [9] [10] [11] [12] [13] [14] [15] [16] [17]
Suppose the two stimuli $x_1$ and $x_2$ in the 2AFC task are random variables from two different categories $a$ and $b$, and the task is to decide which was which. A common model is to assume that the stimuli came from normal distributions $N(\mu_a, \sigma_a)$ and $N(\mu_b, \sigma_b)$. Under this normal model, the optimal decision strategy (of the ideal observer) is to decide which of two bivariate normal distributions is more likely to produce the tuple $(x_1, x_2)$: the joint distribution for the category order $(a, b)$, or for the order $(b, a)$. [18]
The probability of error with this ideal decision strategy is given by the generalized chi-square distribution: the ideal observer's decision variable is a quadratic function of the two normally distributed stimulus values, and the error probability is the probability that this variable falls on the wrong side of zero, which depends on the means and variances of the two categories.
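The ideal observer's error rate can also be estimated numerically. The sketch below is a Monte Carlo check (not the closed-form generalized chi-square calculation): it draws stimulus pairs from the two normal categories and counts how often the wrong joint assignment has the higher likelihood. All names and parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def ideal_observer_error(mu_a, sigma_a, mu_b, sigma_b, n_trials=100_000):
    """Monte Carlo estimate of the ideal observer's 2AFC error rate under the
    normal model: x1 comes from category a and x2 from category b, and the
    observer picks whichever assignment, (a, b) or (b, a), has the higher
    joint likelihood."""
    x1 = rng.normal(mu_a, sigma_a, n_trials)
    x2 = rng.normal(mu_b, sigma_b, n_trials)
    ll_correct = norm.logpdf(x1, mu_a, sigma_a) + norm.logpdf(x2, mu_b, sigma_b)
    ll_swapped = norm.logpdf(x1, mu_b, sigma_b) + norm.logpdf(x2, mu_a, sigma_a)
    return np.mean(ll_swapped > ll_correct)

print(f"estimated error rate: {ideal_observer_error(0.0, 1.0, 1.0, 1.0):.3f}")
```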
This model can also extend to the cases when each of the two stimuli is itself a multivariate normal vector, and also to the situations when the two categories have different prior probabilities, or the decisions are biased due to different values attached to the possible outcomes. [18]
There are typically three assumptions made by computational models using the 2AFC:
i) evidence favoring each alternative is integrated over time; ii) the process is subject to random fluctuations; and iii) the decision is made when sufficient evidence has accumulated favoring one alternative over the other.
— Bogacz et al., The Physics of Optimal Decision Making [7]
It is typically assumed that the difference in evidence favoring each alternative is the quantity tracked over time and that which ultimately informs the decision; however, evidence for different alternatives could be tracked separately. [7]
The drift-diffusion model (DDM) is a well-defined model [19] that has been proposed to implement an optimal decision policy for 2AFC. [20] It is the continuous analog of a random walk model. [7] The DDM assumes that in a 2AFC task the subject accumulates evidence for one or the other of the alternatives at each time step, integrating that evidence until a decision threshold is reached. Because the sensory input that constitutes the evidence is noisy, the accumulation towards the threshold is stochastic rather than deterministic – this gives rise to the directed random-walk-like behavior. The DDM has been shown to describe accuracy and reaction times in human data for 2AFC tasks. [13] [19]
The accumulation of evidence in the DDM is governed by the following formula:

$dx = A\,dt + c\,dW, \qquad x(0) = 0$
At time zero, the accumulated evidence x is set equal to zero. At each time step, some evidence A is accumulated for one of the two possibilities in the 2AFC; A is positive if the correct response is represented by the upper threshold, and negative if by the lower. In addition, a noise term, $c\,dW$, is added to represent noise in the input; on average, this noise integrates to zero. [7] The extended DDM [13] allows the drift A and the starting value x(0) to be selected from separate distributions – this provides a better fit to experimental data for both accuracy and reaction times. [21] [22]
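A small simulation makes the model concrete. The sketch below, with illustrative parameter values (drift A, noise scale c, symmetric thresholds ±z, step dt), runs single trials of the accumulation process until one of the two thresholds is crossed and records the choice and reaction time.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_ddm(A=0.2, c=1.0, z=1.0, dt=0.001, max_t=10.0):
    """One drift-diffusion trial: dx = A*dt + c*dW with x(0) = 0, run until the
    accumulated evidence x reaches +z or -z (or the trial times out).
    Returns (choice, reaction_time); choice is +1 for the upper threshold and
    -1 for the lower (the sign of x if the trial times out)."""
    x, t = 0.0, 0.0
    while abs(x) < z and t < max_t:
        x += A * dt + c * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x > 0 else -1), t

trials = [simulate_ddm() for _ in range(2_000)]
choices, rts = zip(*trials)
print(f"proportion of upper-threshold choices: {np.mean(np.array(choices) == 1):.3f}")
print(f"mean reaction time: {np.mean(rts):.3f} s")
```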
The Ornstein–Uhlenbeck model [14] extends the DDM by adding to the accumulation another term, $\lambda x$, that depends on the current accumulated evidence – this has the net effect of increasing the rate of accumulation towards the initially preferred option.
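In terms of the DDM sketch above, this amounts to one extra term in the update step, as in the following minimal fragment (the symbol lam for $\lambda$ and its value are illustrative).

```python
import numpy as np

def ou_step(x, A=0.2, c=1.0, lam=0.5, dt=0.001, rng=np.random.default_rng(5)):
    """One Euler step of the Ornstein-Uhlenbeck accumulation: the extra lam * x
    term pushes the accumulation further towards whichever alternative is
    currently favored (for lam > 0)."""
    return x + (lam * x + A) * dt + c * np.sqrt(dt) * rng.normal()
```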
In the race model, [11] [12] [23] evidence for each alternative is accumulated separately, and a decision is made either when one of the accumulators reaches a predetermined threshold, or, when a decision is forced, by choosing the alternative associated with the accumulator holding the most evidence. This can be represented formally by:

$dy_1 = I_1\,dt + c\,dW_1, \qquad dy_2 = I_2\,dt + c\,dW_2$

where $y_1$ and $y_2$ are the two accumulators (starting at zero), $I_1$ and $I_2$ are the mean rates of incoming evidence for the two alternatives, and $dW_1$ and $dW_2$ are independent noise terms.
The race model is not mathematically reducible to the DDM, [7] and hence cannot be used to implement an optimal decision procedure.
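As with the DDM, the race model is easy to simulate. The sketch below uses illustrative inputs and threshold; the first accumulator to reach the threshold determines the choice, and the larger accumulator wins if the decision is forced at the time limit.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_race(I1=1.0, I2=0.8, c=1.0, z=2.0, dt=0.001, max_t=5.0):
    """Race model sketch: two independent accumulators integrate their own
    noisy inputs; the first to reach the threshold z determines the choice.
    Returns (choice, decision_time)."""
    y1 = y2 = t = 0.0
    while t < max_t:
        y1 += I1 * dt + c * np.sqrt(dt) * rng.normal()
        y2 += I2 * dt + c * np.sqrt(dt) * rng.normal()
        t += dt
        if y1 >= z or y2 >= z:
            break
    return (1 if y1 > y2 else 2), t

choices = np.array([simulate_race()[0] for _ in range(1_000)])
print(f"proportion of choices for alternative 1: {np.mean(choices == 1):.3f}")
```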
The mutual inhibition model [16] also uses two accumulators to model the accumulation of evidence, as in the race model. In this model the two accumulators have an inhibitory effect on each other, so as evidence is accumulated in one, it dampens the accumulation of evidence in the other. In addition, leaky accumulators are used, so that accumulated evidence decays over time – this helps to prevent runaway accumulation towards one alternative based on a short run of evidence in one direction. Formally, this can be shown as:

$dy_1 = (-k\,y_1 - w\,y_2 + I_1)\,dt + c\,dW_1, \qquad dy_2 = (-k\,y_2 - w\,y_1 + I_2)\,dt + c\,dW_2$

where $k$ is the shared decay rate of the accumulators and $w$ is the rate of mutual inhibition.
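A single Euler step of this scheme can be sketched as follows; the decay and inhibition rates are illustrative choices rather than fitted values.

```python
import numpy as np

rng = np.random.default_rng(7)

def mutual_inhibition_step(y1, y2, I1, I2, k=2.0, w=2.0, c=1.0, dt=0.001):
    """One Euler step of two leaky, mutually inhibitory accumulators: each unit
    decays at rate k and is suppressed by the other at rate w."""
    dy1 = (-k * y1 - w * y2 + I1) * dt + c * np.sqrt(dt) * rng.normal()
    dy2 = (-k * y2 - w * y1 + I2) * dt + c * np.sqrt(dt) * rng.normal()
    return y1 + dy1, y2 + dy2

y1 = y2 = 0.0
for _ in range(2_000):                       # two seconds of accumulation at dt = 1 ms
    y1, y2 = mutual_inhibition_step(y1, y2, I1=1.0, I2=0.8)
print(f"final accumulator values: y1 = {y1:.2f}, y2 = {y2:.2f}")
```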
The feedforward inhibition model [24] is similar to the mutual inhibition model, but instead of being inhibited by the current value of the other accumulator, each accumulator is inhibited by a fraction of the input to the other. It can be formally stated thus:

$dy_1 = I_1\,dt + c\,dW_1 - u\,(I_2\,dt + c\,dW_2), \qquad dy_2 = I_2\,dt + c\,dW_2 - u\,(I_1\,dt + c\,dW_1)$

where $u$ is the fraction of accumulator input that inhibits the alternate accumulator.
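The corresponding update step differs from the mutual inhibition step only in where the inhibition comes from, as in this minimal fragment (the fraction u is an illustrative value).

```python
import numpy as np

rng = np.random.default_rng(8)

def feedforward_inhibition_step(y1, y2, I1, I2, u=0.5, c=1.0, dt=0.001):
    """One Euler step of feedforward inhibition: each accumulator receives its
    own noisy input minus a fraction u of the other accumulator's input."""
    in1 = I1 * dt + c * np.sqrt(dt) * rng.normal()
    in2 = I2 * dt + c * np.sqrt(dt) * rng.normal()
    return y1 + in1 - u * in2, y2 + in2 - u * in1
```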
Wang [25] suggested the pooled inhibition model, in which a third, decaying accumulator is driven by the accumulation in both of the accumulators used for decision making, and, in addition to the decay used in the mutual inhibition model, each of the decision-driving accumulators self-reinforces based on its current value. It can be formally stated thus:

$dy_1 = (-k\,y_1 + v\,y_1 - w\,y_3 + I_1)\,dt + c\,dW_1$
$dy_2 = (-k\,y_2 + v\,y_2 - w\,y_3 + I_2)\,dt + c\,dW_2$
$dy_3 = (-k_{\mathrm{inh}}\,y_3 + w'\,(y_1 + y_2))\,dt$

where $v$ is the rate of self-reinforcement of the decision accumulators and $w$ is the strength of the pooled inhibition. The third accumulator $y_3$ has an independent decay coefficient, $k_{\mathrm{inh}}$, and increases based on the current values of the other two accumulators, at a rate modulated by $w'$.
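An update step for this three-unit scheme can be sketched as below; all rate constants are illustrative placeholders rather than the values used by Wang.

```python
import numpy as np

rng = np.random.default_rng(9)

def pooled_inhibition_step(y1, y2, y3, I1, I2,
                           k=2.0, v=1.0, w=2.0, k_inh=1.0, w_inh=1.0,
                           c=1.0, dt=0.001):
    """One Euler step of a pooled-inhibition scheme: the decision accumulators
    decay (k), self-reinforce (v), and are suppressed (w) by a third,
    inhibitory accumulator y3, which decays (k_inh) and is driven by the
    summed activity of the decision accumulators (w_inh)."""
    dy1 = (-k * y1 + v * y1 - w * y3 + I1) * dt + c * np.sqrt(dt) * rng.normal()
    dy2 = (-k * y2 + v * y2 - w * y3 + I2) * dt + c * np.sqrt(dt) * rng.normal()
    dy3 = (-k_inh * y3 + w_inh * (y1 + y2)) * dt
    return y1 + dy1, y2 + dy2, y3 + dy3
```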
In the parietal lobe, the firing rates of lateral intraparietal cortex (LIP) neurons in monkeys predicted the choice of motion direction, suggesting that this area is involved in decision making in the 2AFC task. [4] [24] [26]
Neural data recorded from LIP neurons in rhesus monkeys support the DDM: the direction-selective neuronal populations sensitive to the two directions used in the 2AFC task increase their firing rates at stimulus onset, and the average activity in these populations is biased towards the direction of the correct response. [24] [27] [28] [29] In addition, a fixed threshold of neuronal firing rate appears to be used as the decision boundary for each 2AFC task. [30]