Quantal response equilibrium

A solution concept in game theory

| Relationship | |
|---|---|
| Superset of | Nash equilibrium, Logit equilibrium |

| Significance | |
|---|---|
| Proposed by | Richard McKelvey and Thomas Palfrey |
| Used for | Non-cooperative games |
| Example | Traveler's dilemma |

**Quantal response equilibrium** (**QRE**) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey,[1][2] it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.


In a quantal response equilibrium, players are assumed to make errors in choosing which pure strategy to play. The probability of any particular strategy being chosen is positively related to the payoff from that strategy. In other words, very costly errors are unlikely.

The equilibrium arises from the consistency of beliefs: a player's payoffs are computed from beliefs about the other players' probability distributions over strategies, and in equilibrium those beliefs are correct.

When analyzing data from the play of actual games, particularly from laboratory experiments such as those involving the matching pennies game, Nash equilibrium can be unforgiving. Any non-equilibrium move can appear equally "wrong", but realistically should not be used to reject a theory. QRE allows every strategy to be played with non-zero probability, and so any data is possible (though not necessarily reasonable).

The most common specification for QRE is **logit equilibrium** (**LQRE**). In a logit equilibrium, each player's strategies are chosen according to the probability distribution:

$$P_{ij} = \frac{\exp(\lambda \, EU_{ij}(P_{-i}))}{\sum_{k} \exp(\lambda \, EU_{ik}(P_{-i}))}$$

Here $P_{ij}$ is the probability of player $i$ choosing strategy $j$, and $EU_{ij}(P_{-i})$ is the expected utility to player $i$ of choosing strategy $j$ under the belief that the other players play according to the probability distribution $P_{-i}$. Note that the "belief" density in the expected payoff on the right-hand side must match the choice density on the left-hand side. Thus computing expectations of observable quantities such as payoff, demand, output, etc., requires finding fixed points, as in mean field theory.[3]
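As an illustration (not from the source), the fixed-point character of LQRE can be sketched in Python. The function names and the plain iteration scheme below are my own choices; plain iteration converges readily for moderate λ but is not guaranteed to converge in general.

```python
import numpy as np

def logit_response(payoffs, lam):
    """Logit choice probabilities for a vector of expected payoffs."""
    z = lam * payoffs
    z = z - z.max()              # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def lqre(A, B, lam, iters=5000, tol=1e-12):
    """Plain fixed-point iteration for a logit QRE of a bimatrix game.

    A[i, j]: row player's payoff; B[i, j]: column player's payoff.
    Returns (p, q): mixed strategies of the row and column players.
    """
    m, n = A.shape
    p, q = np.ones(m) / m, np.ones(n) / n        # start from uniform play
    for _ in range(iters):
        p_new = logit_response(A @ q, lam)       # row's payoffs vs belief q
        q_new = logit_response(B.T @ p, lam)     # column's payoffs vs belief p
        done = max(abs(p_new - p).max(), abs(q_new - q).max()) < tol
        p, q = p_new, q_new
        if done:
            break
    return p, q

# Symmetric matching pennies: the LQRE is 50/50 mixing for any λ.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
p, q = lqre(A, -A, lam=2.0)
print(p, q)   # ≈ [0.5 0.5] [0.5 0.5]
```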

Of particular interest in the logit model is the non-negative parameter λ (sometimes written as 1/μ). λ can be thought of as the rationality parameter. As λ→0, players become "completely non-rational", and play each strategy with equal probability. As λ→∞, players become "perfectly rational", and play approaches a Nash equilibrium. In a non-mean-field variant of QRE, the Gibbs measure is the resulting form of the equilibrium measure, and this parameter λ is in fact the inverse of the temperature of the system, which quantifies the degree of random noise in decisions.[4]
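The two limits can be checked directly on the logit formula. In this small sketch (my own, with hypothetical expected payoffs), λ = 0 yields uniform play and a large λ concentrates nearly all probability on the highest-payoff strategy:

```python
import numpy as np

def logit_choice(u, lam):
    """Logit probabilities over strategies with expected payoffs u."""
    z = np.exp(lam * (u - u.max()))   # subtract max for numerical stability
    return z / z.sum()

u = np.array([2.0, 1.0, 0.0])         # hypothetical expected payoffs

for lam in (0.0, 1.0, 10.0):
    print(lam, logit_choice(u, lam).round(3))
# λ = 0  → uniform [0.333 0.333 0.333]
# λ = 10 → nearly all weight on the best strategy
```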

For dynamic (extensive form) games, McKelvey and Palfrey defined **agent quantal response equilibrium** (**AQRE**). AQRE is somewhat analogous to subgame perfection. In an AQRE, each player plays with some error as in QRE. At a given decision node, the player determines the expected payoff of each action by treating their future self as an independent player with a known probability distribution over actions. As in QRE, in an AQRE every strategy is used with nonzero probability.
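Because each decision node is treated as a separate agent, an AQRE of a game of perfect information can be computed backwards, node by node. The following sketch (my own; the two-move game and its payoffs are hypothetical, loosely styled on a short centipede game) applies a logit response at the last mover's node and then at the first mover's node:

```python
import math

def logit(payoffs, lam):
    """Logit probabilities over a dict {action: expected payoff}."""
    m = max(payoffs.values())
    w = {a: math.exp(lam * (v - m)) for a, v in payoffs.items()}
    s = sum(w.values())
    return {a: x / s for a, x in w.items()}

lam = 2.0

# Hypothetical two-move game: player 1 may Take (payoffs 1, 0) or Pass;
# player 2 then may Take (0, 2) or Pass, ending the game at (3, 1).
p2 = logit({"take": 2.0, "pass": 1.0}, lam)       # player 2's node
ev_pass = p2["take"] * 0.0 + p2["pass"] * 3.0     # player 1's value of Pass
p1 = logit({"take": 1.0, "pass": ev_pass}, lam)   # player 1's node
print(p1, p2)
```

As in QRE, every action at every node retains positive probability, so "off-path" behavior is predicted rather than left unrestricted.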

The quantal response equilibrium approach has been applied in various settings. For example, Goeree et al. (2002) study overbidding in private-value auctions,[5] Yi (2005) explores behavior in ultimatum games,[6] Hoppe and Schmitz (2013) study the role of social preferences in principal-agent problems,[7] and Kawagoe et al. (2018) investigate step-level public goods games with binary decisions.[8] Vernon L. Smith and Michael J. Campbell have used a variant to model the effects of human sociability in economic interactions.[4] There, for a particular model, the purely rational Nash equilibrium is shown to have *no* predictive power, and the boundedly rational Gibbs equilibrium must be used to predict phenomena outlined in *Humanomics*.[9]

Work by Haile et al. has shown that QRE is not falsifiable in any normal form game, even with significant a priori restrictions on payoff perturbations.[10] The authors argue that the LQRE concept can sometimes restrict the set of possible outcomes from a game, but may be insufficient to provide a powerful test of behavior without a priori restrictions on payoff perturbations.

However, the authors say "this should not be mistaken for a critique of the QRE notion itself. Rather, our aim has been to clarify some limitations of examining behavior one game at a time and to develop approaches for more informative evaluation of QRE." This "non-falsifiability" results from showing that multiple probability distributions over player strategies may be consistent with the expected values from QRE, and that additional conditions, such as requiring independent and identically distributed perturbations, are needed to guarantee a unique probability distribution for individual behavior, such as a logit distribution. This is essentially the same as the refinement problem that arises when multiple Nash equilibria occur.

In game theory, the **Nash equilibrium**, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.

In game theory, the **best response** is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.

In game theory, the **centipede game**, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round. Although the traditional centipede game had a limit of 100 rounds, any game with this structure but a different number of rounds is called a centipede game.

**Matching pennies** is the name for a simple game used in game theory. It is played between two players, Even and Odd. Each player has a penny and must secretly turn the penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match, then Even keeps both pennies, so wins one from Odd. If the pennies do not match, Odd keeps both pennies, so receives one from Even.
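These rules can be written as a payoff matrix for Even (the game is zero-sum, so Odd's payoffs are the negation). As a small check of my own, against 50/50 mixing by Odd, both of Even's actions yield the same expected payoff, which is why uniform mixing is the game's mixed equilibrium:

```python
import numpy as np

# Payoff to Even: +1 when the pennies match, -1 otherwise.
A = np.array([[1, -1],
              [-1, 1]])

q = np.array([0.5, 0.5])   # Odd mixes heads/tails 50/50
print(A @ q)               # Even's expected payoff per action: [0. 0.]
```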

In game theory, a player's **strategy** is any of the options which he or she chooses in a setting where the outcome depends *not only* on their own actions *but* on the actions of others. A player's strategy will determine the action which the player will take at any stage of the game.

In game theory, a **solution concept** is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

In game theory, a **Perfect Bayesian Equilibrium** (PBE) is an equilibrium concept relevant for dynamic games with incomplete information. It is a refinement of Bayesian Nash equilibrium (BNE). A PBE has two components, *strategies* and *beliefs*: the strategies must be sequentially rational given the beliefs, and the beliefs must be consistent with the strategies via Bayes' rule wherever possible.

In game theory, a **Bayesian game** is a game in which players have incomplete information about the other players. For example, a player may not know the exact payoff functions of the other players, but instead have beliefs about these payoff functions. These beliefs are represented by a probability distribution over the possible payoff functions.

In game theory, **trembling hand perfect equilibrium** is a refinement of Nash equilibrium due to Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or **tremble,** may choose unintended strategies, albeit with negligible probability.

In game theory, **folk theorems** are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

In game theory, a **correlated equilibrium** is a solution concept that is more general than the well known Nash equilibrium. It was first discussed by mathematician Robert Aumann in 1974. The idea is that each player chooses their action according to their observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from the recommended strategy, the distribution is called a correlated equilibrium.

In game theory, the **purification theorem** was contributed by Nobel laureate John Harsanyi in 1973. The theorem aims to justify a puzzling aspect of mixed strategy Nash equilibria: that each player is wholly indifferent amongst each of the actions he puts non-zero weight on, yet he mixes them so as to make every other player also indifferent.

**Risk dominance** and **payoff dominance** are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered **payoff dominant** if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered **risk dominant** if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.

In game theory, an **epsilon-equilibrium**, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.

In game theory, a **stochastic game**, introduced by Lloyd Shapley in the early 1950s, is a dynamic game with **probabilistic transitions** played by one or more players. The game is played in a sequence of stages. At the beginning of each stage the game is in some **state**. The players select actions and each player receives a **payoff** that depends on the current state and the chosen actions. The game then moves to a new random state whose distribution depends on the previous state and the actions chosen by the players. The procedure is repeated at the new state and play continues for a finite or infinite number of stages. The total payoff to a player is often taken to be the discounted sum of the stage payoffs or the limit inferior of the averages of the stage payoffs.

In game theory, the **traveler's dilemma** is a non-zero-sum game in which each player proposes a payoff. The lower of the two proposals wins; the lowball player receives the lowball payoff plus a small bonus, and the highball player receives the same lowball payoff, minus a small penalty. Surprisingly, the Nash equilibrium is for both players to aggressively lowball. The traveler's dilemma is notable in that naive play appears to outperform the Nash equilibrium; this apparent paradox also appears in the centipede game and the finitely-iterated prisoner's dilemma.
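The dilemma is easy to verify numerically. In this sketch of my own (using one common parametrization: claims between 2 and 100 with a bonus/penalty of 2), undercutting a high claim always pays, which unravels all the way down to the mutual lowball Nash equilibrium:

```python
def payoff(own, other, bonus=2):
    """Traveler's dilemma payoff to the player claiming `own`.

    The lower claim wins: the low claimant gets that amount plus a bonus,
    the high claimant gets the same amount minus a penalty.
    """
    low = min(own, other)
    if own < other:
        return low + bonus
    if own > other:
        return low - bonus
    return low

# Naive play: both claim 100 — yet undercutting to 99 pays strictly more.
assert payoff(99, 100) > payoff(100, 100)

# At the mutual lowball (2, 2), no unilateral deviation helps: Nash.
assert all(payoff(c, 2) <= payoff(2, 2) for c in range(2, 101))
```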

In game theory a **Poisson game** is a game with a random number of players, where the distribution of the number of players follows a Poisson random process. An extension of games of imperfect information, Poisson games have mostly seen application to models of voting.

**Jean-François Mertens** was a Belgian game theorist and mathematical economist.

**M equilibrium** is a set valued solution concept in game theory that relaxes the rational choice assumptions of perfect maximization and perfect beliefs. The concept can be applied to any normal-form game with finite and discrete strategies. M equilibrium was first introduced by Jacob K. Goeree and Philippos Louis.

1. McKelvey, Richard; Palfrey, Thomas (1995). "Quantal Response Equilibria for Normal Form Games". *Games and Economic Behavior*. **10**: 6–38. doi:10.1006/game.1995.1023.
2. McKelvey, Richard; Palfrey, Thomas (1998). "Quantal Response Equilibria for Extensive Form Games". *Experimental Economics*. **1**: 9–41. doi:10.1007/BF01426213.
3. Anderson, Simon P.; Goeree, Jacob K.; Holt, Charles A. (2004). "Noisy Directional Learning and the Logit Equilibrium". *The Scandinavian Journal of Economics*. **106**(3): 581–602. doi:10.1111/j.0347-0520.2004.00378.x.
4. Campbell, Michael J.; Smith, Vernon L. (2020). "An elementary humanomics approach to boundedly rational quadratic models". *Physica A*. **562**: 125309. doi:10.1016/j.physa.2020.125309.
5. Goeree, Jacob K.; Holt, Charles A.; Palfrey, Thomas R. (2002). "Quantal Response Equilibrium and Overbidding in Private-Value Auctions". *Journal of Economic Theory*. **104**(1): 247–272. doi:10.1006/jeth.2001.2914.
6. Yi, Kang-Oh (2005). "Quantal-response equilibrium models of the ultimatum bargaining game". *Games and Economic Behavior*. **51**(2): 324–348. doi:10.1016/s0899-8256(03)00051-4.
7. Hoppe, Eva I.; Schmitz, Patrick W. (2013). "Contracting under Incomplete Information and Social Preferences: An Experimental Study". *Review of Economic Studies*. **80**(4): 1516–1544. doi:10.1093/restud/rdt010.
8. Kawagoe, Toshiji; Matsubae, Taisuke; Takizawa, Hirokazu (2018). "Quantal response equilibria in a generalized Volunteer's Dilemma and step-level public goods games with binary decision". *Evolutionary and Institutional Economics Review*. **15**(1): 11–23. doi:10.1007/s40844-017-0081-6.
9. Smith, Vernon L.; Wilson, Bart J. (2019). *Humanomics: Moral Sentiments and the Wealth of Nations for the Twenty-First Century*. Cambridge University Press. doi:10.1017/9781108185561. ISBN 9781108185561.
10. Haile, Philip A.; Hortaçsu, Ali; Kosenok, Grigory (2008). "On the Empirical Content of Quantal Response Equilibrium". *American Economic Review*. **98**(1): 180–200. doi:10.1257/aer.98.1.180.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
