Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game.^{ 1 } When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium, since it offers each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction (i.e., it is less risky). This implies that the more uncertainty players have about the actions of the other player(s), the more likely they are to play the strategy corresponding to the risk dominant equilibrium.
The payoff matrix in Figure 1 provides a simple two-player, two-strategy example of a game with two pure Nash equilibria. The strategy pair (Hunt, Hunt) is payoff dominant since payoffs are higher for both players compared to the other pure NE, (Gather, Gather). On the other hand, (Gather, Gather) risk dominates (Hunt, Hunt) since if uncertainty exists about the other player's action, gathering will provide a higher expected payoff. The game in Figure 1 is a well-known game-theoretic dilemma called stag hunt. The rationale behind it is that communal action (hunting) yields a higher return if all players combine their skills, but if it is unknown whether the other player helps in hunting, gathering might turn out to be the better individual strategy for food provision, since it does not depend on coordinating with the other player. In addition, gathering alone is preferred to gathering in competition with others. Like the prisoner's dilemma, it provides a reason why collective action might fail in the absence of credible commitments.
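The trade-off can be made concrete with a small numerical sketch. The payoffs below are illustrative stand-ins, not values from Figure 1: hunting together yields 5, gathering always yields 4, and hunting alone yields 0.

```python
# Hypothetical stag-hunt payoffs for one player; the game is symmetric.
payoff = {
    ("Hunt", "Hunt"): 5,      # communal hunt succeeds
    ("Hunt", "Gather"): 0,    # hunting alone fails
    ("Gather", "Hunt"): 4,    # gathering does not depend on the other player
    ("Gather", "Gather"): 4,
}

def expected_payoff(action, p_hunt):
    """Expected payoff of `action` when the other player hunts with probability p_hunt."""
    return p_hunt * payoff[(action, "Hunt")] + (1 - p_hunt) * payoff[(action, "Gather")]

# (Hunt, Hunt) is payoff dominant (5 > 4), but under 50/50 uncertainty
# about the other player's action, gathering has the higher expected payoff:
print(expected_payoff("Hunt", 0.5))    # 2.5
print(expected_payoff("Gather", 0.5))  # 4.0
```

With these numbers, hunting is worthwhile only if a player is sufficiently confident that the other player will also hunt.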
The game given in Figure 2 is a coordination game if the following payoff inequalities hold for player 1 (rows): A > B, D > C, and for player 2 (columns): a > b, d > c. The strategy pairs (H, H) and (G, G) are then the only pure Nash equilibria. In addition there is a mixed Nash equilibrium where player 1 plays H with probability p = (d − c)/(a − b − c + d) and G with probability 1 − p; player 2 plays H with probability q = (D − C)/(A − B − C + D) and G with probability 1 − q.
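As a sketch, the mixed-equilibrium probabilities can be computed directly from the eight payoff parameters. The example numbers are hypothetical symmetric stag-hunt payoffs, not values from the article's figures.

```python
def mixed_equilibrium(A, B, C, D, a, b, c, d):
    """Mixed Nash equilibrium of the 2x2 coordination game.

    Capital letters are player 1's payoffs and lowercase letters are
    player 2's, assuming A > B, D > C and a > b, d > c.
    """
    p = (d - c) / (a - b - c + d)  # probability that player 1 plays H
    q = (D - C) / (A - B - C + D)  # probability that player 2 plays H
    return p, q

# Hypothetical symmetric payoffs: A = a = 5, B = b = 4, C = c = 0, D = d = 4.
p, q = mixed_equilibrium(5, 4, 0, 4, 5, 4, 0, 4)
print(p, q)  # 0.8 0.8
```

At p = 0.8 each player is exactly indifferent between H and G (both yield an expected payoff of 4 with these numbers), which is what makes the mixed profile an equilibrium.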
Strategy pair (H, H) payoff dominates (G, G) if A ≥ D, a ≥ d, and at least one of the two is a strict inequality: A > D or a > d.
Strategy pair (G, G) risk dominates (H, H) if the product of the deviation losses is highest for (G, G) (Harsanyi and Selten, 1988, Lemma 5.4.4), in other words, if the following inequality holds: (C − D)(c − d) ≥ (B − A)(b − a). If the inequality is strict, then (G, G) strictly risk dominates (H, H).^{ 2 } (That is, players have less incentive to deviate from (G, G) than from (H, H).)
If the game is symmetric, that is, if A = a, B = b, etc., the inequality allows a simple interpretation: we assume the players are unsure which strategy the opponent will pick, and each assigns probability ½ to H and G. Then (G, G) risk dominates (H, H) if the expected payoff from playing G is at least the expected payoff from playing H: ½B + ½D ≥ ½A + ½C, or simply B + D ≥ A + C.
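A minimal check of the deviation-loss inequality, again using the hypothetical symmetric payoffs A = a = 5, B = b = 4, C = c = 0, D = d = 4 as an assumed example:

```python
def g_risk_dominates(A, B, C, D, a, b, c, d):
    """(G, G) risk dominates (H, H) when the product of deviation losses
    satisfies (C - D)(c - d) >= (B - A)(b - a)."""
    return (C - D) * (c - d) >= (B - A) * (b - a)

# Hypothetical symmetric payoffs: deviation-loss products are 16 vs 1.
print(g_risk_dominates(5, 4, 0, 4, 5, 4, 0, 4))  # True

# Symmetric shortcut from the text: B + D >= A + C, i.e. 4 + 4 >= 5 + 0.
print(4 + 4 >= 5 + 0)  # True
```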
Another way to calculate the risk dominant equilibrium is to calculate the risk factor for all equilibria and to find the equilibrium with the smallest risk factor. To calculate the risk factor in our 2×2 game, consider the expected payoff to a player if they play H, namely pA + (1 − p)C (where p is the probability that the other player will play H), and compare it to the expected payoff if they play G, namely pB + (1 − p)D. The value of p which makes these two expected payoffs equal, p = (D − C)/(A − B − C + D), is the risk factor for the equilibrium (H, H), with 1 − p the risk factor for playing (G, G). You can also calculate the risk factor for playing (G, G) directly by doing the same calculation, but setting p as the probability that the other player will play G. An interpretation of the risk factor is that it is the smallest probability with which the opponent must play a given strategy such that a player's own payoff from copying the opponent's strategy is at least as high as from playing the other strategy.
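The risk-factor computation can be sketched as follows; the numeric payoffs are again the hypothetical stag-hunt values, not figures from the article.

```python
def risk_factor_H(A, B, C, D):
    """Smallest probability p of the opponent playing H at which H becomes
    a best response, i.e. the p solving p*A + (1-p)*C = p*B + (1-p)*D."""
    return (D - C) / (A - B - C + D)

# Hypothetical payoffs A, B, C, D = 5, 4, 0, 4:
p = risk_factor_H(5, 4, 0, 4)
print(p)                # 0.8 -- risk factor of (H, H)
print(round(1 - p, 2))  # 0.2 -- risk factor of (G, G)
```

Since (G, G) has the smaller risk factor, it is the risk dominant equilibrium here, matching the deviation-loss criterion.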
A number of evolutionary approaches have established that when played in a large population, players might fail to play the payoff dominant equilibrium strategy and instead end up in the payoff dominated, risk dominant equilibrium. Two separate evolutionary models both support the idea that the risk dominant equilibrium is more likely to occur. The first model, based on replicator dynamics, predicts that a population is more likely to adopt the risk dominant equilibrium than the payoff dominant one. The second model, based on best response strategy revision and mutation, predicts that the risk dominant state is the only stochastically stable equilibrium. Both models assume that multiple two-player games are played in a population of N players. The players are matched randomly with opponents, with each player having equal likelihood of drawing any of the N − 1 other players. The players start with a pure strategy, G or H, and play this strategy against their opponent. In replicator dynamics, the population game is repeated in sequential generations where subpopulations change based on the success of their chosen strategies. In best response, players update their strategies to improve expected payoffs in subsequent generations. The insight of Kandori, Mailath & Rob (1993) and Young (1993) was that if the rule to update one's strategy allows for mutation,^{ 4 } and the probability of mutation vanishes, i.e., asymptotically reaches zero over time, the likelihood that the risk dominant equilibrium is reached goes to one, even if it is payoff dominated.^{ 3 }
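The replicator result can be illustrated with a deliberately simplified simulation. This is a sketch, not the actual models of Kandori, Mailath & Rob or Young, and it assumes the same hypothetical symmetric payoffs (A = 5, B = 4, C = 0, D = 4), under which (H, H) is payoff dominant but (G, G) risk dominates with risk factor 0.8.

```python
A, B, C, D = 5, 4, 0, 4  # hypothetical payoffs: H vs H, G vs H, H vs G, G vs G

def replicator_step(x, dt=0.1):
    """One discrete replicator step; x is the population share playing H."""
    f_h = x * A + (1 - x) * C          # fitness of H against the population
    f_g = x * B + (1 - x) * D          # fitness of G against the population
    f_avg = x * f_h + (1 - x) * f_g
    return x + dt * x * (f_h - f_avg)  # shares grow with relative fitness

# The basin boundary is the risk factor (D - C)/(A - B - C + D) = 0.8, so
# even a population that starts 75% hunters drifts to the all-gather state.
x = 0.75
for _ in range(2000):
    x = replicator_step(x)
print(round(x, 6))  # 0.0 -- the risk dominant (G, G) state takes over
```

The large basin of attraction of (G, G) (all starting shares of hunters below 0.8) is exactly what risk dominance captures.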