# Risk dominance

• Risk dominance / payoff dominance: solution concepts in game theory
• Relationship: refinements (subsets) of Nash equilibrium
• Proposed by: John Harsanyi, Reinhard Selten
• Used for: non-cooperative games
• Example: stag hunt

Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game.[1] When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium, since it offers each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction (i.e., is less risky). The more uncertainty players have about the actions of the other player(s), the more likely they are to choose the risk dominant strategy.


## Example

The payoff matrix in Figure 1 provides a simple two-player, two-strategy example of a game with two pure Nash equilibria. The strategy pair (Hunt, Hunt) is payoff dominant, since payoffs are higher for both players compared to the other pure NE, (Gather, Gather). On the other hand, (Gather, Gather) risk dominates (Hunt, Hunt): if there is uncertainty about the other player's action, gathering provides the higher expected payoff. The game in Figure 1 is a well-known game-theoretic dilemma called the stag hunt. The rationale behind it is that communal action (hunting) yields a higher return if all players combine their skills, but if it is unknown whether the other player helps in hunting, gathering might turn out to be the better individual strategy for food provision, since it does not depend on coordinating with the other player. In addition, gathering alone is preferred to gathering in competition with others. Like the prisoner's dilemma, it provides a reason why collective action might fail in the absence of credible commitments.

In game theory, the stag hunt is a game that describes a conflict between safety and social cooperation. Other names for it or its variants include "assurance game", "coordination game", and "trust dilemma". Jean-Jacques Rousseau described a situation in which two individuals go out on a hunt. Each can individually choose to hunt a stag or hunt a hare, and each must choose an action without knowing the choice of the other. If an individual hunts a stag, they must have the cooperation of their partner in order to succeed. An individual can get a hare by themselves, but a hare is worth less than a stag. This has been taken to be a useful analogy for social cooperation, such as international agreements on climate change.

In game theory, coordination games are a class of games with multiple pure strategy Nash equilibria in which players choose the same or corresponding strategies.

The prisoner's dilemma is a standard example of a game analyzed in game theory that shows why two completely rational individuals might not cooperate, even if it appears to be in their best interests to do so. It was originally framed by Merrill Flood and Melvin Dresher while working at RAND in 1950; Albert W. Tucker formalized the game with prison sentence payoffs and named it the "prisoner's dilemma".

|            | Hunt | Gather |
|------------|------|--------|
| **Hunt**   | 5, 5 | 0, 4   |
| **Gather** | 4, 0 | 2, 2   |

Fig. 1: Stag hunt example

|       | H    | G    |
|-------|------|------|
| **H** | A, a | C, b |
| **G** | B, c | D, d |

Fig. 2: Generic coordination game

## Formal definition

The game given in Figure 2 is a coordination game if the following payoff inequalities hold for player 1 (rows): A > B, D > C, and for player 2 (columns): a > b, d > c. The strategy pairs (H, H) and (G, G) are then the only pure Nash equilibria. In addition there is a mixed Nash equilibrium where player 1 plays H with probability $p=(d-c)/(a-b-c+d)$ and G with probability $1-p$; player 2 plays H with probability $q=(D-C)/(A-B-C+D)$ and G with probability $1-q$.
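To make the formulas concrete, the mixed-equilibrium probabilities can be computed directly from the payoffs. A minimal sketch in Python (the function name is illustrative, not from any library), applied to the stag hunt of Figure 1, where A = a = 5, B = b = 4, C = c = 0, D = d = 2:

```python
# Mixed Nash equilibrium of the generic 2x2 coordination game of Figure 2.
# Row player payoffs: A at (H,H), C at (H,G), B at (G,H), D at (G,G);
# the column player's payoffs are the corresponding lower-case letters.

def mixed_equilibrium(A, B, C, D, a, b, c, d):
    """Return (p, q): p = P(player 1 plays H), q = P(player 2 plays H)."""
    p = (d - c) / (a - b - c + d)  # leaves player 2 indifferent between H and G
    q = (D - C) / (A - B - C + D)  # leaves player 1 indifferent between H and G
    return p, q

# Stag hunt of Figure 1 (H = Hunt, G = Gather).
p, q = mixed_equilibrium(5, 4, 0, 2, 5, 4, 0, 2)
print(p, q)  # 2/3 each: in the mixed equilibrium, both players hunt with probability 2/3
```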

Strategy pair (H, H) payoff dominates (G, G) if A ≥ D, a ≥ d, and at least one of the two is a strict inequality: A > D or a > d.

Strategy pair (G, G) risk dominates (H, H) if the product of the deviation losses is highest for (G, G) (Harsanyi and Selten, 1988, Lemma 5.4.4), in other words, if the following inequality holds: (D − C)(d − c) ≥ (A − B)(a − b). If the inequality is strict, then (G, G) strictly risk dominates (H, H).[2] (Deviation losses measure how much a player loses by unilaterally deviating from an equilibrium; a larger product of deviation losses means players have less incentive to deviate from that equilibrium.)
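Both dominance conditions are mechanical to check. A minimal sketch in Python (the function names are illustrative, not from any library), again using the Figure 1 payoffs:

```python
# Dominance checks for the generic coordination game of Figure 2,
# with payoffs labelled as in the text (upper case: row player; lower case: column player).

def payoff_dominant_HH(A, D, a, d):
    """(H, H) payoff dominates (G, G): weakly better for both, strictly for at least one."""
    return A >= D and a >= d and (A > D or a > d)

def risk_dominant_GG(A, B, C, D, a, b, c, d):
    """(G, G) risk dominates (H, H): its product of deviation losses is at least
    as large (Harsanyi and Selten, 1988, Lemma 5.4.4)."""
    return (D - C) * (d - c) >= (A - B) * (a - b)

# Stag hunt of Figure 1: A = a = 5, B = b = 4, C = c = 0, D = d = 2.
print(payoff_dominant_HH(5, 2, 5, 2))            # True: (Hunt, Hunt) payoff dominates
print(risk_dominant_GG(5, 4, 0, 2, 5, 4, 0, 2))  # True: (Gather, Gather) risk dominates
```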

If the game is symmetric, that is, if A = a, B = b, C = c, and D = d, the inequality allows for a simple interpretation: suppose the players are unsure which strategy the opponent will pick and assign probabilities to each strategy. If each player assigns probability ½ to H and G each, then (G, G) risk dominates (H, H) if the expected payoff from playing G is at least as large as the expected payoff from playing H: ½B + ½D ≥ ½A + ½C, or simply B + D ≥ A + C.

Another way to find the risk dominant equilibrium is to compute the risk factor for each equilibrium and pick the one with the smallest risk factor. To calculate the risk factor in our 2×2 game, consider the expected payoff to a player from playing H, $E[\pi _{H}]=pA+(1-p)C$ (where p is the probability that the other player plays H), and compare it to the expected payoff from playing G, $E[\pi _{G}]=pB+(1-p)D$. The value of p that makes these two expected payoffs equal is the risk factor for the equilibrium (H, H), and $1-p$ is the risk factor for (G, G). Equivalently, the risk factor for (G, G) can be computed by repeating the calculation with p as the probability that the other player plays G. The risk factor of an equilibrium is thus the smallest probability with which the opponent must play the corresponding strategy for a player's expected payoff from that strategy to be at least as large as from the other strategy.
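The risk factor calculation can be sketched the same way (an illustrative helper, using the Figure 1 payoffs):

```python
# Risk factor of (H, H): the smallest probability p of the opponent playing H
# at which playing H is at least as good as playing G, found by solving
# p*A + (1-p)*C = p*B + (1-p)*D for p.

def risk_factor_HH(A, B, C, D):
    return (D - C) / ((A - B) + (D - C))

# Stag hunt of Figure 1.
rf_hunt = risk_factor_HH(5, 4, 0, 2)  # 2/3: opponent must hunt with probability >= 2/3
rf_gather = 1 - rf_hunt               # 1/3: the risk factor of (Gather, Gather)
print(rf_hunt, rf_gather)
# (Gather, Gather) has the smaller risk factor, so it is the risk dominant equilibrium.
```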

## Equilibrium selection

A number of evolutionary approaches have established that when played in a large population, players might fail to play the payoff dominant equilibrium strategy and instead end up in the payoff dominated, risk dominant equilibrium. Two separate evolutionary models both support the idea that the risk dominant equilibrium is more likely to occur. The first model, based on replicator dynamics, predicts that a population is more likely to adopt the risk dominant equilibrium than the payoff dominant one. The second model, based on best response strategy revision and mutation, predicts that the risk dominant state is the only stochastically stable equilibrium. Both models assume that multiple two-player games are played in a population of N players. The players are matched randomly with opponents, with each player having equal likelihood of drawing any of the N − 1 other players. The players start with a pure strategy, G or H, and play this strategy against their opponent. In replicator dynamics, the population game is repeated in sequential generations where subpopulations change based on the success of their chosen strategies. In best response, players update their strategies to improve expected payoffs in the subsequent generations. The insight of Kandori, Mailath & Rob (1993) and Young (1993) was that if the rule to update one's strategy allows for mutation,[4] and the probability of mutation vanishes, i.e. asymptotically reaches zero over time, the likelihood that the risk dominant equilibrium is reached goes to one, even if it is payoff dominated.[3]
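The best-response-with-mutation idea can be illustrated with a toy simulation. This is only a sketch of the mechanism, not the exact Kandori–Mailath–Rob model (all parameter values and names here are illustrative): in each period one randomly chosen player either mutates with probability eps or plays a myopic best response to the current mix of the other N − 1 players.

```python
import random

# Toy best-response dynamics with mutation in the stag hunt of Figure 1.
# Symmetric payoffs (row player; H = Hunt, G = Gather).
A, B, C, D = 5, 4, 0, 2

def simulate(N=5, eps=0.1, periods=100_000, start="H", seed=0):
    """Return the long-run average fraction of players choosing G."""
    rng = random.Random(seed)
    strategies = [start] * N
    total_G = 0
    for _ in range(periods):
        i = rng.randrange(N)
        if rng.random() < eps:                 # mutation: pick a strategy at random
            strategies[i] = rng.choice(["H", "G"])
        else:                                  # myopic best response to the N - 1 others
            n_H = sum(s == "H" for s in strategies) - (strategies[i] == "H")
            p = n_H / (N - 1)                  # chance a random opponent plays H
            strategies[i] = "H" if p * A + (1 - p) * C > p * B + (1 - p) * D else "G"
        total_G += strategies.count("G")
    return total_G / (periods * N)

# Play starts at the payoff dominant profile (everyone hunting), yet mutations
# tend to tip the population into the larger basin of the risk dominant profile.
print(simulate())
```

With no mutation (eps = 0) the population simply stays at whichever equilibrium it starts in; with a small positive mutation rate it tends to spend most of its time at the risk dominant profile, and in the cited results this selection becomes exact as the mutation rate vanishes.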

• [1] A single Nash equilibrium is trivially payoff and risk dominant if it is the only NE in the game.
• [2] Similar distinctions between strict and weak exist for most definitions here, but are not denoted explicitly unless necessary.
• [3] Harsanyi and Selten (1988) propose that the payoff dominant equilibrium is the rational choice in the stag hunt game; however, Harsanyi (1995) retracted this conclusion and took risk dominance as the relevant selection criterion.