Strategic dominance

Dominant strategy
Solution concept in game theory
Relationship: subset of Strategy (game theory); superset of Rationalizable strategy
Significance: used for the Prisoner's dilemma

In game theory, a dominant strategy is a strategy that is better for a player than every other available strategy, no matter how that player's opponent or opponents play. Strategies that are dominated by another strategy can be eliminated from consideration, since a rational player can always do at least as well, and in some cases strictly better, by playing the dominating strategy instead. Some very simple games can be solved completely using dominance.

Terminology

A player can compare two strategies, A and B, to determine which one is better. The result of the comparison is one of:

- B is equivalent to A: choosing B always gives the same outcome as choosing A, no matter what the other players do.
- B strictly dominates A: choosing B always gives a better outcome than choosing A, no matter what the other players do.
- B weakly dominates A: choosing B always gives at least as good an outcome as choosing A, no matter what the other players do, and there is at least one profile of the other players' actions for which B gives a strictly better outcome.
- B and A are intransitive: neither strategy dominates the other; B is better against some plays of the opponents and A is better against others.
- B is weakly dominated by A: A weakly dominates B.
- B is strictly dominated by A: A strictly dominates B.

This notion can be generalized beyond the comparison of two strategies: a strategy is strictly (weakly) dominant if it strictly (weakly) dominates every other strategy available to the player, and strictly (weakly) dominated if some other strategy strictly (weakly) dominates it.
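As a concrete illustration of these comparisons, the sketch below classifies one strategy against another from a single player's payoff table. The data layout, function name, and example payoffs are illustrative assumptions, not taken from the cited sources.

```python
# Minimal sketch: classify strategy A against strategy B for one player.
# payoff[s][profile] is assumed to hold this player's payoff when they play
# strategy s and the other players jointly play `profile` (hypothetical layout).

def compare(payoff, a, b):
    profiles = list(payoff[a].keys())
    a_vals = [payoff[a][p] for p in profiles]
    b_vals = [payoff[b][p] for p in profiles]
    if all(x == y for x, y in zip(a_vals, b_vals)):
        return "A and B are equivalent"
    if all(x > y for x, y in zip(a_vals, b_vals)):
        return "A strictly dominates B"
    if all(x >= y for x, y in zip(a_vals, b_vals)):
        return "A weakly dominates B"
    if all(x < y for x, y in zip(a_vals, b_vals)):
        return "A is strictly dominated by B"
    if all(x <= y for x, y in zip(a_vals, b_vals)):
        return "A is weakly dominated by B"
    return "A and B are intransitive (neither dominates the other)"

# Example: the row player's payoffs from the 2x2 matrix discussed later.
row_payoffs = {"C": {"C": 1, "D": 0}, "D": {"C": 0, "D": 0}}
print(compare(row_payoffs, "C", "D"))  # -> A weakly dominates B
```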

Strategy: A complete contingent plan for a player in the game. A complete contingent plan is a full specification of a player's behavior, describing each action a player would take at every possible decision point. Because information sets represent points in a game where a player must make a decision, a player's strategy describes what that player will do at each information set. [2]

Rationality: The assumption that each player acts in a way that is designed to bring about what he or she most prefers given probabilities of various outcomes; von Neumann and Morgenstern showed that if these preferences satisfy certain conditions, this is mathematically equivalent to maximizing a payoff. A straightforward example of maximizing payoff is that of monetary gain, but for the purpose of a game theory analysis, this payoff can take any desired outcome—cash reward, minimization of exertion or discomfort, or promoting justice can all be modeled as amassing an overall “utility” for the player. The assumption of rationality states that players will always act in the way that best satisfies their ordering from best to worst of various possible outcomes. [2]

Common Knowledge: The assumption that each player has knowledge of the game, knows the rules and payoffs associated with each course of action, and realizes that every other player has this same level of understanding. This is the premise that allows a player to take the likely actions of the other players, backed by the assumption of rationality, into consideration when selecting an action. [2]

Dominance and Nash equilibria

        C       D
C     1, 1    0, 0
D     0, 0    0, 0

If a strictly dominant strategy exists for one player in a game, that player will play that strategy in each of the game's Nash equilibria. If all players have a strictly dominant strategy, the game has a unique Nash equilibrium, referred to as a "dominant strategy equilibrium". However, that Nash equilibrium is not necessarily "efficient", meaning that there may be non-equilibrium outcomes of the game that would be better for both players. The classic game used to illustrate this is the Prisoner's Dilemma.
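For concreteness, here is the Prisoner's Dilemma with one common, purely illustrative choice of payoffs (the row player's payoff is listed first):

                Cooperate    Defect
Cooperate         3, 3        0, 5
Defect            5, 0        1, 1

For each player, Defect strictly dominates Cooperate (5 > 3 against a cooperator and 1 > 0 against a defector), so (Defect, Defect) is the unique dominant strategy equilibrium. It yields payoffs (1, 1) even though the non-equilibrium outcome (Cooperate, Cooperate) would give both players 3.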

Strictly dominated strategies cannot be a part of a Nash equilibrium, and as such, it is irrational for any player to play them. On the other hand, weakly dominated strategies may be part of Nash equilibria. For instance, consider the payoff matrix shown above.

Strategy C weakly dominates strategy D. Consider playing C: if one's opponent plays C, one gets 1; if one's opponent plays D, one gets 0. Compare this to D, where one gets 0 regardless. Since in one case one does better by playing C instead of D and never does worse, C weakly dominates D. Despite this, (D, D) is a Nash equilibrium. Suppose both players choose D. Neither player will do any better by unilaterally deviating: if a player switches to playing C, they will still get 0. This satisfies the requirements of a Nash equilibrium. Suppose both players choose C. Neither player will do better by unilaterally deviating: if a player switches to playing D, they will get 0 instead of 1. This also satisfies the requirements of a Nash equilibrium.
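The same check can be carried out mechanically. The short sketch below, with illustrative names and data layout, enumerates the pure strategy Nash equilibria of the matrix above by testing every cell for a profitable unilateral deviation.

```python
# Sketch: enumerate pure-strategy Nash equilibria of the 2x2 matrix above.
# payoffs[(r, c)] = (row player's payoff, column player's payoff).
from itertools import product

payoffs = {("C", "C"): (1, 1), ("C", "D"): (0, 0),
           ("D", "C"): (0, 0), ("D", "D"): (0, 0)}
strategies = ["C", "D"]

def is_nash(r, c):
    # No unilateral deviation may strictly improve the deviator's payoff.
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in strategies)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in strategies)
    return row_ok and col_ok

print([p for p in product(strategies, strategies) if is_nash(*p)])
# -> [('C', 'C'), ('D', 'D')]: the weakly dominated strategy D still
#    appears in an equilibrium.
```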

Iterated elimination of strictly dominated strategies

The iterated elimination (or deletion, or removal) of dominated strategies (also known as IESDS, IDSDS, or IRSDS) is one common technique for solving games that involves iteratively removing dominated strategies. In the first step, all dominated strategies are removed from the strategy space of each of the players, since no rational player would ever play these strategies. This results in a new, smaller game. Some strategies that were not dominated before may be dominated in the smaller game. The first step is repeated, creating a new, even smaller game, and so on.

This process is valid since it is assumed that rationality among players is common knowledge, that is, each player knows that the rest of the players are rational, and each player knows that the rest of the players know that he knows that the rest of the players are rational, and so on ad infinitum (see Aumann, 1976).
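A minimal sketch of this procedure for pure strategies in a two-player normal-form game is shown below. The helper name, data layout, and the example game's payoffs are illustrative assumptions rather than material from the references.

```python
def iesds(row_strats, col_strats, payoff):
    """Iteratively delete strictly dominated pure strategies.

    payoff[(r, c)] = (row player's payoff, column player's payoff).
    Returns the strategies that survive for each player.
    """
    rows, cols = list(row_strats), list(col_strats)
    changed = True
    while changed:
        changed = False
        # Remove any row strategy strictly dominated by another surviving row strategy.
        for r in rows[:]:
            if any(all(payoff[(r2, c)][0] > payoff[(r, c)][0] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        # Remove any column strategy strictly dominated by another surviving column strategy.
        for c in cols[:]:
            if any(all(payoff[(r, c2)][1] > payoff[(r, c)][1] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

# Illustrative 2x3 game: Right is strictly dominated by Middle; once Right is
# gone, Down is dominated by Up; once Down is gone, Left is dominated by Middle.
payoff = {("Up", "Left"): (1, 0), ("Up", "Middle"): (1, 2), ("Up", "Right"): (0, 1),
          ("Down", "Left"): (0, 3), ("Down", "Middle"): (0, 1), ("Down", "Right"): (2, 0)}
print(iesds(["Up", "Down"], ["Left", "Middle", "Right"], payoff))  # -> (['Up'], ['Middle'])
```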

Related Research Articles

An evolutionarily stable strategy (ESS) is a strategy that is impermeable when adopted by a population in adaptation to a specific environment, that is to say it cannot be displaced by an alternative strategy which may be novel or initially rare. Introduced by John Maynard Smith and George R. Price in 1972/3, it is an important concept in behavioural ecology, evolutionary psychology, mathematical game theory and economics, with applications in other fields such as anthropology, philosophy and political science.

In game theory, the Nash equilibrium is the most commonly used solution concept for non-cooperative games. A Nash equilibrium is a situation where no player could gain by unilaterally changing their own strategy while the other players' strategies stay fixed. The idea of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to his model of competition in an oligopoly.

The game of chicken, also known as the hawk-dove game or snowdrift game, is a model of conflict for two players in game theory. The principle of the game is that while the ideal outcome is for one player to yield, individuals try to avoid it out of pride, not wanting to look like "chickens." Each player taunts the other to increase the risk of shame in yielding. However, when one player yields, the conflict is avoided, and the game essentially ends.

In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.

A coordination game is a type of simultaneous game found in game theory. It describes the situation where a player will earn a higher payoff when they select the same course of action as another player. The game is not one of pure conflict, which results in multiple pure strategy Nash equilibria in which players choose matching strategies.

In game theory, a move, action, or play is any one of the options which a player can choose in a setting where the optimal outcome depends not only on their own actions but on the actions of others. The discipline mainly concerns the action of a player in a game affecting the behavior or actions of other players. Some examples of "games" include chess, bridge, poker, Monopoly, Diplomacy, and Battleship.

Game theory is the branch of mathematics in which games are studied: that is, models describing human behaviour. This is a glossary of some terms of the subject.

In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

In game theory, an extensive-form game is a specification of a game allowing for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature". Extensive-form representations differ from normal-form in that they provide a more complete description of the game in question, whereas normal-form simply boils down the game into a payoff matrix.

In game theory, a Bayesian game is a strategic decision-making model which assumes players have incomplete information. Players may hold private information relevant to the game, meaning that the payoffs are not common knowledge. Bayesian games model the outcome of player interactions using aspects of Bayesian probability. They are notable because they allowed, for the first time in game theory, for the specification of the solutions to games with incomplete information.

In game theory, normal form is a description of a game. Unlike extensive form, normal-form representations are not graphical per se, but rather represent the game by way of a matrix. While this approach can be of greater use in identifying strictly dominated strategies and Nash equilibria, some information is lost as compared to extensive-form representations. The normal-form representation of a game includes all perceptible and conceivable strategies, and their corresponding payoffs, for each player.

Rationalizability is a solution concept in game theory. It is the most permissive possible solution concept that still requires both players to be at least somewhat rational and know the other players are also somewhat rational, i.e. that they do not play dominated strategies. A strategy is rationalizable if there exists some possible set of beliefs both players could have about each other's actions that would still result in the strategy being played.

In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

In game theory, a correlated equilibrium is a solution concept that is more general than the well known Nash equilibrium. It was first discussed by mathematician Robert Aumann in 1974. The idea is that each player chooses their action according to their private observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from their strategy, the distribution from which the signals are drawn is called a correlated equilibrium.

In game theory, the outcome of a game is the ultimate result of a strategic interaction with one or more people, dependent on the choices made by all participants in a certain exchange. It represents the final payoff resulting from a set of actions that individuals can take within the context of the game. Outcomes are pivotal in determining the payoffs and expected utility for parties involved. Game theorists commonly study how the outcome of a game is determined and what factors affect it.

In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that at any point in the game, the players' behavior from that point onward should represent a Nash equilibrium of the continuation game, no matter what happened before. Every finite extensive game with perfect recall has a subgame perfect equilibrium. Perfect recall is a term introduced by Harold W. Kuhn in 1953 and "equivalent to the assertion that each player is allowed by the rules of the game to remember everything he knew at previous moves and all of his choices at those moves".

Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.

In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.

In game theory, the traveler's dilemma is a non-zero-sum game in which each player proposes a payoff. The lower of the two proposals wins; the lowball player receives the lowball payoff plus a small bonus, and the highball player receives the same lowball payoff, minus a small penalty. Surprisingly, the Nash equilibrium is for both players to aggressively lowball. The traveler's dilemma is notable in that naive play appears to outperform the Nash equilibrium; this apparent paradox also appears in the centipede game and the finitely-iterated prisoner's dilemma.

In game theory, a simultaneous game or static game is a game where each player chooses their action without knowledge of the actions chosen by other players. Simultaneous games contrast with sequential games, which are played by the players taking turns. In other words, both players normally act at the same time in a simultaneous game. Even if the players do not act at the same time, both players are uninformed of each other's move while making their decisions. Normal form representations are usually used for simultaneous games. Given a continuous game, players will have different information sets if the game is simultaneous than if it is sequential because they have less information to act on at each step in the game. For example, in a two player continuous game that is sequential, the second player can act in response to the action taken by the first player. However, this is not possible in a simultaneous game where both players act at the same time.

References

  1. Leyton-Brown, Kevin; Shoham, Yoav (January 2008). "Essentials of Game Theory: A Concise Multidisciplinary Introduction". Synthesis Lectures on Artificial Intelligence and Machine Learning. 2 (1): 36. doi:10.2200/S00108ED1V01Y200802AIM003.
  2. Watson, Joel (2013). Strategy: An Introduction to Game Theory (Third ed.). New York: W. W. Norton & Company. ISBN 9780393918380. OCLC 842323069.
This article incorporates material from Dominant strategy on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.