Strategic dominance

Dominant strategy
A solution concept in game theory
Relationship:
  Subset of: Strategy (game theory)
  Superset of: Rationalizable strategy
Significance:
  Used for: Prisoner's dilemma

In game theory, a dominant strategy is a strategy that is better than every other strategy available to a player, no matter how that player's opponents play. Some very simple games can be solved using dominance. The opposite case, intransitivity, occurs in games where one strategy may be better or worse than another strategy for a player, depending on how that player's opponents play.

Terminology

When a player tries to choose the "best" strategy among a multitude of options, that player may compare two strategies A and B to see which one is better. The result of the comparison is one of:

- B is equivalent to A: choosing B always gives the same outcome as choosing A, no matter what the other players do.
- B strictly dominates A: choosing B always gives a better outcome than choosing A, no matter what the other players do.
- B weakly dominates A: choosing B always gives at least as good an outcome as choosing A, no matter what the other players do, and there is at least one set of opponents' actions for which B gives a better outcome than A.
- B and A are intransitive: B and A are not equivalent, and B neither dominates, nor is dominated by, A.
- B is weakly dominated by A: there is at least one set of opponents' actions for which B gives a worse outcome than A, while every other set of opponents' actions gives A at least as good an outcome as B.
- B is strictly dominated by A: choosing B always gives a worse outcome than choosing A, no matter what the other players do.

This notion can be generalized beyond the comparison of two strategies.
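
The comparison can be made mechanical. The sketch below is not part of the original article; the function name and the payoff-vector representation are illustrative assumptions. It classifies how one strategy relates to another for a single player, given that player's payoffs against each possible opponent action.

```python
# Illustrative sketch: classify how strategy B relates to strategy A for one player.
# payoffs_a[j] / payoffs_b[j] are the player's payoffs from A / B when the
# opponent plays their j-th action.

def compare(payoffs_a, payoffs_b):
    if all(b == a for a, b in zip(payoffs_a, payoffs_b)):
        return "B is equivalent to A"
    if all(b > a for a, b in zip(payoffs_a, payoffs_b)):
        return "B strictly dominates A"
    if all(b >= a for a, b in zip(payoffs_a, payoffs_b)):
        return "B weakly dominates A"
    if all(a > b for a, b in zip(payoffs_a, payoffs_b)):
        return "B is strictly dominated by A"
    if all(a >= b for a, b in zip(payoffs_a, payoffs_b)):
        return "B is weakly dominated by A"
    return "A and B are intransitive (neither dominates the other)"

# Example: against the opponent's two possible actions, B pays (3, 2) and A pays (1, 2).
print(compare(payoffs_a=[1, 2], payoffs_b=[3, 2]))  # "B weakly dominates A"
```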

Strategy: A complete contingent plan for a player in the game. A complete contingent plan is a full specification of a player's behavior, describing each action a player would take at every possible decision point. Because information sets represent points in a game where a player must make a decision, a player's strategy describes what that player will do at each information set. [2]

Rationality: The assumption that each player acts in the way that best brings about what he or she most prefers, given the probabilities of various outcomes; von Neumann and Morgenstern showed that if these preferences satisfy certain conditions, this is mathematically equivalent to maximizing a payoff. A straightforward example of maximizing payoff is monetary gain, but for the purpose of a game theory analysis this payoff can represent any desired outcome: a cash reward, minimization of exertion or discomfort, promoting justice, or amassing overall "utility". The assumption of rationality states that players will always act in the way that best satisfies their ordering, from best to worst, of the various possible outcomes. [2]
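
Formally, under the von Neumann-Morgenstern framework this amounts to expected-utility maximization. The formula below uses standard notation that is assumed rather than taken from the article: S_i is player i's strategy set, u_i the payoff function, and p(s_{-i}) the player's beliefs about the other players' strategy profile.

```latex
% Standard expected-utility formulation (notation assumed, not from the article):
% player i chooses a strategy that maximizes expected payoff given beliefs p.
\[
  s_i^{*} \in \arg\max_{s_i \in S_i} \; \sum_{s_{-i} \in S_{-i}} p(s_{-i}) \, u_i(s_i, s_{-i})
\]
```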

Common Knowledge: The assumption that each player has knowledge of the game, knows the rules and payoffs associated with each course of action, and realizes that every other player has this same level of understanding. This is the premise that allows a player to make a value judgment about the actions of another player, backed by the assumption of rationality, and to take it into consideration when selecting an action. [2]

Dominance and Nash equilibria

        C       D
C     1, 1    0, 0
D     0, 0    0, 0

If a strictly dominant strategy exists for one player in a game, that player will play that strategy in each of the game's Nash equilibria. If both players have a strictly dominant strategy, the game has a unique Nash equilibrium, referred to as a "dominant strategy equilibrium". However, that Nash equilibrium is not necessarily "efficient", meaning that there may be non-equilibrium outcomes of the game that would be better for both players. The classic game used to illustrate this is the Prisoner's Dilemma.
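
As a concrete illustration, the sketch below uses conventional textbook Prisoner's Dilemma payoffs (the specific numbers are an assumption, not taken from this article) to show that defection strictly dominates cooperation for each player, even though mutual cooperation would leave both better off.

```python
# Prisoner's Dilemma with conventional textbook payoffs (illustrative assumption).
# Payoffs are (row player, column player); C = cooperate, D = defect.
pd = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# For the row player, D strictly dominates C against every column action.
for col in ("C", "D"):
    assert pd[("D", col)][0] > pd[("C", col)][0]

# By symmetry the same holds for the column player, so (D, D) is the unique
# dominant strategy equilibrium, yet (C, C) gives both players a higher payoff.
print(pd[("D", "D")], "vs.", pd[("C", "C")])  # (1, 1) vs. (3, 3)
```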

Strictly dominated strategies cannot be a part of a Nash equilibrium, and as such, it is irrational for any player to play them. On the other hand, weakly dominated strategies may be part of Nash equilibria. For instance, consider the payoff matrix shown above.

Strategy C weakly dominates strategy D. Consider playing C: if one's opponent plays C, one gets 1; if one's opponent plays D, one gets 0. Compare this to D, where one gets 0 regardless. Since in one case one does better by playing C instead of D and never does worse, C weakly dominates D. Despite this, (D, D) is a Nash equilibrium. Suppose both players choose D. Neither player will do any better by unilaterally deviating: if a player switches to playing C, they will still get 0. This satisfies the requirements of a Nash equilibrium. Suppose both players choose C. Neither player will do better by unilaterally deviating: if a player switches to playing D, they will get 0. This also satisfies the requirements of a Nash equilibrium.
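
The following sketch (an illustration, not part of the original article) enumerates the pure-strategy Nash equilibria of the matrix above and confirms that the weakly dominated strategy D still appears in one of them.

```python
# Enumerate pure-strategy Nash equilibria of the weak-dominance example above.
game = {
    ("C", "C"): (1, 1),
    ("C", "D"): (0, 0),
    ("D", "C"): (0, 0),
    ("D", "D"): (0, 0),
}
actions = ("C", "D")

def is_nash(row, col):
    """True if no unilateral deviation improves the deviator's payoff."""
    row_ok = all(game[(r, col)][0] <= game[(row, col)][0] for r in actions)
    col_ok = all(game[(row, c)][1] <= game[(row, col)][1] for c in actions)
    return row_ok and col_ok

print([cell for cell in game if is_nash(*cell)])
# [('C', 'C'), ('D', 'D')]: the weakly dominated strategy D is still part of
# the (D, D) equilibrium.
```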

Iterated elimination of strictly dominated strategies

The iterated elimination (or deletion, or removal) of dominated strategies (also known as IESDS, IDSDS, or IRSDS) is one common technique for solving games that involves iteratively removing dominated strategies. In the first step, at most one dominated strategy is removed from the strategy space of each of the players, since no rational player would ever play such a strategy. This results in a new, smaller game. Some strategies that were not dominated before may be dominated in the smaller game. The first step is repeated, creating a new, even smaller game, and so on. The process stops when no dominated strategy is found for any player. This process is valid because it is assumed that rationality among players is common knowledge: each player knows that the rest of the players are rational, each player knows that the rest of the players know that all players are rational, and so on ad infinitum (see Aumann, 1976).
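
A minimal sketch of the procedure for a two-player normal-form game is given below; the data representation and helper names are illustrative assumptions, not notation from the article.

```python
# Iterated elimination of strictly dominated strategies (two-player, pure strategies).

def strictly_dominated(player, action, rows, cols, payoffs):
    """True if some other surviving strategy of `player` beats `action`
    against every surviving strategy of the opponent."""
    own = rows if player == 0 else cols
    for alt in own:
        if alt == action:
            continue
        if player == 0:
            better = all(payoffs[(alt, c)][0] > payoffs[(action, c)][0] for c in cols)
        else:
            better = all(payoffs[(r, alt)][1] > payoffs[(r, action)][1] for r in rows)
        if better:
            return True
    return False

def iesds(rows, cols, payoffs):
    rows, cols = list(rows), list(cols)
    changed = True
    while changed:
        changed = False
        for r in list(rows):
            if len(rows) > 1 and strictly_dominated(0, r, rows, cols, payoffs):
                rows.remove(r)
                changed = True
        for c in list(cols):
            if len(cols) > 1 and strictly_dominated(1, c, rows, cols, payoffs):
                cols.remove(c)
                changed = True
    return rows, cols

# Prisoner's-Dilemma-style example: iterated elimination leaves only (D, D).
payoffs = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
print(iesds(["C", "D"], ["C", "D"], payoffs))  # (['D'], ['D'])
```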

Related Research Articles

An evolutionarily stable strategy (ESS) is a strategy that is impermeable when adopted by a population in adaptation to a specific environment, that is to say it cannot be displaced by an alternative strategy which may be novel or initially rare. Introduced by John Maynard Smith and George R. Price in 1972/3, it is an important concept in behavioural ecology, evolutionary psychology, mathematical game theory and economics, with applications in other fields such as anthropology, philosophy and political science.

The prisoner's dilemma is a game theory thought experiment that involves two rational agents, each of whom can cooperate for mutual benefit or betray their partner ("defect") for individual reward. This dilemma was originally framed by Merrill Flood and Melvin Dresher in 1950 while they worked at the RAND Corporation. Albert W. Tucker later formalized the game by structuring the rewards in terms of prison sentences and named it the "prisoner's dilemma".

In game theory, the Nash equilibrium is the most commonly used solution concept for non-cooperative games. A Nash equilibrium is a situation where no player could gain by changing their own strategy while the other players keep theirs unchanged. The idea of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to his model of competition in an oligopoly.

The game of chicken, also known as the hawk-dove game or snowdrift game, is a model of conflict for two players in game theory. The principle of the game is that while the ideal outcome is for one player to yield, individuals try to avoid it out of pride, not wanting to look like "chickens." Each player taunts the other to increase the risk of shame in yielding. However, when one player yields, the conflict is avoided, and the game essentially ends.

In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.
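
In standard notation (assumed here, not given in the article), player i's best-response correspondence and its link to Nash equilibrium can be written as:

```latex
% Standard formal statement (notation assumed): a Nash equilibrium is a profile
% in which every player's strategy is a best response to the others'.
\[
  BR_i(s_{-i}) = \arg\max_{s_i \in S_i} u_i(s_i, s_{-i}),
  \qquad
  s^{*} \text{ is a Nash equilibrium} \iff s_i^{*} \in BR_i(s_{-i}^{*}) \ \text{for all } i.
\]
```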

A coordination game is a type of simultaneous game found in game theory. It describes a situation where a player earns a higher payoff when they select the same course of action as another player. The game is not one of pure conflict, so it has multiple pure-strategy Nash equilibria in which players choose matching strategies.

In game theory, a player's strategy is any of the options which they choose in a setting where the optimal outcome depends not only on their own actions but on the actions of others. The discipline mainly concerns how the action of a player in a game affects the behavior or actions of other players. Some examples of "games" include chess, bridge, poker, Monopoly, Diplomacy, and Battleship. A player's strategy will determine the action the player will take at any stage of the game. In studying game theory, economists analyze decisions through a rational lens rather than through the psychological or sociological perspectives used in other disciplines to analyze the relationships between the decisions of two or more parties.

Game theory is the branch of mathematics in which games are studied: that is, models describing human behaviour. This is a glossary of some terms of the subject.

In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

In game theory, an extensive-form game is a specification of a game allowing for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature". Extensive-form representations differ from normal-form in that they provide a more complete description of the game in question, whereas normal-form simply boils down the game into a payoff matrix.

In game theory, a Bayesian game is a strategic decision-making model which assumes players have incomplete information. Players hold private information relevant to the game, meaning that the payoffs are not common knowledge. Bayesian games model the outcome of player interactions using aspects of Bayesian probability. They are notable because they allowed, for the first time in game theory, for the specification of the solutions to games with incomplete information.

In game theory, normal form is a description of a game. Unlike extensive form, normal-form representations are not graphical per se, but rather represent the game by way of a matrix. While this approach can be of greater use in identifying strictly dominated strategies and Nash equilibria, some information is lost as compared to extensive-form representations. The normal-form representation of a game includes all perceptible and conceivable strategies, and their corresponding payoffs, for each player.

Rationalizability is a solution concept in game theory. It is the most permissive possible solution concept that still requires both players to be at least somewhat rational and know the other players are also rational, i.e. that they do not play dominated strategies. A strategy is rationalizable if there exists some possible set of beliefs both players could have about each other's actions, that would still result in the strategy being played.

In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

In game theory, a correlated equilibrium is a solution concept that is more general than the well known Nash equilibrium. It was first discussed by mathematician Robert Aumann in 1974. The idea is that each player chooses their action according to their private observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from their strategy, the distribution from which the signals are drawn is called a correlated equilibrium.

In game theory, the outcome of a game is the ultimate result of a strategic interaction with one or more people, dependent on the choices made by all participants in a certain exchange. It represents the final payoff resulting from a set of actions that individuals can take within the context of the game. Outcomes are pivotal in determining the payoffs and expected utility for the parties involved. Game theorists commonly study how the outcome of a game is determined and what factors affect it.

Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.

In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.

In game theory, the traveler's dilemma is a non-zero-sum game in which each player proposes a payoff. The lower of the two proposals wins; the lowball player receives the lowball payoff plus a small bonus, and the highball player receives the same lowball payoff, minus a small penalty. Surprisingly, the Nash equilibrium is for both players to aggressively lowball. The traveler's dilemma is notable in that naive play appears to outperform the Nash equilibrium; this apparent paradox also appears in the centipede game and the finitely-iterated prisoner's dilemma.

In game theory, a simultaneous game or static game is a game where each player chooses their action without knowledge of the actions chosen by other players. Simultaneous games contrast with sequential games, which are played by the players taking turns. In other words, both players normally act at the same time in a simultaneous game. Even if the players do not act at the same time, both players are uninformed of each other's move while making their decisions. Normal form representations are usually used for simultaneous games. Given a continuous game, players will have different information sets if the game is simultaneous than if it is sequential because they have less information to act on at each step in the game. For example, in a two player continuous game that is sequential, the second player can act in response to the action taken by the first player. However, this is not possible in a simultaneous game where both players act at the same time.

References

  1. Leyton-Brown, Kevin; Shoham, Yoav (January 2008). "Essentials of Game Theory: A Concise Multidisciplinary Introduction". Synthesis Lectures on Artificial Intelligence and Machine Learning. 2 (1): 36. doi:10.2200/S00108ED1V01Y200802AIM003.
  2. Watson, Joel (2013). Strategy: An Introduction to Game Theory (Third ed.). New York: W. W. Norton & Company. ISBN 9780393918380. OCLC 842323069.
This article incorporates material from Dominant strategy on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.