Simultaneous game

Rock–paper–scissors is an example of a simultaneous game.

In game theory, a simultaneous game or static game [1] is a game where each player chooses their action without knowledge of the actions chosen by the other players. [2] Simultaneous games contrast with sequential games, which are played by the players taking turns (moves alternate between players). In other words, the players of a simultaneous game normally act at the same time; even if they do not, each player remains uninformed of the others' moves while making their own decision. [3] Normal form representations are usually used for simultaneous games. [4] Given a continuous game, players will have different information sets if the game is simultaneous than if it is sequential, because they have less information to act on at each step of the game. For example, in a two-player continuous game that is sequential, the second player can act in response to the action taken by the first player; this is not possible in a simultaneous game, where both players act at the same time.


Characteristics

In sequential games, players observe what rivals have done in the past and there is a specific order of play. [5] However, in simultaneous games, all players select strategies without observing the choices of their rivals and players choose at the exact same time. [5]

A simple example is rock-paper-scissors, in which all players make their choice at the exact same time. However, moving at exactly the same time is not always taken literally; rather, players may move without being able to see the choices of other players. [5] A simple example is an election, in which not all voters vote literally at the same time, but each voter votes without knowing what anyone else has chosen.


Given that decision makers are rational, it is natural to require that outcomes be individually rational. An outcome is individually rational if it yields each player at least their security level. [6] The security level of player i is the amount max_{s_i} min_{s_{-i}} H_i(s_i, s_{-i}), that is, the payoff the player can guarantee themselves unilaterally, without considering the actions of the other players.

Representation

In a simultaneous game, players make their moves simultaneously; the combination of moves determines the outcome of the game and the payoff each player receives.

The most common representation of a simultaneous game is normal form (matrix form). For a two-player game, one player selects a row and the other player selects a column at the exact same time. Traditionally, within a cell, the first entry is the payoff of the row player and the second entry is the payoff of the column player. The cell selected by the two choices is the outcome of the game. [4] To analyse which cell will be chosen, the payoffs of the row player and the column player are compared cell by cell; each player is best off where their own payoff is higher.

Rock–paper–scissors, a widely played hand game, is an example of a simultaneous game. Both players make a decision without knowledge of the opponent's decision, and reveal their hands at the same time. There are two players in this game and each of them has three different strategies; the combination of strategy profiles (a complete set of each player's possible strategies) forms a 3×3 table. Player 1's strategies are displayed as rows and Player 2's strategies as columns. In each cell, the first number is the payoff to Player 1 and the second number is the payoff to Player 2. Hence, the payoff matrix for two-player rock-paper-scissors looks like this: [4]

                        Player 2
Player 1      Rock       Paper      Scissors
Rock          0, 0       −1, 1      1, −1
Paper         1, −1      0, 0       −1, 1
Scissors      −1, 1      1, −1      0, 0
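The normal form above can be written down directly in code. The following is a minimal sketch (Python, not taken from the cited sources) that stores the matrix as a lookup table keyed by strategy profiles; the names MOVES, PAYOFFS and outcome are illustrative choices rather than standard terminology.

```python
# Rock-paper-scissors in normal form: each cell of the matrix above becomes one
# entry mapping a strategy profile to (payoff to Player 1, payoff to Player 2).
MOVES = ["Rock", "Paper", "Scissors"]

PAYOFFS = {
    ("Rock", "Rock"): (0, 0),       ("Rock", "Paper"): (-1, 1),      ("Rock", "Scissors"): (1, -1),
    ("Paper", "Rock"): (1, -1),     ("Paper", "Paper"): (0, 0),      ("Paper", "Scissors"): (-1, 1),
    ("Scissors", "Rock"): (-1, 1),  ("Scissors", "Paper"): (1, -1),  ("Scissors", "Scissors"): (0, 0),
}

def outcome(move1: str, move2: str) -> tuple[int, int]:
    """Both moves are chosen simultaneously; together they select one cell."""
    return PAYOFFS[(move1, move2)]

print(outcome("Paper", "Rock"))  # (1, -1): Paper beats Rock, so Player 1 wins
```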

Another common representation of a simultaneous game is extensive form (game tree), in which information sets are used to emphasize the imperfect information. Although it is not as simple as the matrix form, a game tree is easier to use for games with more than two players. [4]

Even though simultaneous games are typically represented in normal form, they can be represented in extensive form too. In extensive form, one player's decision must be drawn before that of the other, so by definition such a representation does not correspond to the real-life timing of the players' decisions in a simultaneous game. The key to modeling simultaneous games in extensive form is to get the information sets right. A dashed line between nodes in the extensive form representation of a game represents informational asymmetry and specifies that, during the game, a party cannot distinguish between the nodes, [7] because that party is unaware of the other party's decision (by definition of a simultaneous game).

The simultaneous game of rock–paper–scissors modeled in extensive form
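In code, the information-set constraint can be made explicit: the second mover's strategy is only allowed to depend on the information set, not on the node actually reached. The sketch below (Python) is illustrative and not taken from the cited sources; the payoff numbers, the moves "L"/"R" and the label "after_p1" are invented for the example.

```python
# A two-player simultaneous game written "sequentially": Player 1 moves first in
# the tree, but Player 2's two decision nodes share one information set, so
# Player 2 cannot condition their move on what Player 1 actually did.
PAYOFFS = {("L", "L"): (2, 1), ("L", "R"): (0, 0),
           ("R", "L"): (0, 0), ("R", "R"): (1, 2)}  # illustrative numbers

def play(p1_move: str, p2_strategy: dict) -> tuple:
    # Player 2's strategy maps information sets to moves. Both nodes that follow
    # Player 1's move belong to the single set "after_p1", which is exactly what
    # the dashed line in the extensive form expresses.
    p2_move = p2_strategy["after_p1"]
    return PAYOFFS[(p1_move, p2_move)]

print(play("L", {"after_p1": "R"}))  # (0, 0)
```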

Some variants of chess that belong to this class of games include synchronous chess and parity chess. [8]

Bimatrix Game

In a simultaneous game, players only have one move and all players' moves are made simultaneously. The number of players in a game must be stipulated and all possible moves for each player must be listed. Each player may have different roles and options for moves. [9] However, each player has a finite number of options to choose from.

Two Players

An example of a simultaneous 2-player game:

A town has two companies, A and B, who currently make $8,000,000 each and need to determine whether they should advertise. The table below shows the payoff patterns; the rows are the options of A and the columns are the options of B. The entries are the payoffs for A and B respectively (in millions of dollars), separated by a comma. [9]

                        B advertises    B doesn't advertise
A advertises            2, 2            5, 1
A doesn't advertise     1, 5            8, 8

Two Players (zero sum)

A zero-sum game is one in which the payoffs sum to zero for any outcome, i.e. the losers pay for the winners' gains. For a zero-sum two-player game, the payoff of player A does not have to be displayed, since it is the negative of the payoff of player B. [9]

An example of a simultaneous zero-sum 2-player game:

Rock–paper–scissors is being played by two friends, A and B, for $10. The first cell stands for a payoff of 0 for both players. The second cell is a payoff of 10 for A, which has to be paid by B, and therefore a payoff of −10 for B.

B's move \ A's move    Rock     Paper    Scissors
Rock                   0        −10      10
Paper                  10       0        −10
Scissors               −10      10       0

(Each entry is the payoff to B; A's payoff is its negative.)
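Because the game is zero-sum, only the single matrix above needs to be stored; the other player's payoffs are recovered by negation. A minimal sketch (Python; the variable names are illustrative):

```python
# The single matrix of displayed payoffs from the table above
# (one row/column per move, in the order Rock, Paper, Scissors).
displayed = [
    [0, -10, 10],
    [10, 0, -10],
    [-10, 10, 0],
]

# In a zero-sum game the other player's payoff in every cell is the negative
# of the displayed payoff, so the full bimatrix never has to be stored.
other = [[-x for x in row] for row in displayed]

# Sanity check: every cell of the implied bimatrix sums to zero.
assert all(a + b == 0 for ra, rb in zip(displayed, other) for a, b in zip(ra, rb))
```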

Three or more Players

An example of a simultaneous 3-player game:

A classroom vote is held on whether or not the class should have an increased amount of free time. Player A selects the matrix, player B selects the row, and player C selects the column. [9] The payoffs, listed in the order A, B, C, are:

A votes for extra free time
                                   C votes for extra free time    C votes against extra free time
B votes for extra free time        1, 1, 1                        1, 1, 2
B votes against extra free time    1, 2, 1                        1, 0, 0

A votes against extra free time
                                   C votes for extra free time    C votes against extra free time
B votes for extra free time        2, 1, 1                        0, 1, 0
B votes against extra free time    0, 0, 1                        0, 0, 0
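With three players the payoffs can no longer be drawn as a single matrix; one natural encoding is a table indexed by the full strategy profile. A minimal sketch (Python, not from the cited source; payoff tuples are ordered A, B, C as in the tables above, and the names are illustrative):

```python
# Strategy profile -> (payoff to A, payoff to B, payoff to C).
# "for" / "against" refer to voting for or against extra free time.
vote_game = {
    ("for", "for", "for"): (1, 1, 1),
    ("for", "for", "against"): (1, 1, 2),
    ("for", "against", "for"): (1, 2, 1),
    ("for", "against", "against"): (1, 0, 0),
    ("against", "for", "for"): (2, 1, 1),
    ("against", "for", "against"): (0, 1, 0),
    ("against", "against", "for"): (0, 0, 1),
    ("against", "against", "against"): (0, 0, 0),
}

# A chooses the "matrix", B the row and C the column -- in code all three
# simultaneous choices simply index the same table.
print(vote_game[("against", "for", "for")])  # (2, 1, 1)
```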

Symmetric Games

All of the above examples have been symmetric: all players have the same options, and if players interchange their moves, they also interchange their payoffs. By design, symmetric games are fair, in that every player is given the same chances. [9]

Strategies - The Best Choice

Game theory should provide players with advice on how to find which move is best; such moves are known as "best response" strategies. [10]

Pure vs Mixed Strategy

A pure strategy is one in which a player picks a single strategy from their best-response set; it determines the player's move in every possible situation, i.e. it is a complete plan for a player in a given game. A mixed strategy is one in which a player randomises over the strategies in their best-response set, assigning a probability to each of them. [10]

In simultaneous games, players will typically select mixed strategies and only occasionally pure strategies. The reason is that in a game where players do not know what the others will choose, it is best to pick the option that is likely to give you the greatest benefit for the lowest risk, given that the other player could choose anything, [10] i.e. if you pick your best option but the other player also picks their best option, someone will suffer.
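As an illustration of a mixed strategy, consider playing each of Rock, Paper and Scissors with probability 1/3. The sketch below (Python, not from the cited sources) computes the expected payoff of each pure reply against that mix and shows that every reply earns 0, which is the indifference condition characteristic of a mixed-strategy equilibrium.

```python
from fractions import Fraction

MOVES = ["Rock", "Paper", "Scissors"]
# Row player's payoffs, rows and columns ordered Rock, Paper, Scissors.
PAYOFF = [
    [0, -1, 1],
    [1, 0, -1],
    [-1, 1, 0],
]

# The opponent mixes uniformly over the three moves.
mix = {m: Fraction(1, 3) for m in MOVES}

for i, my_move in enumerate(MOVES):
    expected = sum(mix[other] * PAYOFF[i][j] for j, other in enumerate(MOVES))
    print(my_move, expected)  # every pure reply has expected payoff 0
```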

Dominant vs Dominated Strategy

A dominant strategy provides a player with the highest possible payoff for any strategy of the other players. In simultaneous games, the best move a player can make is to follow their dominant strategy, if one exists. [11]

When analyzing a simultaneous game:

Firstly, identify any dominant strategies for all players. If each player has a dominant strategy, then players will play that strategy; however, if a player has more than one dominant strategy, then any of them may be played. [11]

Secondly, if there are no dominant strategies, identify all strategies dominated by other strategies. Then eliminate the dominated strategies; the strategies that remain are the ones players will play. [11]
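The elimination step can be carried out mechanically. Below is a minimal sketch (Python, not from the cited source) of iterated elimination of strictly dominated strategies for a two-player game; the game itself is a small made-up example and the names GAME, row_payoff and col_payoff are illustrative.

```python
# Payoffs keyed by (row strategy, column strategy) -> (row payoff, column payoff).
# The numbers below are a standard textbook-style example, not taken from the article.
GAME = {
    ("Up", "Left"): (1, 0), ("Up", "Center"): (1, 2), ("Up", "Right"): (0, 1),
    ("Down", "Left"): (0, 3), ("Down", "Center"): (0, 1), ("Down", "Right"): (2, 0),
}

def row_payoff(r, c): return GAME[(r, c)][0]
def col_payoff(r, c): return GAME[(r, c)][1]

rows = ["Up", "Down"]
cols = ["Left", "Center", "Right"]

changed = True
while changed:
    changed = False
    # Keep only row strategies that are not strictly dominated by another row strategy.
    keep_rows = [r for r in rows
                 if not any(all(row_payoff(t, c) > row_payoff(r, c) for c in cols)
                            for t in rows if t != r)]
    # Keep only column strategies that are not strictly dominated.
    keep_cols = [c for c in cols
                 if not any(all(col_payoff(r, t) > col_payoff(r, c) for r in rows)
                            for t in cols if t != c)]
    if keep_rows != rows or keep_cols != cols:
        rows, cols, changed = keep_rows, keep_cols, True

print(rows, cols)  # ['Up'] ['Center'] -- the strategies that survive elimination
```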

Maximin Strategy

Some players always expect the worst and believe that the others want to bring them down, when in fact the others simply want to maximise their own payoffs. Such a player, say player A, will concentrate on their smallest possible payoff for each option, believing this is what they will get, and will choose the option for which this value is highest. This option is the maximin move (strategy), as it maximises the minimum possible payoff. The player can thus be assured a payoff of at least the maximin value, regardless of how the others play. The player does not need to know the payoffs of the other players in order to choose the maximin move, so players can choose the maximin strategy in a simultaneous game regardless of what the other players choose. [10]
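A minimal sketch (Python, not from the cited source) of choosing the maximin move from a matrix of a player's own payoffs, here using player A's payoffs from the advertising game above; the variable names are illustrative.

```python
# Player A's own payoffs from the advertising example: keys are A's options,
# list entries correspond to B's options (B advertises, B doesn't advertise).
A_PAYOFFS = {
    "advertise": [2, 5],
    "don't advertise": [1, 8],
}

# For each option, assume the worst case over the rival's choices,
# then pick the option whose worst case is best.
worst_case = {move: min(payoffs) for move, payoffs in A_PAYOFFS.items()}
maximin_move = max(worst_case, key=worst_case.get)

print(worst_case)    # {'advertise': 2, "don't advertise": 1}
print(maximin_move)  # 'advertise' -- guarantees A at least 2 (the security level)
```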

Nash Equilibrium

A pure Nash equilibrium is an outcome from which no player can gain a higher payoff by deviating from their move, provided the others stick with their original choices. Nash equilibria are self-enforcing contracts: if negotiation happens before the game is played, each player is best off sticking with their negotiated move. In a Nash equilibrium, each player's choice is a best response to the choices of the other players. [11]
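Pure Nash equilibria of a two-player game in matrix form can be found by checking, cell by cell, whether either player could gain by deviating unilaterally. A minimal sketch (Python, not from the cited source), applied to the advertising game above, where it finds the two equilibria (advertise, advertise) and (don't advertise, don't advertise); the names GAME and is_pure_nash are illustrative.

```python
# (row payoff, column payoff) for the advertising game from the bimatrix section.
GAME = {
    ("advertise", "advertise"): (2, 2),
    ("advertise", "don't advertise"): (5, 1),
    ("don't advertise", "advertise"): (1, 5),
    ("don't advertise", "don't advertise"): (8, 8),
}
ROWS = ["advertise", "don't advertise"]
COLS = ["advertise", "don't advertise"]

def is_pure_nash(r, c):
    row_pay, col_pay = GAME[(r, c)]
    # No profitable unilateral deviation for the row player...
    row_ok = all(GAME[(r2, c)][0] <= row_pay for r2 in ROWS)
    # ...and none for the column player.
    col_ok = all(GAME[(r, c2)][1] <= col_pay for c2 in COLS)
    return row_ok and col_ok

equilibria = [(r, c) for r in ROWS for c in COLS if is_pure_nash(r, c)]
print(equilibria)  # [('advertise', 'advertise'), ("don't advertise", "don't advertise")]
```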

Prisoner's dilemma

Prisoner's Dilemma

The prisoner's dilemma originated with Merrill Flood and Melvin Dresher and is one of the most famous games in game theory. The game is usually presented as follows:

Two members of a criminal gang have been apprehended by the police and now sit in solitary confinement. The prosecutors have the evidence required to put both prisoners away on lesser charges, but they do not possess the evidence required to convict the prisoners on their principal charges. The prosecution therefore simultaneously offers both prisoners a deal: each can choose to cooperate with the other by remaining silent, or choose betrayal, meaning they testify against their partner and receive a reduced sentence. The prisoners cannot communicate with one another. [12] This results in the following payoff matrix:

                                           Prisoner B stays silent (cooperation)        Prisoner B confesses (betrayal)
Prisoner A stays silent (cooperation)      Each serves 1 year                            Prisoner A: 3 years; Prisoner B: 3 months
Prisoner A confesses (betrayal)            Prisoner A: 3 months; Prisoner B: 3 years     Each serves 2 years

This game has a clear dominant strategy of betrayal, and the only strong Nash equilibrium is for both prisoners to confess. This is because both prisoners are assumed to be rational and to possess no loyalty towards one another, and betrayal provides the greater reward whatever the other prisoner does. [12] If B cooperates, A should choose betrayal, as serving 3 months is better than serving 1 year. Likewise, if B chooses betrayal, then A should also choose betrayal, as serving 2 years is better than serving 3. Mutual cooperation clearly provides a better outcome for the two prisoners, but from a perspective of self-interest it would be deemed irrational: cooperating yields the least total time spent in prison, 2 years in total, which is significantly less than the 4-year total of the Nash equilibrium in which both confess. However, given that prisoners A and B are individually motivated, they will always choose betrayal, selecting the best option for themselves while considering each possible decision of the other prisoner.
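The dominance argument can be checked numerically. Below is a minimal sketch (Python, not from the cited source) with payoffs expressed as months of prison time to be minimised; it confirms that confessing is better for each prisoner whatever the other does, and that (confess, confess) is the only cell from which neither prisoner can improve alone. The names MONTHS and is_equilibrium are illustrative.

```python
# Sentence lengths in months, keyed by (A's choice, B's choice) -> (A's months, B's months).
MONTHS = {
    ("silent", "silent"): (12, 12),
    ("silent", "confess"): (36, 3),
    ("confess", "silent"): (3, 36),
    ("confess", "confess"): (24, 24),
}
CHOICES = ["silent", "confess"]

# Confessing strictly dominates staying silent for prisoner A (and, by symmetry, for B):
for b in CHOICES:
    assert MONTHS[("confess", b)][0] < MONTHS[("silent", b)][0]

# (confess, confess) is the unique pure Nash equilibrium: sentences are minimised,
# so a profitable deviation is one that *reduces* the deviator's months.
def is_equilibrium(a, b):
    return (all(MONTHS[(a2, b)][0] >= MONTHS[(a, b)][0] for a2 in CHOICES) and
            all(MONTHS[(a, b2)][1] >= MONTHS[(a, b)][1] for b2 in CHOICES))

print([(a, b) for a in CHOICES for b in CHOICES if is_equilibrium(a, b)])
# [('confess', 'confess')]
```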

Battle of the Sexes

In the battle of the sexes game, a wife and husband decide independently whether to go to a football game or the ballet. Each person likes to do something together with the other, but the husband prefers football and the wife prefers ballet. The two Nash equilibria, and therefore the best responses for both husband and wife, are for them both to pick the same leisure activity, i.e. (ballet, ballet) or (football, football). [11] The table below shows the payoff for each option:

             Football    Ballet
Football     3, 2        1, 1
Ballet       0, 0        2, 3

Socially Desirable Outcomes

Vilfredo Pareto, Italian sociologist and economist.

Simultaneous games are designed to inform strategic choices in competitive and non-cooperative environments. However, it is important to note that Nash equilibria and many of the aforementioned strategies generally fail to result in socially desirable outcomes.

Pareto Optimality

Pareto efficiency is a notion rooted in the theoretical construct of perfect competition. Originating with the Italian economist Vilfredo Pareto, the concept refers to a state in which an economy has maximized efficiency in terms of resource allocation. Pareto efficiency is closely linked to Pareto optimality, an ideal of welfare economics that often implies a notion of ethical consideration. A simultaneous game, for example, is said to reach Pareto optimality if there is no alternative outcome that can make at least one player better off while leaving all other players at least as well off. Such outcomes are therefore referred to as socially desirable outcomes. [13]

The Stag Hunt

Stag hunt

The stag hunt, due to philosopher Jean-Jacques Rousseau, is a simultaneous game with two players. Each player decides whether to hunt a stag or a hare. Hunting a stag naturally provides greater utility than hunting a hare, but in order to hunt a stag both players need to work together, whereas each player is perfectly capable of hunting a hare alone. The resulting dilemma is that neither player can be sure of what the other will choose to do, so a player may receive no payoff if they are the only one who chooses to hunt a stag. [14] This results in the following payoff matrix:

Stag Hunt
          Stag      Hare
Stag      3, 3      0, 1
Hare      1, 0      1, 1
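Pareto-optimal cells of a payoff matrix can be identified by checking whether any other cell makes at least one player better off without making the other worse off. A minimal sketch (Python, not from the cited sources), applied to the stag hunt above, where only (Stag, Stag) survives; the names GAME and pareto_dominates are illustrative.

```python
# (row payoff, column payoff) for the stag hunt table above.
GAME = {
    ("Stag", "Stag"): (3, 3), ("Stag", "Hare"): (0, 1),
    ("Hare", "Stag"): (1, 0), ("Hare", "Hare"): (1, 1),
}

def pareto_dominates(x, y):
    """x Pareto-dominates y if no player is worse off and someone is strictly better off."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

pareto_optimal = [profile for profile, pay in GAME.items()
                  if not any(pareto_dominates(other, pay) for other in GAME.values())]

print(pareto_optimal)  # [('Stag', 'Stag')]
```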

The game is designed to illustrate a clear Pareto optimality, in which both players cooperate to hunt a stag. However, due to the inherent risk of the game, such an outcome does not always come to fruition. It is important to note that Pareto optimality is not a strategic solution for simultaneous games; rather, the ideal informs players about the potential for more efficient outcomes and may provide insight into how players should learn to play over time. [15]


Related Research Articles

Game theory is the study of mathematical models of strategic interactions among rational agents. It has applications in many fields of social science, used extensively in economics as well as in logic, systems science and computer science. Traditional game theory addressed two-person zero-sum games, in which a participant's gains or losses are exactly balanced by the losses and gains of the other participant. In the 21st century, game theory applies to a wider range of behavioral relations, and it is now an umbrella term for the science of logical decision making in humans, animals, as well as computers.

Zero-sum game is a mathematical representation in game theory and economic theory of a situation that involves two sides, where the result is an advantage for one side and an equivalent loss for the other. In other words, player one's gain is equivalent to player two's loss, with the result that the net improvement in benefit of the game is zero.

In game theory, the Nash equilibrium, named after the mathematician John Nash, is the most common way to define the solution of a non-cooperative game involving two or more players. In a Nash equilibrium, each player is assumed to know the equilibrium strategies of the other players, and no one has anything to gain by changing only one's own strategy. The principle of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to competing firms choosing outputs.

The game of chicken, also known as the hawk-dove game or snowdrift game, is a model of conflict for two players in game theory. The principle of the game is that while the ideal outcome is for one player to yield, individuals try to avoid it out of pride, not wanting to look like "chickens." Each player taunts the other to increase the risk of shame in yielding. However, when one player yields, the conflict is avoided, and the game essentially ends.

In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.

A coordination game is a type of simultaneous game found in game theory. It describes the situation where a player will earn a higher payoff when they select the same course of action as another player. The game is not one of pure conflict, which results in multiple pure strategy Nash equilibria in which players choose matching strategies. Figure 1 shows a 2-player example.

Matching pennies is the name for a simple game used in game theory. It is played between two players, Even and Odd. Each player has a penny and must secretly turn the penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match, then Even keeps both pennies, so wins one from Odd. If the pennies do not match Odd keeps both pennies, so receives one from Even.

In game theory, a player's strategy is any of the options which they choose in a setting where the optimal outcome depends not only on their own actions but on the actions of others. The discipline mainly concerns the action of a player in a game affecting the behavior or actions of other players. Some examples of "games" include chess, bridge, poker, monopoly, diplomacy or battleship. A player's strategy will determine the action which the player will take at any stage of the game. In studying game theory, economists enlist a more rational lens in analyzing decisions rather than the psychological or sociological perspectives taken when analyzing relationships between decisions of two or more parties in different disciplines.

A non-cooperative game is a form of game under the topic of game theory. Non-cooperative games are used in situations where there is competition between the players of the game. In this model, there are no external rules that enforce the cooperation of the players, so it is typically used to model a competitive environment. This is stated in various accounts, the most prominent being John Nash's paper.

In game theory, the stag hunt, sometimes referred to as the assurance game, trust dilemma or common interest game, describes a conflict between safety and social cooperation. The stag hunt problem originated with philosopher Jean-Jacques Rousseau in his Discourse on Inequality. In the most common account of this dilemma, which is quite different from Rousseau's, two hunters must decide separately, and without the other knowing, whether to hunt a stag or a hare. However, both hunters know the only way to successfully hunt a stag is with the other's help. One hunter can catch a hare alone with less effort and less time, but it is worth far less than a stag and has much less meat. It would be much better for each hunter, acting individually, to give up total autonomy and minimal risk, which brings only the small reward of the hare. Instead, each hunter should separately choose the more ambitious and far more rewarding goal of getting the stag, thereby giving up some autonomy in exchange for the other hunter's cooperation and added might. This situation is often seen as a useful analogy for many kinds of social cooperation, such as international agreements on climate change.

In game theory, a Perfect Bayesian Equilibrium (PBE) is a solution with Bayesian probability to a turn-based game with incomplete information. More specifically, it is an equilibrium concept that uses Bayesian updating to describe player behavior in dynamic games with incomplete information. Perfect Bayesian equilibria are used to solve the outcome of games where players take turns but are unsure of the "type" of their opponent, which occurs when players don't know their opponent's preference between individual moves. A classic example of a dynamic game with types is a war game where the player is unsure whether their opponent is a risk-taking "hawk" type or a pacifistic "dove" type. Perfect Bayesian Equilibria are a refinement of Bayesian Nash equilibrium (BNE), which is a solution concept with Bayesian probability for non-turn-based games.

In game theory, a Bayesian game is a strategic decision-making model which assumes players have incomplete information. Players hold private information relevant to the game, meaning that the payoffs are not common knowledge. Bayesian games model the outcome of player interactions using aspects of Bayesian probability. They are notable because they allowed, for the first time in game theory, for the specification of the solutions to games with incomplete information.

Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by examining the last point at which a decision is to be made and then identifying what action would be most optimal at that moment. Using this information, one can then determine what to do at the second-to-last time of decision. This process continues backwards until one has determined the best action for every possible situation at every point in time. Backward induction was first used in 1875 by Arthur Cayley, who discovered the method while trying to solve the Secretary problem.

In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

In game theory, the outcome of a game is the ultimate result of a strategic interaction with one or more people, dependent on the choices made by all participants in a certain exchange. It represents the final payoff resulting from a set of actions that individuals can take within the context of the game. Outcomes are pivotal in determining the payoffs and expected utility for parties involved. Game theorists commonly study how the outcome of a game is determined and what factors affect it.

In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that at any point in the game, the players' behavior from that point onward should represent a Nash equilibrium of the continuation game, no matter what happened before. Every finite extensive game with perfect recall has a subgame perfect equilibrium. Perfect recall is a term introduced by Harold W. Kuhn in 1953 and "equivalent to the assertion that each player is allowed by the rules of the game to remember everything he knew at previous moves and all of his choices at those moves".

Quantum game theory is an extension of classical game theory to the quantum domain. It differs from classical game theory in three primary ways:

  1. Superposed initial states,
  2. Quantum entanglement of initial states,
  3. Superposition of strategies to be used on the initial states.

Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.

In game theory, the traveler's dilemma is a non-zero-sum game in which each player proposes a payoff. The lower of the two proposals wins; the lowball player receives the lowball payoff plus a small bonus, and the highball player receives the same lowball payoff, minus a small penalty. Surprisingly, the Nash equilibrium is for both players to aggressively lowball. The traveler's dilemma is notable in that naive play appears to outperform the Nash equilibrium; this apparent paradox also appears in the centipede game and the finitely-iterated prisoner's dilemma.

References

  1. Pepall, Lynne; Richards, Daniel Jay; Norman, George (2014). Industrial Organization: Contemporary Theory and Empirical Applications (5th ed.). Hoboken, NJ. ISBN 978-1-118-25030-3. OCLC 788246625.
  2. Brocas; Carrillo; Sachdeva (2016). The Path to Equilibrium in Sequential and Simultaneous Games. http://www-bcf.usc.edu
  3. Managerial Economics (3rd ed.). McGraw Hill Education (India) Private Limited. 2018. ISBN 978-93-87067-63-9.
  4. Mailath, George J.; Samuelson, Larry; Swinkels, Jeroen M. (1993). "Extensive Form Reasoning in Normal Form Games". Econometrica. 61 (2): 273–302. doi:10.2307/2951552. JSTOR 2951552.
  5. Sun, C. (2019). "Simultaneous and Sequential Choice in a Symmetric Two-Player Game with Canyon-Shaped Payoffs". Japanese Economic Review. Available at: https://www.researchgate.net/publication/332377544_Simultaneous_and_Sequential_Choice_in_a_Symmetric_Two-Player_Game_with_Canyon-Shaped_Payoffs [Accessed 30 October 2020].
  6. Vernengo, Matias; Caldentey, Esteban Perez; Rosser Jr, Barkley J, eds. (2020). The New Palgrave Dictionary of Economics. doi:10.1057/978-1-349-95121-5. ISBN 978-1-349-95121-5. Retrieved 2021-11-20.
  7. Watson, Joel (2013). Strategy: An Introduction to Game Theory (3rd ed.). New York: W. W. Norton. ISBN 978-0-393-91838-0. OCLC 842323069.
  8. A V, Murali (2014-10-07). "Parity Chess". Blogger. Retrieved 2017-01-15.
  9. Prisner, E. (2014). Game Theory Through Examples. The Mathematical Association of America, pp. 25–30. Available at: https://www.maa.org/sites/default/files/pdf/ebooks/GTE_sample.pdf [Accessed 30 October 2020].
  10. Ross, D. (2019). "Game Theory". Stanford Encyclopedia of Philosophy, pp. 7–80. Available at: https://plato.stanford.edu/entries/game-theory [Accessed 30 October 2020].
  11. Munoz-Garcia, F.; Toro-Gonzalez, D. (2016). "Pure Strategy Nash Equilibrium and Simultaneous-Move Games with Complete Information". Strategy and Game Theory, pp. 25–60. Available at: https://link.springer.com/chapter/10.1007/978-3-319-32963-5_2 [Accessed 30 October 2020].
  12. Amadae, S. M. (2016). Prisoners of Reason: Game Theory and Neoliberal Political Economy. Cambridge University Press. ISBN 978-1-107-67119-5. OCLC 946968759.
  13. Berthonnet, Irène; Delclite, Thomas (2014). "Pareto-Optimality or Pareto-Efficiency: Same Concept, Different Names? An Analysis Over a Century of Economic Literature". A Research Annual. Emerald Group Publishing Limited, pp. 129–145. doi:10.1108/s0743-415420140000032005. ISBN 978-1-78441-154-1.
  14. Vanderschraaf, Peter (2016). "In a Weakly Dominated Strategy Is Strength: Evolution of Optimality in Stag Hunt Augmented with a Punishment Option". Philosophy of Science. 83 (1): 29–59. doi:10.1086/684166.
  15. Hao, Jianye; Leung, Ho-Fung (2013). "Achieving Socially Optimal Outcomes in Multiagent Systems with Reinforcement Social Learning". ACM Transactions on Autonomous and Adaptive Systems. 8 (3): 1–23. doi:10.1145/2517329.
