Normal form with player 2 aware of player 1's move (each strategy of player 2 specifies her response to O and to F):

          Oo, Fo    Oo, Ff    Of, Fo    Of, Ff
    O     3, 2      3, 2      0, 0      0, 0
    F     0, 0      2, 3      0, 0      2, 3

Normal form with player 2 unaware of player 1's move:

          o         f
    O     3, 2      0, 0
    F     0, 0      2, 3


Minimax is a decision rule used in artificial intelligence, decision theory, game theory, statistics and philosophy for minimizing the possible loss for a worst-case scenario. When dealing with gains, it is referred to as "maximin", that is, to maximize the minimum gain. Originally formulated for two-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty.
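As an illustration, the alternating-move case can be sketched as a recursion over a game tree; the tree and its payoffs below are invented for the example:

```python
# Minimax over a two-player zero-sum game tree (illustrative sketch).
# A node is either a terminal payoff to the maximizer (int) or a list
# of child nodes; players alternate at each level.
def minimax(node, maximizing):
    if isinstance(node, int):  # terminal node: payoff to the maximizer
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Maximizer moves first, minimizer second.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # 3: the branch whose worst case is best
```

The maximizer picks the left branch because its guaranteed minimum (3) beats the right branch's guaranteed minimum (2), which is exactly the "minimize the possible loss for a worst-case scenario" rule.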
In game theory, the Nash equilibrium, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.
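The "nothing to gain by changing only their own strategy" condition can be checked directly for pure strategies in a two-player game given as payoff matrices; the prisoner's dilemma numbers below are a standard illustrative choice:

```python
# Check whether a pure-strategy profile (row, col) is a Nash equilibrium
# of a two-player game given as per-player payoff matrices (sketch).
def is_nash(payoff1, payoff2, row, col):
    # no unilateral deviation by the row player improves their payoff
    best_row = all(payoff1[row][col] >= payoff1[r][col] for r in range(len(payoff1)))
    # no unilateral deviation by the column player improves theirs
    best_col = all(payoff2[row][col] >= payoff2[row][c] for c in range(len(payoff2[0])))
    return best_row and best_col

# Prisoner's dilemma payoffs: strategy 0 = cooperate, 1 = defect.
p1 = [[3, 0], [5, 1]]
p2 = [[3, 5], [0, 1]]
print(is_nash(p1, p2, 1, 1))  # True: mutual defection is an equilibrium
print(is_nash(p1, p2, 0, 0))  # False: each player gains by defecting
```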
In game theory, the centipede game, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round. Although the traditional centipede game had a limit of 100 rounds, any game with this structure but a different number of rounds is called a centipede game.
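A minimal sketch of this payoff structure, with an invented pot-size and sharing rule, shows why reasoning backwards from the last round predicts taking immediately:

```python
# Backward induction in a short centipede game (sketch; the pot starts
# at 2, grows by 2 per pass, and the taker gets the larger share).
def solve(round_, pot, rounds):
    """Return (payoff to the current mover, payoff to the opponent)."""
    take = (pot // 2 + 1, pot // 2 - 1)  # take the larger share now
    if round_ == rounds - 1:
        return take                       # last round: taking is forced
    # if the mover passes, the opponent moves next with a bigger pot
    opp, me = solve(round_ + 1, pot + 2, rounds)
    passing = (me, opp)
    return max(take, passing, key=lambda p: p[0])

print(solve(0, 2, 4))  # (2, 0): the first mover takes at once
```

At every round, passing and having the opponent take next round leaves the mover slightly worse off than taking now, so the induction unravels to the first move.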
In game theory, a subgame is any part of a game that meets the following criteria: it has a single initial node that is the only member of its information set; it contains all nodes that follow its initial node; and if a node of an information set belongs to the subgame, then every node of that information set does too.
In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.
An extensive-form game is a specification of a game in game theory, allowing for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature".
In game theory, a perfect Bayesian equilibrium (PBE) is an equilibrium concept relevant for dynamic games with incomplete information. It is a refinement of Bayesian Nash equilibrium (BNE). A PBE has two components, strategies and beliefs: the strategy of a player at a given information set specifies their choice of action there, and the belief of a player at a given information set specifies the probability they assign to each node in that set; strategies must be optimal given the beliefs, and beliefs must be derived from the strategies by Bayes' rule wherever possible.
In game theory, a Bayesian game is a game in which players have incomplete information about the other players. For example, a player may not know the exact payoff functions of the other players, but instead have beliefs about these payoff functions. These beliefs are represented by a probability distribution over the possible payoff functions.
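The role of beliefs can be made concrete with a toy expected-payoff calculation; the type names, probabilities, and payoffs below are made up for illustration:

```python
# Expected payoff under incomplete information (sketch, invented numbers).
# Player 1 is unsure of player 2's type; each type implies a payoff for
# player 1's action "enter", and beliefs weight the types.
beliefs = {"tough": 0.3, "weak": 0.7}       # probability of each type
payoff_if = {"tough": -1.0, "weak": 2.0}    # player 1's payoff from "enter"

expected = sum(beliefs[t] * payoff_if[t] for t in beliefs)
print(round(expected, 2))  # 1.1 = 0.3 * (-1) + 0.7 * 2
```

Since the expected payoff is positive, a player with these beliefs would enter even though entry is costly against the "tough" type.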
In game theory, a sequential game is a game where one player chooses their action before the others choose theirs. Importantly, the later players must have some information about the first's choice, otherwise the difference in time would have no strategic effect. Sequential games hence are governed by the time axis, and represented in the form of decision trees.
Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by first considering the last time a decision might be made and choosing what to do in any situation at that time. Using this information, one can then determine what to do at the second-to-last time of decision. This process continues backwards until one has determined the best action for every possible situation at every point in time. It was first used by Zermelo in 1913 to prove that chess has pure optimal strategies.
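The procedure can be sketched compactly on a finite perfect-information game tree; the entry-deterrence structure and numbers below are invented for the example:

```python
# Backward induction on a finite perfect-information game tree (sketch).
# An internal node is {"player": i, "children": [...]}; a leaf is
# {"payoffs": (payoff_to_player_0, payoff_to_player_1)}.
def backward_induct(node):
    """Return the payoff vector reached under optimal play."""
    if "payoffs" in node:                       # leaf: nothing to decide
        return node["payoffs"]
    player = node["player"]
    outcomes = [backward_induct(child) for child in node["children"]]
    return max(outcomes, key=lambda o: o[player])  # mover picks best outcome

# Player 0 chooses Out, payoffs (1, 4), or In; after In, player 1
# chooses Fight (0, 0) or Accommodate (2, 2).
tree = {"player": 0, "children": [
    {"payoffs": (1, 4)},
    {"player": 1, "children": [{"payoffs": (0, 0)}, {"payoffs": (2, 2)}]},
]}
print(backward_induct(tree))  # (2, 2): entry is accommodated
```

Solving the last decision first reveals that fighting is not optimal for player 1, so player 0 enters; the same logic, applied level by level, solves any finite tree.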
In game theory, strategic dominance occurs when one strategy is better than another strategy for one player, no matter how that player's opponents may play. Many simple games can be solved using dominance. The opposite, intransitivity, occurs in games where one strategy may be better or worse than another strategy for one player, depending on how the player's opponents may play.
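Solving by dominance can be sketched as a search for strictly dominated rows of a payoff matrix, here using the row player's prisoner's dilemma payoffs as the example:

```python
# Find rows of a payoff matrix that are strictly dominated by some
# other row, i.e. worse against every opponent action (sketch).
def strictly_dominated_rows(payoffs):
    n, m = len(payoffs), len(payoffs[0])
    dominated = []
    for i in range(n):
        for j in range(n):
            if j != i and all(payoffs[j][c] > payoffs[i][c] for c in range(m)):
                dominated.append(i)   # row j beats row i everywhere
                break
    return dominated

# Prisoner's dilemma, row player: cooperate is strictly dominated.
row_payoffs = [[3, 0],   # cooperate
               [5, 1]]   # defect
print(strictly_dominated_rows(row_payoffs))  # [0]
```

Removing dominated rows (and, symmetrically, columns) and repeating is the iterated-dominance procedure that solves many simple games outright.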
The chain store paradox is an apparent game theory paradox involving the chain store game, where a "deterrence strategy" appears optimal instead of the backward induction strategy of standard game theory reasoning.
In game theory, trembling hand perfect equilibrium is a refinement of Nash equilibrium due to Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or tremble, may choose unintended strategies, albeit with negligible probability.
In game theory, folk theorems are a class of theorems about possible Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept, subgame-perfect Nash equilibrium, rather than Nash equilibrium.
Sequential equilibrium is a refinement of Nash equilibrium for extensive-form games due to David M. Kreps and Robert Wilson. A sequential equilibrium specifies not only a strategy for each of the players but also a belief for each of the players. A belief gives, for each information set of the game belonging to the player, a probability distribution on the nodes in the information set. A profile of strategies and beliefs is called an assessment for the game. Informally speaking, an assessment is a perfect Bayesian equilibrium if its strategies are sensible given its beliefs and its beliefs are confirmed on the outcome path given by its strategies. The definition of sequential equilibrium further requires that there be arbitrarily small perturbations of beliefs and associated strategies with the same property.
In game theory, a Manipulated Nash equilibrium or MAPNASH is a refinement of subgame perfect equilibrium used in dynamic games of imperfect information. Informally, a strategy set is a MAPNASH of a game if it would be a subgame perfect equilibrium of the game if the game had perfect information. MAPNASH was first suggested by Amershi, Sadanand, and Sadanand (1988) and has been discussed in several papers since. It is a solution concept based on how players think about other players' thought processes.
In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that if the players played any smaller game that consisted of only one part of the larger game, their behavior would represent a Nash equilibrium of that smaller game. Every finite extensive game with perfect recall has a subgame perfect equilibrium.
A non-credible threat is a term used in game theory and economics to describe a threat in a sequential game that a rational player would not actually carry out, because it would not be in his best interest to do so.
In game theory, a simultaneous game is a game where each player chooses their action without knowledge of the actions chosen by other players. Simultaneous games contrast with sequential games, which are played by the players taking turns. In other words, both players normally act at the same time in a simultaneous game; even if they do not, each is uninformed of the other's move while making their decision. Normal form representations are usually used for simultaneous games, which are often called static games. Given a continuous game, players will have different information sets if the game is simultaneous than if it is sequential, because they have less information to act on at each step in the game. For example, in a two-player continuous game that is sequential, the second player can act in response to the action taken by the first player; this is not possible in a simultaneous game, where both players act at the same time.
A Markov perfect equilibrium is an equilibrium concept in game theory. It is the refinement of the concept of subgame perfect equilibrium to extensive-form games for which a payoff-relevant state space can be readily identified. The term appeared in publications starting about 1988 in the work of economists Jean Tirole and Eric Maskin. It has since been used, among other things, in the analysis of industrial organization, macroeconomics and political economy.