Mutual knowledge

Mutual knowledge in game theory is information known by all participating agents. Unlike common knowledge, a related concept, mutual knowledge does not require that all agents are aware that the knowledge is mutual.[1] All common knowledge is mutual knowledge, but not all mutual knowledge is common knowledge. Mutual knowledge can arise accidentally, when a flaw in the game's design lets every player discover the same information independently, or deliberately, through the expected course of the game.

The difference between mutual knowledge and common knowledge

The difference is crucial in a cooperation game. For example, in the game depicted below, a random event determines which payoff matrix is in play, and both players, being fully rational, presume that the more likely option has occurred. Now suppose each player separately finds out that the privately generated random number determining the payoff matrix is 1, but neither is told that the other player also knows this.

Player A knows the random number is 1 but presumes Player B does not. A therefore reasons about the 2-100 matrix, in which b1 gives B a higher payoff than b2 against either of A's options, so A expects B to play b1. Best-responding to b1 in the random-number-1 matrix, A chooses a2, which would give A the best possible payoff in that matrix (10). Symmetrically, Player B presumes Player A expects a random number of 2-100 and will therefore play a1, A's dominant choice in that matrix, so B best-responds with b2. As a result, the players end up at (1, a2, b2), with a payoff of 1 for each - the lowest possible payoff, both individually and in total.

Now suppose instead that it is common knowledge that the random number is 1 - that is, each player knows the number is 1, knows that the other player knows it, knows that the other knows that they know it, and so on. Given this, the best choice for A is a1, which averages a payoff of 6.5 over B's two options (versus 5.5 for a2), and the best choice for B is likewise b1, which also averages 6.5 (versus 5.5 for b2). This gives the outcome (1, a1, b1) with a payoff of 8 for both - the highest possible total payoff.

Common knowledge tends to lead to cooperative behaviour more often than merely mutual knowledge, which, as the example above shows, can instead produce anti-cooperative behaviour; under common knowledge the participants know that the information is shared and can all base their decisions on it.[2] This works best in a symmetric game, such as the left matrix below.

The payoff matrices
A game                  Random number: 1               Random number: 2-100
Payoff (A, B)           A chooses a1    A chooses a2   A chooses a1    A chooses a2
B chooses b1            8, 8            10, 5          3, 4            2, 3
B chooses b2            5, 10           1, 1           4, 3            3, 2
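
As a minimal sketch (not part of the original article), the following Python snippet encodes the two matrices above and reproduces both lines of reasoning: the mismatched choices that arise when the players' knowledge is merely mutual, and the cooperative outcome under common knowledge. The names MATRIX_1 and MATRIX_2_100 and the helper functions are illustrative only.

# Payoffs are stored as (A's payoff, B's payoff), indexed by (A's action, B's action).
MATRIX_1 = {                     # random number is 1
    ("a1", "b1"): (8, 8),  ("a2", "b1"): (10, 5),
    ("a1", "b2"): (5, 10), ("a2", "b2"): (1, 1),
}
MATRIX_2_100 = {                 # random number is 2-100
    ("a1", "b1"): (3, 4),  ("a2", "b1"): (2, 3),
    ("a1", "b2"): (4, 3),  ("a2", "b2"): (3, 2),
}
A_ACTIONS, B_ACTIONS = ("a1", "a2"), ("b1", "b2")

def best_a(matrix, b_action):
    # A's best response to a fixed action of B.
    return max(A_ACTIONS, key=lambda a: matrix[(a, b_action)][0])

def best_b(matrix, a_action):
    # B's best response to a fixed action of A.
    return max(B_ACTIONS, key=lambda b: matrix[(a_action, b)][1])

# Mutual knowledge only: A knows the number is 1 but thinks B believes it is
# 2-100, where b1 is B's dominant choice, so A best-responds to b1 in MATRIX_1.
b_presumed = best_b(MATRIX_2_100, "a1")      # 'b1' (also B's best against 'a2')
a_choice = best_a(MATRIX_1, b_presumed)      # 'a2' (payoff 10 if B had played b1)

# Symmetrically, B knows the number is 1 but thinks A believes it is 2-100,
# where a1 is A's dominant choice, so B best-responds to a1 in MATRIX_1.
a_presumed = best_a(MATRIX_2_100, "b1")      # 'a1' (also A's best against 'b2')
b_choice = best_b(MATRIX_1, a_presumed)      # 'b2' (payoff 10 if A had played a1)

print("mutual only:", a_choice, b_choice, MATRIX_1[(a_choice, b_choice)])
# -> mutual only: a2 b2 (1, 1)   (the lowest payoff for both)

# Common knowledge: as in the text, each player compares average payoffs in
# MATRIX_1, treating the opponent's two actions as equally likely.
avg_a = {a: sum(MATRIX_1[(a, b)][0] for b in B_ACTIONS) / 2 for a in A_ACTIONS}
avg_b = {b: sum(MATRIX_1[(a, b)][1] for a in A_ACTIONS) / 2 for b in B_ACTIONS}
a_ck = max(avg_a, key=avg_a.get)             # 'a1', average 6.5
b_ck = max(avg_b, key=avg_b.get)             # 'b1', average 6.5

print("common knowledge:", a_ck, b_ck, MATRIX_1[(a_ck, b_ck)])
# -> common knowledge: a1 b1 (8, 8)   (the highest total payoff)

Running the sketch prints the two outcomes (a2, b2) with payoff (1, 1) and (a1, b1) with payoff (8, 8), matching the analysis above.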

Bibliography

  1. Vanderschraaf, Peter; Sillari, Giacomo (2014). "Common Knowledge". In Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy (Spring 2014 ed.).
  2. Thomas, Kyle A.; DeScioli, Peter; Haque, Omar Sultan; Pinker, Steven (2014). "The psychology of coordination and common knowledge". Journal of Personality and Social Psychology. 107 (4): 657–676. CiteSeerX 10.1.1.705.3016. doi:10.1037/a0037037. PMID 25111301. S2CID 3267194.
