Bayesian game

In game theory, a Bayesian game is a strategic decision-making model which assumes players have incomplete information. Players hold private information relevant to the game, meaning that the payoffs are not common knowledge. [1] Bayesian games model the outcome of player interactions using aspects of Bayesian probability. They are notable because they allowed, for the first time in game theory, the specification of solutions to games with incomplete information.

Hungarian economist John C. Harsanyi introduced the concept of Bayesian games in three papers published in 1967 and 1968. [2] [3] [4] He was awarded the Nobel Memorial Prize in Economic Sciences in 1994 for these and other contributions to game theory. Roughly speaking, Harsanyi defined Bayesian games in the following way: at the start of the game, nature assigns each player a set of characteristics. By attaching probability distributions to these characteristics and by calculating the outcome of the game using Bayesian probability, the result is a game whose solution is, for technical reasons, far easier to calculate than that of a similar game in a non-Bayesian context. For those technical reasons, see the Specification of games section in this article.

Normal form games with incomplete information

Elements

A Bayesian game is defined by the tuple (N,A,T,p,u), consisting of the following elements: [5]

  1. Set of players, N: The set of players within the game
  2. Action sets, ai: The set of actions available to Player i. An action profile a = (a1, . . . , aN) is a list of actions, one for each player
  3. Type sets, ti: The set of types of players i. "Types" capture the private information a player can have. A type profile t = (t1, . . . , tN) is a list of types, one for each player
  4. Payoff functions, u: Assign a payoff to a player given their type and the action profile. A payoff profile u = (u1, . . . , uN) lists the payoff functions of all players, where ui denotes the utility of player i
  5. Prior, p: A probability distribution over all possible type profiles, where p(t) = p(t1, . . . ,tN) is the probability that Player 1 has type t1 and Player N has type tN.
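For concreteness, a finite Bayesian game (N, A, T, p, u) can be held in a small container class like the following sketch; the class and method names here are illustrative, not a standard API:

```python
from itertools import product

# A minimal container for a finite Bayesian game (N, A, T, p, u).
# Names (BayesianGame, type_profiles, ...) are illustrative only.
class BayesianGame:
    def __init__(self, actions, types, prior, payoff):
        # actions[i]: list of actions available to player i
        # types[i]:   list of possible types of player i
        # prior:      dict mapping a type profile (tuple) to its probability
        # payoff:     function (i, type_profile, action_profile) -> float
        self.n = len(actions)
        self.actions = actions
        self.types = types
        self.prior = prior
        self.payoff = payoff

    def type_profiles(self):
        # All combinations t = (t1, ..., tN), one type per player.
        return list(product(*self.types))

    def action_profiles(self):
        # All combinations a = (a1, ..., aN), one action per player.
        return list(product(*self.actions))
```

A two-player game with two suspect types and one sheriff type, for example, has 2 type profiles and 4 action profiles.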

Pure strategies

In a strategic game, a pure strategy is a player's choice of action at each point where the player must make a decision. [6]

Three stages

There are three stages of Bayesian games, each describing the players' knowledge of types within the game.

  1. Ex-ante stage game. Players do not know their own types or those of other players. A player evaluates payoffs as expected values based on a prior distribution over all possible type profiles.
  2. Interim stage game. Players know their own type, but only a probability distribution over the other players' types. When considering payoffs, a player takes expectations over the other players' types.
  3. Ex-post stage game. Players know their own types and those of other players. The payoffs are known to players. [7]
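As a toy illustration of how payoffs differ across these stages, the following sketch (all numbers hypothetical) compares the expected payoff a player computes before the opponent's type is revealed with the known payoff afterwards:

```python
# Hypothetical numbers: one player whose payoff from a fixed action depends
# on the opponent's type, with prior P(strong) = 0.6, P(weak) = 0.4.
prior = {"strong": 0.6, "weak": 0.4}
payoff_vs_type = {"strong": -1.0, "weak": 3.0}

# Ex ante / interim (for a player with only one possible type of their own):
# the payoff is an expectation over the opponent's type under the prior.
expected = sum(prior[t] * payoff_vs_type[t] for t in prior)
# 0.6 * (-1.0) + 0.4 * 3.0 = 0.6

# Ex post: the opponent's type is revealed, so the payoff is known exactly.
ex_post_if_weak = payoff_vs_type["weak"]
```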

Improvements over non-Bayesian games

There are two important and novel aspects to Bayesian games that were themselves specified by Harsanyi. [8] The first is that Bayesian games can be considered and structured identically to complete information games: by attaching probabilities to the unknown characteristics, a game of incomplete information can be analysed with the same tools. Players can thus be modelled as having incomplete information while the probability space of the game still obeys the law of total probability. The second is that Bayesian games do not require infinite sequential calculations. Infinite sequential calculations would arise where players (essentially) try to "get into each other's heads": a player may reason, "If I expect some action from player B, then player B will anticipate that I expect that action, so then I should anticipate that anticipation", ad infinitum. Bayesian games allow these outcomes to be calculated in one move by simultaneously assigning different probability weights to different outcomes. The effect of this is that Bayesian games allow for the modeling of a number of games that in a non-Bayesian setting would be irrational to compute.

Bayesian Nash equilibrium

A Bayesian-Nash Equilibrium of a Bayesian game is a Nash equilibrium of its associated ex-ante normal form game.

In a non-Bayesian game, a strategy profile is a Nash equilibrium if every strategy in that profile is a best response to the other strategies in the profile; i.e., there is no strategy that a player could play that would yield a higher payoff, given all the strategies played by the other players.

An analogous concept can be defined for a Bayesian game, the difference being that every player's strategy maximizes their expected payoff given their beliefs about the state of nature. A player's beliefs about the state of nature are formed by conditioning the prior probabilities on the player's own type according to Bayes' rule.

A Bayesian Nash equilibrium (BNE) is defined as a strategy profile that maximizes the expected payoff for each player given their beliefs and given the strategies played by the other players. That is, a strategy profile σ is a Bayesian Nash equilibrium if and only if, for every player i, keeping the strategies of every other player fixed, the strategy σi maximizes the expected payoff of player i according to that player's beliefs. [5]

For finite Bayesian games, i.e., where both the action and the type space are finite, there are two equivalent representations. The first is called the agent-form game (see Theorem 9.51 of the Game Theory book [9] ), which expands the number of players from |N| to Σi |Ti|, i.e., every type of each player becomes a separate player. The second is called the induced normal form (see Section 6.3.3 of Multiagent Systems [10] ), which keeps the original |N| players yet expands the number of player i's actions from |Ai| to |Ai|^|Ti|, i.e., a pure policy is a combination of actions the player should take for their different types. A Nash equilibrium (NE) can be computed in either of these two equivalent representations, and the BNE can then be recovered from the NE.
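The induced normal form can be sketched as follows: a pure policy for player i assigns one action to each of player i's types, so the number of pure policies is |Ai| raised to the power |Ti| (the function name below is illustrative):

```python
from itertools import product

# Illustrative sketch of the induced normal form's action expansion:
# a pure policy for player i is a mapping from each of i's types to an
# action, so there are |A_i| ** |T_i| pure policies.
def pure_policies(actions_i, types_i):
    # Each policy is a dict: type -> action.
    return [dict(zip(types_i, choice))
            for choice in product(actions_i, repeat=len(types_i))]

policies = pure_policies(["shoot", "hold"], ["criminal", "civilian"])
# 2 actions and 2 types give 2**2 = 4 pure policies.
```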

Extensive form games with incomplete information

Elements of extensive form games

Extensive form games with perfect or imperfect information have the following elements: [12]

  1. Set of players
  2. Set of decision nodes
  3. A player function assigning a player to each decision node
  4. Set of actions for each player at each of her decision nodes
  5. Set of terminal nodes
  6. A payoff function for each player

Nature and information sets

Nature's node is usually denoted by an unfilled circle. Its strategy is always specified and always completely mixed. Usually, Nature is at the root of the tree; however, Nature can move at other points as well.

An information set of player i is a subset of player i's decision nodes that she cannot distinguish between. That is, if player i is at one of her decision nodes in an information set, she does not know which node within the information set she is at.

For two decision nodes to be in the same information set, they must [13]

  1. Belong to the same player; and
  2. Have the same set of actions

Information sets are denoted by dotted lines, which is the most common notation today.

The role of beliefs

In Bayesian games, a player's beliefs about the game are represented by a probability distribution over the various types of the other players.

If players do not have private information, the probability distribution over types is known as a common prior. [14]

Bayes' rule

An assessment of an extensive form game is a pair <b, μ> consisting of

  1. A behavior strategy profile b; and
  2. A belief system μ

An assessment <b, μ> satisfies Bayes' rule if [15]

μ(x | hi) = Pr[x is reached given b−i] / Σ_{x′ ∈ hi} Pr[x′ is reached given b−i]

whenever hi is reached with strictly positive probability according to b−i.
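As a minimal sketch of this condition, the beliefs over the nodes of an information set can be obtained by normalising their reach probabilities (the function name is illustrative):

```python
# Sketch of the Bayes'-rule condition on beliefs: given the probability
# that each node x in an information set h is reached under the strategy
# profile, the belief at x is its reach probability normalised over h.
def beliefs(reach_prob):
    # reach_prob: dict node -> Pr[node is reached]. The information set
    # must be reached with strictly positive probability for the rule
    # to apply; otherwise Bayes' rule places no restriction.
    total = sum(reach_prob.values())
    if total == 0:
        raise ValueError("information set reached with probability zero")
    return {x: p / total for x, p in reach_prob.items()}

# Two nodes reached with probabilities 0.3 and 0.1 yield beliefs 0.75, 0.25.
mu = beliefs({"x1": 0.3, "x2": 0.1})
```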

Perfect Bayesian equilibrium

A perfect Bayesian equilibrium in an extensive form game is a combination of strategies and a specification of beliefs such that the following two conditions are satisfied: [16]

  1. Bayesian consistency: the beliefs are consistent with the strategies under consideration;
  2. Sequential rationality: the players choose optimally given their beliefs.

Bayesian Nash equilibrium can result in implausible equilibria in dynamic games, where players move sequentially rather than simultaneously. As in games of complete information, these can arise via non-credible strategies off the equilibrium path. In games of incomplete information there is also the additional possibility of non-credible beliefs.

To deal with these issues, perfect Bayesian equilibrium, in the spirit of subgame perfect equilibrium, requires that, starting from any information set, subsequent play be optimal. Further, it requires that beliefs be updated consistently with Bayes' rule on every path of play that occurs with positive probability.

Stochastic Bayesian games

Stochastic Bayesian games [17] combine the definitions of Bayesian games and stochastic games, to represent environment states (e.g. physical world states) with stochastic transitions between states as well as uncertainty about the types of different players in each state. The resulting model is solved via a recursive combination of the Bayesian Nash equilibrium and the Bellman optimality equation. Stochastic Bayesian games have been used to address diverse problems, including defense and security planning, [18] cybersecurity of power plants, [19] autonomous driving, [20] mobile edge computing, [21] and self-stabilization in dynamic systems. [22]

Incomplete information over collective agency

The definition of Bayesian games and Bayesian equilibrium has been extended to deal with collective agency. One approach is to continue to treat individual players as reasoning in isolation, but to allow them, with some probability, to reason from the perspective of a collective. [23] Another approach is to assume that players within any collective agent know that the agent exists, but that other players do not know this, although they suspect it with some probability. [24] For example, Alice and Bob may sometimes optimize as individuals and sometimes collude as a team, depending on the state of nature, but other players may not know which of these is the case.

Example

Sheriff's dilemma

A sheriff faces an armed suspect. Both must simultaneously decide whether to shoot the other or not.

The suspect can either be of type "criminal" or type "civilian". The sheriff has only one type. The suspect knows its own type and the sheriff's type, but the sheriff does not know the suspect's type. Thus, there is incomplete information (because the suspect has private information), making it a Bayesian game. There is a probability p that the suspect is a criminal and a probability 1-p that the suspect is a civilian; both players are aware of this probability (the common prior assumption, which allows the game to be converted into a complete-information game with imperfect information).

The sheriff would rather defend himself and shoot if the suspect shoots, or not shoot if the suspect does not (even if the suspect is a criminal). The suspect would rather shoot if he is a criminal, even if the sheriff does not shoot, but would rather not shoot if he is a civilian, even if the sheriff shoots. Thus, the payoff matrix of this Normal-form game for both players depends on the type of the suspect. This game is defined by (N,A,T,p,u), where:

Type = "Criminal"        Sheriff: Shoot    Sheriff: Not
Suspect: Shoot            0, 0              2, -2
Suspect: Not             -2, -1            -1, 1

Type = "Civilian"        Sheriff: Shoot    Sheriff: Not
Suspect: Shoot           -3, -1            -1, -2
Suspect: Not             -2, -1             0, 0

(In each cell the suspect's payoff is listed first, the sheriff's second.)

If both players are rational and both know that both players are rational and everything that is known by any player is known to be known by every player (i.e. player 1 knows player 2 knows that player 1 is rational, and player 2 knows this, etc. ad infinitum; the game is common knowledge), play in the game will be as follows according to perfect Bayesian equilibrium: [25] [26]

When the type is "criminal", the dominant strategy for the suspect is to shoot, and when the type is "civilian", the dominant strategy for the suspect is not to shoot; the strictly dominated alternatives can thus be removed. Given this, if the sheriff shoots, he will have a payoff of 0 with probability p and a payoff of -1 with probability 1-p, i.e. an expected payoff of p-1; if the sheriff does not shoot, he will have a payoff of -2 with probability p and a payoff of 0 with probability 1-p, i.e. an expected payoff of -2p. Thus, the sheriff will always shoot if p-1 > -2p, i.e. when p > 1/3.
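This calculation can be checked with a short sketch that reproduces the sheriff's expected payoffs from the matrices above (the function name is illustrative):

```python
# Sheriff's expected payoffs after eliminating the suspect's dominated
# strategies (criminal shoots, civilian does not), using the payoff
# matrices above; p is the prior probability the suspect is a criminal.
def sheriff_best_action(p):
    shoot = p * 0 + (1 - p) * (-1)       # = p - 1
    not_shoot = p * (-2) + (1 - p) * 0   # = -2p
    return "shoot" if shoot > not_shoot else "not shoot"

# The condition p - 1 > -2p is equivalent to p > 1/3.
```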

The market for lemons

The Market for Lemons [27] is related to a concept known as adverse selection.

Set up

There is a used car for sale. Player 1 is a potential buyer who is interested in the car. Player 2 is the owner of the car and knows the value v of the car (how good it is, etc.). Player 1 does not, and believes that the value v of the car to the owner (Player 2) is distributed uniformly between 0 and 100 (i.e., any two value sub-intervals of [0, 100] of equal length are equally likely).

Player 1 can make a bid p between 0 and 100 (inclusive). Player 2 can then accept or reject the offer. The payoffs are as follows:

Side point: cut-off strategy

Player 2's strategy of accepting all bids above a certain cut-off P* and rejecting all bids below P* is known as a cut-off strategy, where P* is called the cut-off.
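A short simulation sketch illustrates why cut-off behaviour produces adverse selection under the uniform-value set-up above (assuming, for illustration, that the seller accepts a bid p exactly when v <= p, i.e. a cut-off at the seller's own value):

```python
import random

# Illustrative assumption: the seller's value v is uniform on [0, 100] and
# the seller accepts a bid p exactly when v <= p (cut-off P* = v). Then,
# conditional on acceptance, v is uniform on [0, p], so the buyer only
# ever receives the below-average cars: the "lemons" effect.
def mean_accepted_value(p, trials=100_000, seed=0):
    rng = random.Random(seed)
    accepted = [v for v in (rng.uniform(0, 100) for _ in range(trials))
                if v <= p]
    return sum(accepted) / len(accepted)

# For a bid of 60, the average accepted car is worth roughly 30 to the
# seller, not the unconditional average of 50.
```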

Enter the monopolized market

A new company (Player 1) that wants to enter a market monopolised by a large company (Player 2) will encounter two types of monopolist: type 1 blocks entry and type 2 allows it. Player 1 will never have complete information about Player 2, but may be able to infer the probabilities of facing type 1 and type 2 from whether previous firms entering the market were blocked; this makes it a Bayesian game. The reason for these two types is that blocking entry is costly for Player 2, who may need to make significant price cuts to keep Player 1 out of the market; Player 2 will therefore block Player 1 only when the profit that entry would take from it is greater than the blocking costs.


References

  1. Zamir, Shmuel (2009). "Bayesian Games: Games with Incomplete Information" (PDF). Encyclopedia of Complexity and Systems Science: 426. doi:10.1007/978-0-387-30440-3_29. ISBN   978-0-387-75888-6. S2CID   14218591.
  2. Harsanyi, John C. (1967/1968). "Games with Incomplete Information Played by "Bayesian" Players, I–III". Management Science. 14 (3): 159–183 (Part I); 14 (5): 320–334 (Part II); 14 (7): 486–502 (Part III).
  3. Harsanyi, John C. (1968). "Games with Incomplete Information Played by "Bayesian" Players, I-III. Part II. Bayesian Equilibrium Points". Management Science. 14 (5): 320–334. doi:10.1287/mnsc.14.5.320. ISSN   0025-1909. JSTOR   2628673.
  4. Harsanyi, John C. (1968). "Games with Incomplete Information Played by "Bayesian" Players, I-III. Part III. The Basic Probability Distribution of the Game". Management Science. 14 (7): 486–502. doi:10.1287/mnsc.14.7.486. ISSN   0025-1909. JSTOR   2628894.
  5. Kajii, A.; Morris, S. (1997). "The Robustness of Equilibria to Incomplete Information". Econometrica. 65 (6): 1283–1309. doi:10.2307/2171737. JSTOR   2171737.
  6. Grüne-Yanoff, Till; Lehtinen, Aki (2012). "Philosophy of Game Theory". Philosophy of Economics: 532.
  7. Koniorczyk, Mátyás; Bodor, András; Pintér, Miklós (29 June 2020). "Ex ante versus ex post equilibria in classical Bayesian games with a nonlocal resource". Physical Review A. 1 (6): 2–3. arXiv: 2005.12727 . Bibcode:2020PhRvA.101f2115K. doi:10.1103/PhysRevA.101.062115. S2CID   218889282.
  8. Harsanyi, John C. (2004). "Games with Incomplete Information Played by "Bayesian" Players, I-III: Part I. The Basic Model". Management Science. 50 (12): 1804–1817. doi:10.1287/mnsc.1040.0270. ISSN   0025-1909. JSTOR   30046151.
  9. Maschler, Michael; Solan, Eilon; Zamir, Shmuel (2013). Game Theory. Cambridge: Cambridge University Press. doi:10.1017/cbo9780511794216. ISBN   978-0-511-79421-6.
  10. Shoham, Yoav; Leyton-Brown, Kevin (2008). Multiagent Systems. Cambridge: Cambridge University Press. doi:10.1017/cbo9780511811654. ISBN   978-0-511-81165-4.
  11. Ponssard, J. -P.; Sorin, S. (June 1980). "The LP formulation of finite zero-sum games with incomplete information". International Journal of Game Theory. 9 (2): 99–105. doi:10.1007/bf01769767. ISSN   0020-7276. S2CID   120632621.
  12. Narahari, Y (July 2012). "Extensive Form Games" (PDF). Department of Computer Science and Automation: 1.
  13. "Strategic-form games", Game Theory, Cambridge University Press, pp. 75–143, 2013-03-21, doi:10.1017/cbo9780511794216.005, ISBN   9780511794216 , retrieved 2023-04-23
  14. Zamir, Shmuel (2009). "Bayesian Games: Games with Incomplete Information" (PDF). Encyclopedia of Complexity and Systems Science: 119. doi:10.1007/978-0-387-30440-3_29. ISBN   978-0-387-75888-6. S2CID   14218591.
  15. "Bayes' rule: a tutorial introduction to Bayesian analysis". Choice Reviews Online. 51 (6): 51-3301. 2014-01-21. doi:10.5860/choice.51-3301. ISSN   0009-4978.
  16. Peters, Hans (2015). Game Theory. Springer Texts in Business and Economics. Berlin: Springer. p. 60. doi:10.1007/978-3-662-46950-7. ISBN   978-3-662-46949-1.
  17. Albrecht, Stefano; Crandall, Jacob; Ramamoorthy, Subramanian (2016). "Belief and Truth in Hypothesised Behaviours". Artificial Intelligence . 235: 63–94. arXiv: 1507.07688 . doi:10.1016/j.artint.2016.02.004. S2CID   2599762.
  18. Caballero, William N.; Banks, David; Wu, Keru (2022-08-08). "Defense and security planning under resource uncertainty and multi-period commitments". Naval Research Logistics (NRL). 69 (7): 1009–1026. doi:10.1002/nav.22071. ISSN   0894-069X. S2CID   251461541.
  19. Maccarone, Lee Tylor (2021). Stochastic Bayesian Games for the Cybersecurity of Nuclear Power Plants. PhD Dissertation, University of Pittsburgh.
  20. Bernhard, Julian; Pollok, Stefan; Knoll, Alois (2019). "Addressing Inherent Uncertainty: Risk-Sensitive Behavior Generation for Automated Driving using Distributional Reinforcement Learning". 2019 IEEE Intelligent Vehicles Symposium (IV). Paris, France: IEEE. pp. 2148–2155. arXiv: 2102.03119 . doi:10.1109/IVS.2019.8813791. ISBN   978-1-7281-0560-4. S2CID   201811314.
  21. Asheralieva, Alia; Niyato, Dusit (2021). "Fast and Secure Computational Offloading With Lagrange Coded Mobile Edge Computing". IEEE Transactions on Vehicular Technology. 70 (5): 4924–4942. doi:10.1109/TVT.2021.3070723. ISSN   0018-9545. S2CID   234331661.
  22. Ramtin, Amir Reza; Towsley, Don (2021). "A Game-Theoretic Approach to Self-Stabilization with Selfish Agents". arXiv: 2108.07362 [cs.DC].
  23. Bacharach, M. (1999). "Interactive team reasoning: A contribution to the theory of cooperation". Research in Economics. 53 (2): 117–47. doi:10.1006/reec.1999.0188.
  24. Newton, J. (2019). "Agency equilibrium". Games. 10 (1): 14. doi: 10.3390/g10010014 . hdl: 10419/219237 .
  25. "Coursera". Coursera. Retrieved 2016-06-16.
  26. Hu, Yuhuang; Loo, Chu Kiong (2014-03-17). "A Generalized Quantum-Inspired Decision Making Model for Intelligent Agent". The Scientific World Journal. 2014: 240983. doi: 10.1155/2014/240983 . ISSN   1537-744X. PMC   3977121 . PMID   24778580.
  27. Akerlof, George A. (August 1970). "The Market for "Lemons": Quality Uncertainty and the Market Mechanism". The Quarterly Journal of Economics. 84 (3): 488–500. doi:10.2307/1879431. JSTOR   1879431.