Network games of incomplete information model strategic network formation when agents do not know in advance their neighbors, i.e., the network structure and the value stemming from forming links with neighboring agents. In such a setting, agents have prior beliefs about the value of attaching to their neighbors; they take their actions based on these prior beliefs and update their beliefs based on the history of the game. [1] While games with a fully known network structure are widely applicable, there are many applications in which players act without fully knowing with whom they interact or what their neighbors' actions will be. [2]
For example, people choosing a major in college can be formalized as a network game with imperfect information: they might know something about the number of people taking that major and might infer something about the job market for different majors, but they do not know with whom they will have to interact, and thus they do not know the structure of the network. [3]
In this setting, [3] players have private and incomplete information about the network, and this private information is interpreted as the player's own type (here, private knowledge of the player's own degree). Conditional on their own degree, players form beliefs about the degrees of their neighbors. The equilibrium concept of this game is Bayesian Nash equilibrium. The strategy of a player is a mapping from the player's degree to the player's action.
Let $\sigma(d)$ be the probability that a player of degree $d$ chooses action 1. For most degrees $d$ the action will be either 0 or 1, but in some cases a mixed strategy might occur.
The degrees of $i$'s neighbors are drawn from a degree distribution $\tilde{P}$, where $\tilde{P}(d) = \frac{d\,P(d)}{\mathbb{E}_P[d]}$ approximates the distribution over a neighbor's degree from the configuration model with respect to a degree sequence represented by $P$.
Given $\sigma$, the probability that a neighbor takes action 1 is: $p = \sum_{d} \tilde{P}(d)\,\sigma(d)$.
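As a small worked illustration (with numbers chosen purely for exposition), suppose half of the nodes have degree 1 and half have degree 3, so $P(1) = P(3) = 1/2$ and $\mathbb{E}_P[d] = 2$; then $\tilde{P}(1) = \frac{1 \cdot 1/2}{2} = 1/4$ and $\tilde{P}(3) = \frac{3 \cdot 1/2}{2} = 3/4$, and if $\sigma(1) = 1$ while $\sigma(3) = 0$, the probability that a random neighbor takes action 1 is $p = 1/4$.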
Asymptotically, the belief that exactly $m$ out of the $d$ neighbors of player $i$ choose action 1 follows a binomial distribution, $\binom{d}{m} p^{m} (1-p)^{d-m}$.
Thus, the expected utility of player $i$ of degree $d$ who takes action $x_i$ is given by: $EU_d(x_i, p) = \sum_{m=0}^{d} \binom{d}{m} p^{m} (1-p)^{d-m}\, u_d(x_i, m)$, where $u_d(x_i, m)$ is the payoff corresponding to a game played on a certain network structure, in which players choose their strategies knowing how many links they will have but not knowing which network will be realized, given the incomplete information about the link formation of neighbors.
Assuming independence of neighbors' degrees, the above formulation of the game does not require knowledge of the precise set of players. The network game is specified by defining a utility function $u_d$ for each $d$ and a distribution of neighbors' degrees $\tilde{P}(d)$.
The Bayesian equilibrium of this network game is a strategy $\sigma$ such that for each $d$, if $\sigma(d) > 0$, then $EU_d(1, p) \geq EU_d(0, p)$, and if $\sigma(d) < 1$, then $EU_d(0, p) \geq EU_d(1, p)$.
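These equilibrium conditions can be checked numerically. The following sketch (illustrative Python, not taken from the cited references; the degree distribution, cost, payoff function, and helper names are hypothetical choices) computes $\tilde{P}$, the induced probability $p$ that a random neighbor plays 1, and the expected utilities $EU_d(1,p)$ and $EU_d(0,p)$, and then verifies the Bayesian equilibrium conditions for a candidate strategy $\sigma$.

```python
from math import comb

def neighbor_degree_dist(P):
    """Size-biased distribution of a neighbor's degree: P~(d) = d*P(d)/E[d]."""
    mean_deg = sum(d * prob for d, prob in P.items())
    return {d: d * prob / mean_deg for d, prob in P.items()}

def expected_utility(d, action, p, payoff):
    """EU_d(action, p): expectation of payoff(d, action, m) over m ~ Binomial(d, p)."""
    return sum(comb(d, m) * p**m * (1 - p)**(d - m) * payoff(d, action, m)
               for m in range(d + 1))

def is_bayesian_equilibrium(sigma, P, payoff, tol=1e-9):
    """Check: sigma(d) > 0 requires EU_d(1) >= EU_d(0),
    and sigma(d) < 1 requires EU_d(0) >= EU_d(1)."""
    P_tilde = neighbor_degree_dist(P)
    p = sum(P_tilde[d] * sigma[d] for d in P)  # prob. a random neighbor plays 1
    for d in P:
        eu1 = expected_utility(d, 1, p, payoff)
        eu0 = expected_utility(d, 0, p, payoff)
        if sigma[d] > 0 and eu1 < eu0 - tol:
            return False
        if sigma[d] < 1 and eu0 < eu1 - tol:
            return False
    return True

# Hypothetical illustration: a best-shot public-good payoff with cost c
# (as in the public-good example discussed below).
c = 0.5
payoff = lambda d, x, m: (1 if x + m >= 1 else 0) - c * x
P = {1: 0.5, 3: 0.5}  # hypothetical degree distribution
print(is_bayesian_equilibrium({1: 1, 3: 0}, P, payoff))  # True
```

In this candidate strategy, low-degree players choose 1 and high-degree players choose 0, anticipating the threshold structure of the public-good example discussed next.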
Consider a network game of local provision of a public good [4] in which agents' actions are strategic substitutes (i.e., the benefit to an individual from undertaking a certain action is no greater if his partners undertake the same action). In the case of strategic substitutes, equilibrium actions are therefore non-increasing in players' degrees.
Define a finite set of players or individuals, $\{1, \ldots, N\}$, connected in some network relationship.
The simplest framework is to think of an undirected network, where two agents are either connected or not.
Connections are represented by the adjacency matrix $g$, with $g_{ij} = 1$ implying that $i$'s payoff is affected by $j$'s behavior, and $g_{ij} = 0$ otherwise.
Conventionally, $g_{ii} = 0$ for all $i$.
Define the set of neighbors of player $i$ as $N_i = \{ j : g_{ij} = 1 \}$.
The number of connections of player $i$, i.e. its degree, is given by $k_i = |N_i|$.
Every individual must independently choose an action $x_i$ in $\{0, 1\}$, where 1 indicates taking that action and 0 indicates not doing so.
The payoff is defined as a function of $x_i + \bar{x}_{N_i}$, which is the sum of $x_i$, the action chosen by agent $i$, and the aggregate action in the neighborhood, defined as $\bar{x}_{N_i} = \sum_{j \in N_i} x_j$.
The gross payoff to agent $i$ is assumed to equal 1 if $x_i + \bar{x}_{N_i} \geq 1$, and 0 otherwise. Providing the public good, i.e. choosing action 1, bears a cost $c$, where $0 < c < 1$, while action 0 bears no cost. The net payoff of the game is defined as the gross payoff minus the cost of the chosen action. Given the cost, an agent would prefer that someone in his or her neighborhood take action 1 and would rather not take the action himself or herself. If someone else in the neighborhood of $i$ contributes, the public good is provided and agent $i$ free-rides. However, if nobody in the neighborhood of $i$ contributes, agent $i$ is willing to contribute and take action 1.
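As a minimal sketch (the function and variable names here are illustrative, not taken from the cited source), the complete-information version of this best-shot payoff can be written as follows:

```python
def net_payoff(i, x, neighbors, c):
    """Best-shot public-good payoff: gross payoff 1 if agent i or any of its
    neighbors takes action 1, minus cost c if agent i contributes (0 < c < 1)."""
    aggregate = x[i] + sum(x[j] for j in neighbors[i])  # x_i + neighbors' actions
    gross = 1 if aggregate >= 1 else 0
    return gross - c * x[i]

# Example: agent 0 free-rides on its contributing neighbor 1.
x = {0: 0, 1: 1}
neighbors = {0: [1], 1: [0]}
print(net_payoff(0, x, neighbors, c=0.4))  # 1 (free-riding)
print(net_payoff(1, x, neighbors, c=0.4))  # 0.6 (contributes and pays the cost)
```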
Under imperfect information (players form beliefs about the degrees of their neighbors, summarized by a probability distribution), a player's pure strategy can be defined as a mapping from degree $k$ to action $x(k) \in \{0, 1\}$. Suppose that between any two of the $N$ agents a link is formed independently with probability $\pi$. The probability that any randomly selected neighbor is of degree $k$ is the probability that the neighbor is connected to $k-1$ additional agents out of the remaining $N-2$ agents, and is given by:
$\tilde{P}(k) = \binom{N-2}{k-1} \pi^{k-1} (1-\pi)^{N-1-k}$.
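A short numerical sketch of this neighbor-degree distribution (the values of $N$ and $\pi$ and the function name are illustrative assumptions):

```python
from math import comb

def neighbor_degree_prob(k, N, pi):
    """Probability that a randomly selected neighbor has degree k: it already has
    the link to the player, plus k-1 further links among the remaining N-2 agents."""
    return comb(N - 2, k - 1) * pi**(k - 1) * (1 - pi)**(N - 1 - k)

N, pi = 50, 0.05
dist = {k: neighbor_degree_prob(k, N, pi) for k in range(1, N)}
print(sum(dist.values()))  # ~1.0, sanity check that the probabilities sum to one
print(dist[3])             # probability that a neighbor has degree 3
```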
If an agent of degree $k$ chooses action 1 in equilibrium, it follows from degree independence (assuming that $N$ is infinitely large) that an agent of degree $k-1$ faces a lower likelihood that at least one of its neighbors chooses action 1, and would be best responding by choosing action 1 as well. Indeed, given the payoffs above, an agent weakly prefers action 1 exactly when the probability that none of its neighbors contributes is at least $c$, and this probability is non-increasing in the agent's degree. It can be shown that any equilibrium is therefore characterized by a threshold; denote by $t$ the smallest integer for which the public good will be provided.
An equilibrium must satisfy $\sigma(k) = 1$ for all $k < t$, $\sigma(k) = 0$ for all $k > t$, and $\sigma(t) \in [0, 1]$. In particular, $\sigma$ is non-increasing.
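The threshold can be located numerically. The sketch below is illustrative only: it restricts attention to pure-strategy threshold profiles (so it may return nothing when only a mixed strategy at degree $t$ works) and relies on the condition derived above, namely that a degree-$k$ agent weakly prefers contributing if and only if $(1-q)^k \geq c$, where $q$ is the probability that a random neighbor contributes.

```python
from math import comb

def neighbor_degree_prob(k, N, pi):
    """P~(k): probability that a randomly selected neighbor has degree k."""
    return comb(N - 2, k - 1) * pi**(k - 1) * (1 - pi)**(N - 1 - k)

def find_threshold(N, pi, c):
    """Search for a pure-strategy threshold t: degrees k < t contribute,
    degrees k >= t free-ride. A degree-k agent weakly prefers contributing
    iff (1-q)**k >= c, where q is the probability a random neighbor contributes."""
    degrees = range(1, N)
    for t in range(1, N + 1):
        q = sum(neighbor_degree_prob(k, N, pi) for k in degrees if k < t)
        ok_contributors = all((1 - q)**k >= c for k in degrees if k < t)
        ok_free_riders = all((1 - q)**k <= c for k in degrees if k >= t)
        if ok_contributors and ok_free_riders:
            return t
    return None  # only mixed-strategy thresholds exist for these parameters

print(find_threshold(N=50, pi=0.05, c=0.4))  # 3: degrees 1-2 contribute, 3+ free-ride
```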
It can be seen that the underlying network structure and the relation between network connections and actions influence the outcome of the game. Social connections create personal advantages: players with degree greater than $t$ obtain a higher expected payoff than the less connected players of degree less than $t$.