Graphical game theory

In game theory, the common ways to describe a game are the normal form and the extensive form. The graphical form is an alternative, compact representation of a game that exploits the structure of the interactions among the participants.

Game theory is the study of mathematical models of strategic interaction between rational decision-makers. It has applications in all fields of social science, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers.

In game theory, normal form is a description of a game. Unlike extensive form, normal-form representations are not graphical per se, but rather represent the game by way of a matrix. While this approach can be of greater use in identifying strictly dominated strategies and Nash equilibria, some information is lost as compared to extensive-form representations. The normal-form representation of a game includes all perceptible and conceivable strategies, and their corresponding payoffs, for each player.

An extensive-form game is a specification of a game in game theory, allowing for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the information each player has about the other players' moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature".

Consider a game with $n$ players, each having $m$ strategies. We represent the players as nodes of a graph in which each player has a utility function that depends only on his own strategy and the strategies of his neighbors. Because each utility function depends on fewer of the other players, the graphical representation is smaller.

Formal definition

A graphical game is represented by a graph $G$, in which each player is represented by a node, and there is an edge between two nodes $i$ and $j$ iff their utility functions depend on the strategy that the other player will choose. Each node $i$ in $G$ has a function $u_i : \{1,\dots,m\}^{d_i+1} \to \mathbb{R}$, where $d_i$ is the degree of vertex $i$. The function $u_i$ specifies the utility of player $i$ as a function of his own strategy as well as the strategies of his neighbors.
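
As an illustration of this definition, the sketch below stores a graphical game as adjacency lists plus one local payoff table per player. It is a minimal Python rendering for exposition only; the class name `GraphicalGame` and the dictionary-keyed tables are illustrative choices, not a standard API.

```python
class GraphicalGame:
    """Minimal sketch: a graph plus, for each player, a payoff table
    indexed by the strategies of that player and its neighbors."""

    def __init__(self, num_strategies):
        self.m = num_strategies   # strategies per player, labelled 0 .. m-1
        self.neighbors = {}       # player -> sorted list of neighboring players
        self.payoff = {}          # player -> {(own strategy, *neighbor strategies): utility}

    def add_player(self, i, neighbors):
        self.neighbors[i] = sorted(neighbors)
        self.payoff[i] = {}

    def set_payoff(self, i, local_profile, utility):
        """local_profile is a tuple (s_i, s_j1, s_j2, ...) over i's neighborhood."""
        self.payoff[i][tuple(local_profile)] = utility

    def utility(self, i, profile):
        """Utility of player i under a full profile (dict player -> strategy);
        only the strategies of i and its neighbors are consulted."""
        key = (profile[i],) + tuple(profile[j] for j in self.neighbors[i])
        return self.payoff[i][key]
```

For a player $i$ of degree $d_i$, the table holds $m^{d_i+1}$ entries, matching the domain of $u_i$ above.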

The size of the game's representation

For a general game with $n$ players, each having $m$ possible strategies, the normal-form representation requires listing $m^{n}$ utility values for every player, a total size of $O(n\,m^{n})$. The graphical representation stores, for each player, a table of size at most $m^{d+1}$, where $d$ is the maximal node degree in the graph, for a total size of $O(n\,m^{d+1})$. If $d \ll n$, then the graphical game representation is much smaller.
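
As a concrete illustration (the numbers are chosen purely for illustration): with $n = 100$ players, $m = 2$ strategies each, and maximal degree $d = 3$,

$$n\,m^{n} = 100 \cdot 2^{100} \approx 1.3 \times 10^{32}, \qquad n\,m^{d+1} = 100 \cdot 2^{4} = 1600,$$

so the graphical form stores 1,600 utility values where the normal form would need an astronomical number.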

An example

In the case where each player's utility function depends on only one other player:

The maximal degree of the graph is 1, and the game can be described by $n$ functions (tables), each of size $m^{2}$. So, the total size of the input will be $n\,m^{2}$.
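
A tiny Python sketch of this bookkeeping (the player count, strategy count and the pairing of players are made up for illustration):

```python
import random

n, m = 6, 3     # 6 players, 3 strategies each (illustrative numbers)

# Degree-1 graph: players are paired up, so player i's only neighbor is its partner.
partner = {i: i + 1 if i % 2 == 0 else i - 1 for i in range(n)}

# One table per player, indexed by (own strategy, partner's strategy): m * m entries each.
tables = {i: {(s, t): random.random() for s in range(m) for t in range(m)} for i in range(n)}

total_entries = sum(len(table) for table in tables.values())
assert total_entries == n * m**2   # 6 * 9 = 54 numbers describe the whole game

profile = [0] * n                                      # everyone plays strategy 0
u0 = tables[0][(profile[0], profile[partner[0]])]      # player 0's utility depends only on its partner
```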

Nash equilibrium

Finding a Nash equilibrium of a general game takes time exponential in the size of its representation. If the graphical representation of the game is a tree, the equilibrium can be found in polynomial time. In the general case, where the maximal degree of a node is 3 or more, the problem is NP-complete.
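
As a brute-force illustration only (this is not the polynomial-time tree algorithm referred to above): the Python sketch below enumerates all $m^{n}$ pure strategy profiles, so it runs in time exponential in the number of players, even though checking any single profile consults only the local payoff tables. Pure equilibria need not exist at all, in which case it returns an empty list. The table layout follows the earlier sketch and is illustrative.

```python
from itertools import product

def is_pure_nash(profile, neighbors, payoff, m):
    """Check a pure strategy profile using only local payoff tables.
    profile: tuple of strategies, one per player; neighbors[i]: list of i's
    neighbors; payoff[i]: dict keyed by (own strategy, *neighbor strategies)."""
    for i, s_i in enumerate(profile):
        key = lambda s: (s,) + tuple(profile[j] for j in neighbors[i])
        current = payoff[i][key(s_i)]
        # A profitable unilateral deviation rules out equilibrium.
        if any(payoff[i][key(s)] > current for s in range(m)):
            return False
    return True

def pure_nash_equilibria(n, m, neighbors, payoff):
    """Exhaustive search over all m**n profiles (exponential in n),
    even though each individual check only touches local tables."""
    return [p for p in product(range(m), repeat=n)
            if is_pure_nash(p, neighbors, payoff, m)]
```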

Further reading

Vijay Vazirani

Vijay Virkumar Vazirani is an Indian American distinguished professor of computer science in the Donald Bren School of Information and Computer Sciences at the University of California, Irvine.

Noam Nisan is an Israeli computer scientist, a professor of computer science at the Hebrew University of Jerusalem. He is known for his research in computational complexity theory and algorithmic game theory.

Timothy Avelin Roughgarden is a professor of computer science at Columbia University. He received his Ph.D. from Cornell University in 2002, under the supervision of Éva Tardos.

Related Research Articles

Minimax is a decision rule used in artificial intelligence, decision theory, game theory, statistics and philosophy for minimizing the possible loss for a worst case scenario. When dealing with gains, it is referred to as "maximin"—to maximize the minimum gain. Originally formulated for two-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty.
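
For a two-player zero-sum game given by the row player's payoff matrix, maximin and minimax can be read off directly; the matrix below is made up purely for illustration.

```python
# Row player's payoffs in a two-player zero-sum game (illustrative numbers).
payoff = [
    [3, -1, 2],
    [1,  0, 4],
    [-2, 5, 1],
]

# Maximin for the row player: assume the column player inflicts the worst case.
maximin = max(min(row) for row in payoff)           # best guaranteed payoff for the row player
# Minimax for the column player (who pays out the row player's payoff).
minimax = min(max(col) for col in zip(*payoff))

print(maximin, minimax)   # 0 and 3 here; they coincide only if the game has a pure saddle point
```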

In graph theory, a planar graph is a graph that can be embedded in the plane, i.e., it can be drawn on the plane in such a way that its edges intersect only at their endpoints. In other words, it can be drawn in such a way that no edges cross each other. Such a drawing is called a plane graph or planar embedding of the graph. A plane graph can be defined as a planar graph with a mapping from every node to a point on a plane, and from every edge to a plane curve on that plane, such that the extreme points of each curve are the points mapped from its end nodes, and all curves are disjoint except on their extreme points.

In game theory and economic theory, a zero-sum game is a mathematical representation of a situation in which each participant's gain or loss of utility is exactly balanced by the losses or gains of the utility of the other participants. If the total gains of the participants are added up and the total losses are subtracted, they will sum to zero. Thus, cutting a cake, where taking a larger piece reduces the amount of cake available for others, is a zero-sum game if all participants value each unit of cake equally.

In game theory, the Nash equilibrium, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.

Breadth-first search

Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data structures. It starts at the tree root, and explores all of the neighbor nodes at the present depth prior to moving on to the nodes at the next depth level.
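
A standard sketch of BFS in Python, using a queue to expand nodes level by level (the example graph is a 4-cycle, chosen only for illustration):

```python
from collections import deque

def bfs_order(graph, start):
    """Return nodes in breadth-first order from `start`.
    `graph` maps each node to an iterable of its neighbors."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(nxt)
    return order

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # a 4-cycle, for illustration
print(bfs_order(graph, 0))                             # [0, 1, 2, 3]
```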

In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.
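
A small illustration in Python, using a made-up 2x2 coordination game: the row player's best response is simply the row that maximizes its payoff against the fixed column choice.

```python
# Row player's payoffs in a 2x2 coordination game (illustrative numbers).
row_payoff = [[2, 0],
              [0, 1]]

def best_response_to_column(col):
    """Row player's best response(s) to a fixed column choice."""
    values = [row_payoff[r][col] for r in (0, 1)]
    best = max(values)
    return [r for r in (0, 1) if values[r] == best]

print(best_response_to_column(0))   # [0]: against column 0, row 0 is best
print(best_response_to_column(1))   # [1]: against column 1, row 1 is best
```

Here (row 0, column 0) and (row 1, column 1) are mutual best responses, which is exactly what makes them Nash equilibria of this little game.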

Maximal independent set

In graph theory, a maximal independent set (MIS) or maximal stable set is an independent set that is not a subset of any other independent set. In other words, no vertex outside the set can be added to it without violating independence; the set is maximal with respect to the independent-set property.
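
A one-pass greedy construction is enough to produce a maximal (not necessarily maximum) independent set; the sketch below is illustrative Python, with a 4-vertex path as the example graph.

```python
def greedy_maximal_independent_set(graph):
    """Greedy sketch: scan the vertices once, adding a vertex whenever it has
    no neighbor already in the set. The result is maximal (nothing more can be
    added) but not necessarily maximum (largest possible)."""
    independent = set()
    for v in graph:
        if all(u not in independent for u in graph[v]):
            independent.add(v)
    return independent

# A path on 4 vertices, for illustration: 0 - 1 - 2 - 3
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(greedy_maximal_independent_set(graph))   # {0, 2}; {0, 3} and {1, 3} are other maximal sets
```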

In game theory, a Bayesian game is a game in which the players have incomplete information about the other players, but hold beliefs about them in the form of known probability distributions.

In game theory, a repeated game is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of his or her current action on the future actions of other players; this impact is sometimes called his or her reputation. Single stage game or single shot game are names for non-repeated games.

In game theory, a correlated equilibrium is a solution concept that is more general than the well known Nash equilibrium. It was first discussed by mathematician Robert Aumann in 1974. The idea is that each player chooses their action according to their observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from the recommended strategy, the distribution is called a correlated equilibrium.

Parity game

A parity game is played on a colored directed graph, where each node has been colored by a priority – one of (usually) finitely many natural numbers. Two players, 0 and 1, move a token along the edges of the graph. The owner of the node that the token falls on selects the successor node, resulting in a path, called the play.

The Price of Anarchy (PoA) is a concept in economics and game theory that measures how the efficiency of a system degrades due to selfish behavior of its agents. It is a general notion that can be extended to diverse systems and notions of efficiency. For example, consider the system of transportation of a city and many agents trying to go from some initial location to a destination. Let efficiency in this case mean the average time for an agent to reach the destination. In the 'centralized' solution, a central authority can tell each agent which path to take in order to minimize the average travel time. In the 'decentralized' version, each agent chooses its own path. The Price of Anarchy measures the ratio between average travel time in the two cases.
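
Pigou's two-link network is the standard small illustration of this ratio; the sketch below is illustrative Python only (the grid search merely approximates the optimal split).

```python
# Pigou's example: one unit of selfish traffic from s to t over two parallel links.
# Link 1 has constant latency 1; link 2 has latency equal to its own load x.
def total_cost(x):
    """Average travel time when a fraction x of the traffic uses link 2."""
    return (1 - x) * 1 + x * x

nash_cost = total_cost(1.0)    # selfish equilibrium: everyone uses link 2 (its latency never exceeds 1)
optimal_cost = min(total_cost(k / 1000) for k in range(1001))   # grid search for the best split
print(nash_cost / optimal_cost)   # ~1.33: the price of anarchy of this instance is 4/3
```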

A continuous game is a mathematical concept, used in game theory, that generalizes the idea of an ordinary game like tic-tac-toe or checkers (draughts). In other words, it extends the notion of a discrete game, where the players choose from a finite set of pure strategies. The continuous game concept allows games to include more general sets of pure strategies, which may be uncountably infinite.

In game theory, the price of stability (PoS) of a game is the ratio between the best objective function value of one of its equilibria and that of an optimal outcome. The PoS is relevant for games in which there is some objective authority that can influence the players a bit, and maybe help them converge to a good Nash equilibrium. When measuring how efficient a Nash equilibrium is in a specific game, we often also talk about the price of anarchy (PoA).

Congestion games are a class of games in game theory first proposed by Rosenthal in 1973. In a Congestion game we define players and resources, where the payoff of each player depends on the resources it chooses and the number of players choosing the same resource. Congestion games are a special case of potential games. Rosenthal proved that any congestion game is a potential game and Monderer and Shapley (1996) proved the converse: for any potential game, there is a congestion game with the same potential function.
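
Rosenthal's potential function is easy to write down; the sketch below is illustrative Python, with two players choosing between two roads whose per-user costs are made up.

```python
def rosenthal_potential(choices, cost):
    """Rosenthal's potential for a congestion game.
    `choices` maps each player to the set of resources it uses;
    `cost[r][k]` is the per-user cost of resource r when k players use it.
    The potential sums cost[r][1] + ... + cost[r][load_r] over used resources."""
    load = {}
    for resources in choices.values():
        for r in resources:
            load[r] = load.get(r, 0) + 1
    return sum(sum(cost[r][k] for k in range(1, load[r] + 1)) for r in load)

# Two players, two roads; each road gets more expensive the more players use it (illustrative).
cost = {"road_a": {1: 1, 2: 3}, "road_b": {1: 2, 2: 4}}
print(rosenthal_potential({"p1": {"road_a"}, "p2": {"road_a"}}, cost))  # 1 + 3 = 4
print(rosenthal_potential({"p1": {"road_a"}, "p2": {"road_b"}}, cost))  # 1 + 2 = 3
```

A unilateral deviation changes this potential by exactly the deviating player's change in cost, which is why every congestion game has a pure Nash equilibrium.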

In algorithmic game theory, a succinct game or a succinctly representable game is a game which may be represented in a size much smaller than its normal form representation. Without placing constraints on player utilities, describing a game of $n$ players, each facing $m$ strategies, requires listing $n\,m^{n}$ utility values. Even trivial algorithms are capable of finding a Nash equilibrium in a time polynomial in the length of such a large input. A succinct game is of polynomial type if, in a game represented by a string of length $n$, the number of players, as well as the number of strategies of each player, is bounded by a polynomial in $n$.

Mertens stability is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens for games with finite numbers of players and strategies. Later, Mertens proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. This solution concept is now called Mertens stability, or just stability.

Strategic network formation defines how and why networks take particular forms. In many networks, the relation between nodes is determined by the choices of the participating players, not by an arbitrary rule. A "strategic" model of network formation specifies a network's costs and benefits and predicts how individual preferences become outcomes.