Consider a game of three players, I, II and III, facing, respectively, the strategy sets {T,B}, {L,R}, and {l,r}. Without further constraints, 3·2^{3} = 24 utility values would be required to describe such a game.
        L, l     L, r     R, l     R, r
T     4, 6, 2  5, 5, 5  8, 1, 7  1, 4, 9
B     8, 6, 6  7, 4, 7  9, 6, 5  0, 3, 0
For each strategy profile, the utility of the first player is listed first, followed by the utilities of the second player and the third player.
In algorithmic game theory, a succinct game or a succinctly representable game is a game which may be represented in a size much smaller than its normal form representation. Without placing constraints on player utilities, describing a game of n players, each facing s strategies, requires listing n·s^{n} utility values. Even trivial algorithms are capable of finding a Nash equilibrium in time polynomial in the length of such a large input. A succinct game is of polynomial type if, in a game represented by a string of length n, the number of players, as well as the number of strategies of each player, is bounded by a polynomial in n^{ [1] } (a formal definition, describing succinct games as a computational problem, is given by Papadimitriou & Roughgarden 2008^{ [2] }).
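As a quick sanity check on the count n·s^{n}, a few lines of Python (the helper name is ours, for illustration only):

```python
# Utility values needed for the full normal form: each of the n players
# needs one number for every strategy profile, and there are s**n profiles.
def normal_form_size(n_players: int, n_strategies: int) -> int:
    return n_players * n_strategies ** n_players

# The 3-player, 2-strategy game above needs 3 * 2**3 = 24 values.
assert normal_form_size(3, 2) == 24

# The count explodes as players are added: 20 players, 2 strategies each.
print(normal_form_size(20, 2))  # 20 * 2**20 = 20971520
```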
Say that each player's utility depends only on his own action and the action of one other player: for instance, player I's utility depends on player II's action, player II's on player III's, and player III's on player I's. Representing such a game requires only three 2×2 utility tables, containing in all only 12 utility values.

Graphical games are games in which the utility of each player depends on the actions of only a few other players. If d is the greatest number of players by whose actions any single player is affected (that is, the maximal indegree of the game graph), the number of utility values needed to describe the game is n·s^{d+1}, which, for a small d, is a considerable improvement.
It has been shown that any normal form game is reducible to a graphical game with all degrees bounded by three and with two strategies for each player.^{ [3] } Unlike normal form games, the problem of finding a pure Nash equilibrium in graphical games (if one exists) is NP-complete.^{ [4] } The problem of finding a (possibly mixed) Nash equilibrium in a graphical game is PPAD-complete.^{ [5] } Finding a correlated equilibrium of a graphical game can be done in polynomial time, and for a graph with a bounded treewidth, this is also true for finding an optimal correlated equilibrium.^{ [2] }
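A minimal sketch of the idea, using the dependency cycle described above (I watches II, II watches III, III watches I). Each player's table has s^{d+1} = 4 entries, 12 in all; the utility numbers themselves are invented for illustration and are not taken from any table in the article:

```python
from itertools import product

# A graphical game: each player's utility depends only on his own action and
# his neighbors' actions.  Utility numbers below are hypothetical.
neighbors = {0: [1], 1: [2], 2: [0]}   # I watches II, II watches III, III watches I
strategies = {0: ['T', 'B'], 1: ['L', 'R'], 2: ['l', 'r']}

# utility[i][(own action, neighbor's action)] -> payoff
utility = {
    0: {('T', 'L'): 4, ('T', 'R'): 1, ('B', 'L'): 3, ('B', 'R'): 2},
    1: {('L', 'l'): 2, ('L', 'r'): 0, ('R', 'l'): 1, ('R', 'r'): 3},
    2: {('l', 'T'): 5, ('l', 'B'): 0, ('r', 'T'): 1, ('r', 'B'): 4},
}

def payoff(i, profile):
    key = (profile[i],) + tuple(profile[j] for j in neighbors[i])
    return utility[i][key]

def pure_nash(profile):
    # No player may gain by unilaterally switching his own strategy.
    return all(payoff(i, profile) >=
               payoff(i, profile[:i] + (s,) + profile[i + 1:])
               for i in range(3) for s in strategies[i])

# Brute force over all pure strategy profiles.
equilibria = [p for p in product(*strategies.values()) if pure_nash(p)]
print(equilibria)  # [('T', 'L', 'l'), ('B', 'R', 'r')]
```

Brute force is exponential in the number of players, consistent with the NP-completeness result above; the succinctness is in the representation, not in the search.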
When most of the utilities are 0, as below, it is easy to come up with a succinct representation.  
        L, l     L, r     R, l     R, r
T     0, 0, 0  2, 0, 1  0, 0, 0  0, 7, 0
B     0, 0, 0  0, 0, 0  2, 0, 3  0, 0, 0
Sparse games are those where most of the utilities are zero. Graphical games may be seen as a special case of sparse games.
For a two-player game, a sparse game may be defined as a game in which each row and column of the two payoff (utility) matrices has at most a constant number of nonzero entries. It has been shown that finding a Nash equilibrium in such a sparse game is PPAD-hard, and that there does not exist a fully polynomial-time approximation scheme unless PPAD is in P.^{ [6] }
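A natural succinct representation stores only the nonzero entries; a sketch using the triples from the sparse table above:

```python
# Sparse representation: keep only the nonzero utility triples, keyed by
# strategy profile.  Entries taken from the sparse example table above.
nonzero = {
    ('T', 'L', 'r'): (2, 0, 1),
    ('T', 'R', 'r'): (0, 7, 0),
    ('B', 'R', 'l'): (2, 0, 3),
}

def utilities(profile):
    # Any profile not listed has all-zero utilities.
    return nonzero.get(profile, (0, 0, 0))

print(utilities(('B', 'R', 'l')))   # (2, 0, 3)
print(utilities(('T', 'L', 'l')))   # (0, 0, 0)
```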
Suppose all three players are identical (we'll color them all purple), and face the strategy set {T,B}. Let #TP and #BP be the number of a player's peers who've chosen T and B, respectively. Describing this game requires only 6 utility values.

In symmetric games all players are identical, so in evaluating the utility of a combination of strategies, all that matters is how many of the players play each of the strategies. Thus, describing such a game requires giving only s·C(n+s-2, s-1) utility values, where C denotes the binomial coefficient (for the example above, 2·C(3, 1) = 6).
In a symmetric game with 2 strategies there always exists a pure Nash equilibrium, although a symmetric pure Nash equilibrium may not exist.^{ [7] } The problem of finding a pure Nash equilibrium in a symmetric game (with possibly more than two players) with a constant number of actions is in AC^{0}; however, when the number of actions grows with the number of players (even linearly) the problem is NP-complete.^{ [8] } In any symmetric game there exists a symmetric equilibrium. Given a symmetric game of n players facing k strategies, a symmetric equilibrium may be found in polynomial time if k = O(log n / log log n).^{ [9] } Finding a correlated equilibrium in symmetric games may be done in polynomial time.^{ [2] }
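The count for symmetric games is s·C(n+s-2, s-1): s choices of one's own strategy times the number of multisets of the n-1 peers' choices. A quick check reproducing the 6 values of the purple example (the helper name is ours):

```python
from math import comb

# In a symmetric game, a player's utility depends only on his own strategy
# and on how many of his n-1 peers play each of the s strategies.
# Multisets of peer choices: comb(n + s - 2, s - 1); times s own strategies.
def symmetric_size(n: int, s: int) -> int:
    return s * comb(n + s - 2, s - 1)

# The 3-player, 2-strategy example needs 2 * comb(3, 1) = 6 values.
assert symmetric_size(3, 2) == 6

# Compare with the full normal form of the same game, n * s**n = 24 values.
print(symmetric_size(3, 2), 3 * 2 ** 3)
```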
If players were different but did not distinguish between other players, we would need to list 18 utility values to represent the game: one table such as that given for symmetric games above for each player.

In anonymous games, players have different utilities but do not distinguish between other players (for instance, having to choose between "go to cinema" and "go to bar" while caring only about how crowded each place will be, not whom they will meet there). In such a game a player's utility again depends on his own strategy and on how many of his peers choose which strategy, so n·s·C(n+s-2, s-1) utility values are required (C the binomial coefficient).
If the number of actions grows with the number of players, finding a pure Nash equilibrium in an anonymous game is NP-hard.^{ [8] } An optimal correlated equilibrium of an anonymous game may be found in polynomial time.^{ [2] } When the number of strategies is 2, there is a known PTAS for finding an ε-approximate Nash equilibrium.^{ [10] }
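An anonymous game is, in effect, one symmetric-style table per player, so the count is n times the symmetric one; checking the 18 values claimed above (helper name ours):

```python
from math import comb

# Anonymous game: each player has his own utilities but still only counts
# how many peers chose each strategy, so n copies of the symmetric count.
def anonymous_size(n: int, s: int) -> int:
    return n * s * comb(n + s - 2, s - 1)

# Three distinct players, two strategies: 3 * 2 * comb(3, 1) = 18 values.
print(anonymous_size(3, 2))  # 18
```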
If the game in question were a polymatrix game, describing it would require 24 utility values. For simplicity, let us examine only the utilities of player I (we would need two more such tables for each of the other players).
If the strategy profile (B, R, l) were chosen, player I's utility would be 9 + 8 = 17, player II's utility would be 1 + 2 = 3, and player III's utility would be 6 + 4 = 10.
In a polymatrix game (also known as a multimatrix game), there is a utility matrix for every ordered pair of players (i, j), denoting a component of player i's utility. Player i's final utility is the sum of all such components. The number of utility values required to represent such a game is n(n-1)s^{2}.
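A sketch of how utilities are assembled in a polymatrix game. The 2×2 component matrices below are hypothetical (the ones behind the worked example above are not reproduced in the text); with n = 3 players and s = 2 strategies they hold n(n-1)s^{2} = 24 numbers in all:

```python
# Polymatrix game: one component matrix per ordered pair (i, j); player i's
# total utility is the sum of his components against every other player.
# component[(i, j)][a_i][a_j] = i's payoff from the pairwise game with j.
# All matrix entries below are hypothetical, for illustration only.
component = {
    (0, 1): [[1, 0], [0, 2]], (0, 2): [[0, 3], [1, 0]],
    (1, 0): [[2, 1], [0, 0]], (1, 2): [[1, 1], [3, 0]],
    (2, 0): [[0, 2], [2, 1]], (2, 1): [[1, 0], [0, 4]],
}

def utility(i, profile):
    # Sum player i's pairwise components against every other player j.
    return sum(component[(i, j)][profile[i]][profile[j]]
               for j in range(len(profile)) if j != i)

# Player 0's utility at profile (1, 1, 0): component vs 1 plus component vs 2.
print(utility(0, (1, 1, 0)))  # 2 + 1 = 3
```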
Polymatrix games always have at least one mixed Nash equilibrium.^{ [11] } The problem of finding a Nash equilibrium in a polymatrix game is PPAD-complete.^{ [5] } Moreover, the problem of finding a constant approximate Nash equilibrium in a polymatrix game is also PPAD-complete.^{ [12] } Finding a correlated equilibrium of a polymatrix game can be done in polynomial time.^{ [2] } Note that even if the pairwise games played between players have pure Nash equilibria, the global interaction does not necessarily admit a pure Nash equilibrium (although a mixed Nash equilibrium must exist).
Competitive polymatrix games with only zero-sum interactions between players are a generalization of two-player zero-sum games. The minimax theorem, originally formulated for two-player games by von Neumann, generalizes to zero-sum polymatrix games.^{ [13] } As in two-player zero-sum games, zero-sum polymatrix games have mixed Nash equilibria that can be computed in polynomial time, and those equilibria coincide with correlated equilibria. But some other properties of two-player zero-sum games do not generalize. Notably, players need not have a unique value of the game, and equilibrium strategies are not max-min strategies, in the sense that worst-case payoffs of players are not maximized when using an equilibrium strategy.
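As a toy illustration of the two-player zero-sum special case, here is fictitious play (a classic learning dynamic, not taken from the article) on matching pennies; the empirical strategy frequencies are known to approach the (1/2, 1/2) minimax equilibrium of this game:

```python
# Matching pennies: row player wins on a match, column player gets -A.
A = [[1, -1], [-1, 1]]

row_counts, col_counts = [1, 0], [1, 0]  # both players open with action 0 once
for _ in range(5000):
    # Each player best-responds to the opponent's empirical action counts.
    row = max((0, 1), key=lambda a: sum(A[a][b] * col_counts[b] for b in (0, 1)))
    col = max((0, 1), key=lambda b: sum(-A[a][b] * row_counts[a] for a in (0, 1)))
    row_counts[row] += 1
    col_counts[col] += 1

row_freq = row_counts[0] / sum(row_counts)
col_freq = col_counts[0] / sum(col_counts)
print(round(row_freq, 2), round(col_freq, 2))  # both close to 0.5
```

For general zero-sum polymatrix games the polynomial-time algorithms referenced above are based on linear programming rather than learning dynamics; this sketch only illustrates the two-player case.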
Polymatrix games which have coordination games on their edges are potential games^{ [14] } and can be solved using a potential function method.
Let us now equate the players' various strategies with the Boolean values "0" and "1", and let X stand for player I's choice, Y for player II's choice, and Z for player III's choice. Let us assign each player a circuit: Player I: X ∧ (Y ∨ Z); Player II: X ⊕ Y ⊕ Z; Player III: X ∨ Y ∨ Z. These circuits describe the utility table below (rows: X; columns: Y, Z).
        0, 0     0, 1     1, 0     1, 1
0     0, 0, 0  0, 1, 0  0, 1, 1  0, 0, 1
1     0, 1, 1  1, 0, 1  1, 0, 1  1, 1, 1
The most flexible way of representing a succinct game is to represent each player by a polynomial-time bounded Turing machine, which takes as its input the actions of all players and outputs the player's utility. Such a Turing machine is equivalent to a Boolean circuit, and it is this representation, known as circuit games, that we will consider.
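The circuits can be expanded back into the full table mechanically. Player I's circuit is the one given in the text; the XOR and OR circuits used here for players II and III can be read off the second and third coordinates of the table above:

```python
from itertools import product

# Each player's utility is computed by a Boolean circuit over all players'
# binary choices X, Y, Z.
def u1(x, y, z): return x & (y | z)   # Player I:   X AND (Y OR Z)
def u2(x, y, z): return x ^ y ^ z     # Player II:  X XOR Y XOR Z
def u3(x, y, z): return x | y | z     # Player III: X OR Y OR Z

# Evaluating the circuits over all 2**3 profiles recovers the utility table.
table = {(x, y, z): (u1(x, y, z), u2(x, y, z), u3(x, y, z))
         for x, y, z in product((0, 1), repeat=3)}
print(table[(1, 0, 1)])   # (1, 0, 1)
```

The succinctness is striking in the other direction: a circuit with a handful of gates can stand in for a table that would be exponentially large in the number of players.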
Computing the value of a two-player zero-sum circuit game is an EXP-complete problem,^{ [15] } and approximating the value of such a game up to a multiplicative factor is known to be in PSPACE.^{ [16] } Determining whether a pure Nash equilibrium exists is a Σ₂ᵖ-complete problem (see Polynomial hierarchy).^{ [17] }
Many other types of succinct game exist (many having to do with allocation of resources). Examples include congestion games, network congestion games, scheduling games, local effect games, facility location games, action-graph games, hypergraphical games and more.
Below is a table of some known complexity results for finding certain classes of equilibria in several game representations. "NE" stands for "Nash equilibrium", and "CE" for "correlated equilibrium". n is the number of players and s is the number of strategies each player faces (we're assuming all players face the same number of strategies). In graphical games, d is the maximum indegree of the game graph. For references, see main article text.
Representation     Size (O(...))           Pure NE        Mixed NE                       CE  Optimal CE
Normal form game   n·s^{n}                 NP-complete    PPAD-complete                  P   P
Graphical game     n·s^{d+1}               NP-complete    PPAD-complete                  P   NP-hard
Symmetric game     s·C(n+s-2, s-1)         NP-complete    PPAD-hard (symmetric NE, two   P   P
                                                          players); NP-complete (non-
                                                          symmetric NE, two players)
Anonymous game     n·s·C(n+s-2, s-1)       NP-hard        —                              P   P
Polymatrix game    n·(n-1)·s^{2}           —              PPAD-complete (polynomial      P   NP-hard
                                                          for zero-sum polymatrix)
Circuit game       —                       Σ₂ᵖ-complete   —                              —   —
Congestion game    —                       PLS-complete   —                              P   NP-hard
("—" denotes no result listed; C denotes the binomial coefficient.)