Price of anarchy

The Price of Anarchy (PoA) [1] is a concept in economics and game theory that measures how the efficiency of a system degrades due to selfish behavior of its agents. It is a general notion that can be extended to diverse systems and notions of efficiency. For example, consider a city's transportation network, with many agents each trying to travel from an initial location to a destination. Here, efficiency means the average time for an agent to reach the destination. In the 'centralized' solution, a central authority can tell each agent which path to take in order to minimize the average travel time. In the 'decentralized' version, each agent chooses its own path. The Price of Anarchy measures the ratio between the average travel time in the two cases.

Usually the system is modeled as a game and the efficiency is some function of the outcomes (e.g. maximum delay in a network, congestion in a transportation system, social welfare in an auction, etc.). Different concepts of equilibrium can be used to model the selfish behavior of the agents, among which the most common is the Nash equilibrium. Different flavors of Nash equilibrium lead to variations of the notion of Price of Anarchy, such as the Pure Price of Anarchy (for deterministic equilibria), the Mixed Price of Anarchy (for randomized equilibria), and the Bayes–Nash Price of Anarchy (for games with incomplete information). Solution concepts other than Nash equilibrium lead to variations such as the Price of Sinking. [2]

The term Price of Anarchy was first used by Elias Koutsoupias and Christos Papadimitriou, [1] [3] but the idea of measuring inefficiency of equilibrium is older. [4] The concept in its current form was designed to be the analogue of the 'approximation ratio' in an approximation algorithm or the 'competitive ratio' in an online algorithm. This is in the context of the current trend of analyzing games using algorithmic lenses (algorithmic game theory).

Mathematical definition

Consider a game $G = (N, S, u)$, defined by a set of players $N$, strategy sets $S_i$ for each player $i$, and utilities $u_i : S \rightarrow \mathbb{R}$ (where $S = S_1 \times \dots \times S_n$ is also called the set of outcomes). We can define a measure of the efficiency of each outcome, which we call the welfare function $\mathrm{Welf} : S \rightarrow \mathbb{R}$. Natural candidates include the sum of the players' utilities (utilitarian objective) $\mathrm{Welf}(s) = \sum_{i \in N} u_i(s)$, the minimum utility (fairness or egalitarian objective) $\mathrm{Welf}(s) = \min_{i \in N} u_i(s)$, or any function that is meaningful for the particular game being analyzed and that is desirable to maximize.

We can define a subset $Equil \subseteq S$ to be the set of strategy profiles in equilibrium (for example, the set of Nash equilibria). The Price of Anarchy is then defined as the ratio between the optimal 'centralized' solution and the 'worst equilibrium':

$PoA = \dfrac{\max_{s \in S} \mathrm{Welf}(s)}{\min_{s \in Equil} \mathrm{Welf}(s)}$

If, instead of a 'welfare' which we want to 'maximize', the function measuring efficiency is a 'cost function' $\mathrm{Cost} : S \rightarrow \mathbb{R}$ which we want to 'minimize' (e.g. delay in a network), we use (following the convention in approximation algorithms):

$PoA = \dfrac{\max_{s \in Equil} \mathrm{Cost}(s)}{\min_{s \in S} \mathrm{Cost}(s)}$

A related notion is that of the Price of Stability (PoS), which measures the ratio between the optimal 'centralized' solution and the 'best equilibrium':

$PoS = \dfrac{\max_{s \in S} \mathrm{Welf}(s)}{\max_{s \in Equil} \mathrm{Welf}(s)}$

or, in the case of cost functions:

$PoS = \dfrac{\min_{s \in Equil} \mathrm{Cost}(s)}{\min_{s \in S} \mathrm{Cost}(s)}$

We know that $1 \le PoS \le PoA$ by definition. It is expected that the loss in efficiency due to game-theoretic constraints is somewhere between PoS and PoA.
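
Computationally, these definitions translate directly into code. The following Python sketch (all names here are illustrative, not from any library) evaluates the welfare-maximization versions of PoA and PoS given an explicit list of outcomes, a list of equilibria, and a welfare function.

```python
def price_of_anarchy(outcomes, equilibria, welfare):
    """PoA: best achievable welfare divided by the welfare of the worst equilibrium."""
    best = max(welfare(s) for s in outcomes)
    worst_equilibrium = min(welfare(s) for s in equilibria)
    return best / worst_equilibrium

def price_of_stability(outcomes, equilibria, welfare):
    """PoS: best achievable welfare divided by the welfare of the best equilibrium."""
    best = max(welfare(s) for s in outcomes)
    best_equilibrium = max(welfare(s) for s in equilibria)
    return best / best_equilibrium

# Toy usage: two outcomes with welfare 10 and 4; only the inefficient one is an equilibrium.
outcomes = ["good", "bad"]
welfare_of = {"good": 10.0, "bad": 4.0}
print(price_of_anarchy(outcomes, ["bad"], welfare_of.get))    # 2.5
print(price_of_stability(outcomes, ["bad"], welfare_of.get))  # 2.5
```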

Examples

Prisoner's dilemma

Consider the 2x2 game called prisoner's dilemma, given by the following cost matrix:

             Cooperate    Defect
Cooperate    1, 1         7, 0
Defect       0, 7         5, 5

and let the cost function be $C(s_1, s_2) = c_1(s_1, s_2) + c_2(s_1, s_2)$, the sum of the two players' costs (the entries of the matrix above). Now, the worst (and only) Nash equilibrium is when both players defect, and the resulting cost is $C_{equil} = 5 + 5 = 10$. However, the highest social welfare occurs when both cooperate, in which case the cost is $C_{min} = 1 + 1 = 2$. Thus the PoA of this game is $C_{equil}/C_{min} = 10/2 = 5$.

Since the game has a unique Nash equilibrium, the PoS is equal to the PoA and it is 5 too.
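
The value 5 can be verified mechanically. The sketch below (illustrative code, not taken from any source) enumerates the pure Nash equilibria of the cost matrix above by checking unilateral deviations, then applies the cost version of the PoA formula.

```python
from itertools import product

# cost[i][r][c] = cost to player i when the row player plays r and the
# column player plays c (0 = Cooperate, 1 = Defect), as in the matrix above.
cost = [
    [[1, 7], [0, 5]],   # row player's costs
    [[1, 0], [7, 5]],   # column player's costs
]
strategies = (0, 1)
profiles = list(product(strategies, repeat=2))

def is_pure_nash(profile):
    r, c = profile
    # Neither player can strictly lower their own cost by deviating alone.
    row_ok = all(cost[0][r][c] <= cost[0][r2][c] for r2 in strategies)
    col_ok = all(cost[1][r][c] <= cost[1][r][c2] for c2 in strategies)
    return row_ok and col_ok

def social_cost(profile):
    r, c = profile
    return cost[0][r][c] + cost[1][r][c]

equilibria = [p for p in profiles if is_pure_nash(p)]
optimum = min(social_cost(p) for p in profiles)              # 2: both cooperate
worst_equilibrium = max(social_cost(p) for p in equilibria)  # 10: both defect
print(equilibria, worst_equilibrium / optimum)               # [(1, 1)] 5.0
```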

Job scheduling

A more natural example is the one of job scheduling. There are $N$ players, and each of them has a job to run. They can choose one of $M$ machines to run the job on. The Price of Anarchy compares the situation where the selection of machines is guided/directed centrally to the situation where each player chooses the machine that will make its job run fastest.

Each machine $j$ has a speed $s_j > 0$. Each job $i$ has a weight $w_i > 0$. A player picks a machine to run his or her job on, so the strategy set of each player $i$ is $A_i = \{1, 2, \dots, M\}$. Define the load on machine $j$ under the action profile $a = (a_1, \dots, a_N)$ to be:

$L_j(a) = \dfrac{\sum_{i \,:\, a_i = j} w_i}{s_j}$

The cost for player $i$ is $c_i(a) = L_{a_i}(a)$, i.e., the load of the machine they chose. We consider the egalitarian cost function $\mathrm{MS}(a) = \max_j L_j(a)$, here called the makespan.
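
A minimal sketch of the model (with illustrative names only) makes the load and makespan definitions concrete:

```python
def machine_loads(assignment, weights, speeds):
    """L_j(a): total weight assigned to machine j, divided by its speed."""
    load = [0.0] * len(speeds)
    for job, machine in enumerate(assignment):
        load[machine] += weights[job]
    return [load[j] / speeds[j] for j in range(len(speeds))]

def makespan(assignment, weights, speeds):
    """Egalitarian cost: the largest machine load."""
    return max(machine_loads(assignment, weights, speeds))

# Three jobs (weights 2, 1, 1) on two unit-speed machines; job 0 runs alone on machine 0.
print(makespan([0, 1, 1], weights=[2, 1, 1], speeds=[1, 1]))  # 2.0
```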

We consider two concepts of equilibrium: pure Nash and mixed Nash. It should be clear that mixed PoA ≥ pure PoA, because any pure Nash equilibrium is also a mixed Nash equilibrium (this inequality can be strict: e.g. when $N = 2$, $w_1 = w_2 = 1$, $M = 2$, and $s_1 = s_2 = 1$, the mixed strategies in which each player picks each machine with probability 1/2 achieve an expected makespan of 1.5, while the pure PoA in this setting is 1). First we need to argue that there exist pure Nash equilibria.

Claim. For each job scheduling game, there exists at least one pure-strategy Nash equilibrium.

Proof. We would like to take a socially optimal action profile $a$. This would mean simply an action profile whose makespan is minimum. However, this will not be enough. There may be several such action profiles, leading to a variety of different load distributions (all having the same maximum load). Among these, we further restrict ourselves to those that have a minimum second-largest load. Again, this results in a set of possible load distributions, and we repeat until the $M$-th largest (i.e., smallest) load, where there can only be one distribution of loads (unique up to permutation of the machines). This would also be called the lexicographically smallest sorted load vector.

We claim that this $a$ is a pure-strategy Nash equilibrium. Reasoning by contradiction, suppose that some player $i$ could strictly improve by moving from machine $j$ to machine $k$. This means that the increased load of machine $k$ after the move is still smaller than the load of machine $j$ before the move. As the load of machine $j$ must decrease as a result of the move and no other machine is affected, the only loads that change are those of machines $j$ and $k$, and both end up strictly below the previous load of machine $j$; hence the new configuration has a lexicographically smaller sorted load vector. This, however, violates the assumed lexicographic minimality of $a$. Q.E.D.

Claim. For each job scheduling game, the pure PoA is at most $M$.

Proof. It is easy to upper-bound the makespan obtained at any pure-strategy Nash equilibrium $a$ by

$\mathrm{MS}(a) \le \dfrac{\sum_i w_i}{s_{\max}}, \qquad s_{\max} = \max_j s_j,$

since otherwise a player on the most loaded machine would profit from moving to a fastest machine (and if a fastest machine is itself the most loaded, the bound holds trivially). Consider, for clarity of exposition, any pure-strategy action profile $a$: clearly

$\mathrm{MS}(a) = \max_j L_j(a) \;\ge\; \dfrac{\sum_j s_j L_j(a)}{\sum_j s_j} \;=\; \dfrac{\sum_i w_i}{\sum_j s_j} \;\ge\; \dfrac{\sum_i w_i}{M \cdot s_{\max}}.$

Since this lower bound holds for the social optimum as well, comparing the two bounds proves the claim. Q.E.D.
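
Both claims can be checked by brute force on small instances. The sketch below (illustrative code) enumerates every assignment, keeps the pure Nash equilibria, and verifies that the worst equilibrium makespan is within a factor $M$ of the optimum.

```python
from itertools import product

def loads(a, w, s):
    load = [0.0] * len(s)
    for job, machine in enumerate(a):
        load[machine] += w[job]
    return [load[j] / s[j] for j in range(len(s))]

def makespan(a, w, s):
    return max(loads(a, w, s))

def is_pure_nash(a, w, s):
    load = loads(a, w, s)
    for job, machine in enumerate(a):
        for other in range(len(s)):
            # Cost the player would pay after unilaterally moving to `other`.
            if other != machine and load[other] + w[job] / s[other] < load[machine]:
                return False
    return True

w, s = [4, 3, 2, 2], [2, 1]                      # four jobs, M = 2 machines
profiles = list(product(range(len(s)), repeat=len(w)))
optimum = min(makespan(a, w, s) for a in profiles)
nash = [a for a in profiles if is_pure_nash(a, w, s)]
worst = max(makespan(a, w, s) for a in nash)
print(len(nash) > 0, worst / optimum <= len(s))  # True True: pure NE exist, pure PoA <= M
```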

Selfish routing

Braess's paradox

[Figure: Braess's paradox road network example]

Consider a road network, as shown in the diagram above, on which 4000 drivers wish to travel from point Start to End. The travel time in minutes on the Start–A road is the number of travelers (T) divided by 100, and on Start–B it is a constant 45 minutes (and likewise for the roads across from them: A–End takes 45 minutes and B–End takes T/100). If the dashed road does not exist (so the traffic network has 4 roads in total), the time needed to drive the Start–A–End route with $a$ drivers would be $\frac{a}{100} + 45$, and the time needed to drive the Start–B–End route with $b$ drivers would be $45 + \frac{b}{100}$. As there are 4000 drivers, the fact that $a + b = 4000$ can be used to derive the fact that $a = b = 2000$ when the system is at equilibrium. Therefore, each route takes $\frac{2000}{100} + 45 = 65$ minutes. If either route took less time, it would not be a Nash equilibrium: a rational driver would switch from the longer route to the shorter route.

Now suppose the dashed line A–B is a road with an extremely short travel time of approximately 0 minutes. Suppose that the road is opened and one driver tries Start–A–B–End. To his surprise he finds that his time is $\frac{2001}{100} + \frac{2000}{100} = 40.01$ minutes, a saving of almost 25 minutes. Soon, more of the 4000 drivers are trying this new route. The time taken rises from 40.01 and keeps climbing. When the number of drivers trying the new route reaches 2500, with 1500 still in the Start–B–End route, their time will be $\frac{2500}{100} + \frac{4000}{100} = 65$ minutes, which is no improvement over the original route. Meanwhile, those 1500 drivers have been slowed to $45 + \frac{4000}{100} = 85$ minutes, a 20-minute increase. They are obliged to switch to the new route via A too, so it now takes $\frac{4000}{100} + \frac{4000}{100} = 80$ minutes. Nobody has any incentive to travel A–End or Start–B because any driver trying them will take 85 minutes. Thus, the opening of the cross route triggers an irreversible change to it by everyone, costing everyone 80 minutes instead of the original 65. If every driver were to agree not to use the A–B path, or if that route were closed, every driver would benefit by a 15-minute reduction in travel time.
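
The travel times quoted above are easy to reproduce. The following sketch (illustrative code) evaluates the three routes for the equilibrium splits with and without the shortcut.

```python
def travel_times(on_shortcut, via_a, via_b):
    """Minutes on Start-A-B-End, Start-A-End and Start-B-End, given the number of drivers
    on each route; Start-A and B-End take T/100 minutes, A-End and Start-B take 45."""
    start_a = (on_shortcut + via_a) / 100
    b_end = (on_shortcut + via_b) / 100
    return (start_a + 0 + b_end,   # Start-A-B-End (the A-B shortcut is free)
            start_a + 45,          # Start-A-End
            45 + b_end)            # Start-B-End

# Without the shortcut the drivers split 2000/2000 at equilibrium.
_, t_a, t_b = travel_times(0, 2000, 2000)
print(t_a, t_b)            # 65.0 65.0

# With the shortcut the only equilibrium sends all 4000 drivers through A-B.
t_ab, t_a, t_b = travel_times(4000, 0, 0)
print(t_ab, t_a, t_b)      # 80.0 85.0 85.0 -> everyone is 15 minutes worse off
```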

Generalized routing problem

The routing problem introduced in Braess's paradox can be generalized to many different flows traversing the same graph at the same time.

Definition (Generalized flow). Let $G = (V, E)$ be a graph, and let $L$ be a set of nondecreasing latency functions $\ell_e$, one for each edge $e \in E$ (as in the Braess example above). Suppose that we want to route the quantities $R = \{r_1, r_2, \dots, r_k\}$ through each distinct pair of nodes in $\Gamma = \{(s_1, t_1), (s_2, t_2), \dots, (s_k, t_k)\}$. A flow $f_{\Gamma,R}$ is defined as an assignment of a real, nonnegative number $f_p$ to each path $p$ going from some $s_i$ to the corresponding $t_i$, with the constraint that

$\sum_{p \,:\, s_i \rightarrow t_i} f_p = r_i \quad \forall (s_i, t_i) \in \Gamma.$

The flow traversing a specific edge $e$ of $G$ is defined as

$f_e = \sum_{p \,:\, e \in p} f_p.$

For succinctness, we write $f$ when $\Gamma$ and $R$ are clear from context.

Definition (Nash-equilibrium flow). A flow $f_{\Gamma,R}$ is a Nash-equilibrium flow iff, for every $(s_i, t_i) \in \Gamma$ and every pair of paths $p, q$ from $s_i$ to $t_i$,

$f_p > 0 \;\Rightarrow\; \sum_{e \in p} \ell_e(f_e) \le \sum_{e \in q} \ell_e(f_e).$

This definition is closely related to the characterization of the support of mixed-strategy Nash equilibria in normal-form games: flow is positive only on minimum-latency paths, just as probability mass is placed only on best-response pure strategies.

Definition (Conditional welfare of a flow). Let $f_{\Gamma,R}$ and $f^*_{\Gamma,R}$ be two flows in $G$ associated with the same sets $\Gamma$ and $R$. In what follows, we will drop the subscript to make the notation clearer. Assume to fix the latencies induced by $f$ on the graph: the conditional welfare of $f^*$ with respect to $f$ is defined as

$w^f(f^*) = \sum_{e \in E} f^*_e \cdot \ell_e(f_e).$

Note that the welfare (total latency) of a flow is $w(f) = \sum_{e \in E} f_e \cdot \ell_e(f_e) = w^f(f)$.

Fact 1. Given a Nash-equilibrium flow $f$ and any other flow $f^*$, $w(f) = w^f(f) \le w^f(f^*)$.

Proof (By contradiction). Assume that $w^f(f^*) < w^f(f)$. By definition,

$\sum_{(s_i,t_i) \in \Gamma} \; \sum_{p \,:\, s_i \rightarrow t_i} f^*_p \sum_{e \in p} \ell_e(f_e) \;<\; \sum_{(s_i,t_i) \in \Gamma} \; \sum_{p \,:\, s_i \rightarrow t_i} f_p \sum_{e \in p} \ell_e(f_e).$

Since $f$ and $f^*$ are associated with the same sets $\Gamma, R$, we know that

$\sum_{p \,:\, s_i \rightarrow t_i} f_p = \sum_{p \,:\, s_i \rightarrow t_i} f^*_p = r_i \quad \forall (s_i, t_i) \in \Gamma.$

Therefore, there must be a pair $(s_i, t_i)$ and two paths $p, q$ from $s_i$ to $t_i$ such that $f^*_p > f_p$, $f^*_q < f_q$, and

$\sum_{e \in p} \ell_e(f_e) < \sum_{e \in q} \ell_e(f_e).$

In other words, the flow $f^*$ can achieve a lower welfare than $f$ only if there are two paths from $s_i$ to $t_i$ having different costs, and if $f^*$ reroutes some flow of $f$ from the higher-cost path to the lower-cost path. This situation is clearly incompatible with the assumption that $f$ is a Nash-equilibrium flow, since $f_q > f^*_q \ge 0$ means that $f$ sends traffic along the more expensive path $q$. Q.E.D.

Note that Fact 1 does not assume any particular structure on the set $L$ of latency functions.
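
As a quick numeric illustration of Fact 1 (an illustrative sketch, using the two-parallel-link network that also appears below), the conditional welfare of the optimal flow, priced at the latencies fixed by the Nash flow, is indeed at least the welfare of the Nash flow:

```python
# Two parallel links with latencies l1(x) = x and l2(x) = 1, unit total flow.
latencies = [lambda x: x, lambda x: 1.0]

def welfare(flow):
    """w(f) = sum_e f_e * l_e(f_e): total latency of the flow."""
    return sum(fe * le(fe) for fe, le in zip(flow, latencies))

def conditional_welfare(f_star, f):
    """w^f(f*) = sum_e f*_e * l_e(f_e): f* priced at the latencies induced by f."""
    return sum(fs * le(fe) for fs, fe, le in zip(f_star, f, latencies))

f_nash = [1.0, 0.0]   # Wardrop equilibrium: everything on the first link (cost 1 <= 1)
f_opt = [0.5, 0.5]    # the flow minimising total latency

print(welfare(f_nash), welfare(f_opt))      # 1.0 0.75
print(conditional_welfare(f_opt, f_nash))   # 1.0, which is >= w(f_nash) as Fact 1 states
```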

Fact 2. Given any two real numbers $x$ and $y$, $x \cdot y \le x^2 + y^2/4$.

Proof. This is another way of writing the inequality $(x - y/2)^2 \ge 0$, which is always true. Q.E.D.

Theorem. The pure PoA of any generalized routing problem $(G, L)$ with linear latencies is at most $4/3$.

Proof. Note that this theorem is equivalent to saying that for each Nash-equilibrium flow $f$, $w(f) \le (4/3) \cdot \min_{f^*} \{w(f^*)\}$, where $f^*$ is any other flow. Writing each latency as $\ell_e(x) = a_e x + b_e$ with $a_e, b_e \ge 0$, by definition

$w^f(f^*) = \sum_{e \in E} (a_e f_e + b_e) f^*_e = \sum_{e \in E} \big( a_e f_e f^*_e + b_e f^*_e \big).$

By using Fact 2, we have that

$w^f(f^*) \le \sum_{e \in E} \Big( a_e \big( (f^*_e)^2 + \tfrac{(f_e)^2}{4} \big) + b_e f^*_e \Big) \le w(f^*) + \dfrac{w(f)}{4},$

since

$\sum_{e \in E} a_e \dfrac{(f_e)^2}{4} \le \dfrac{1}{4} \sum_{e \in E} (a_e f_e + b_e) f_e = \dfrac{w(f)}{4}.$

We can conclude that $w^f(f^*) \le w(f^*) + w(f)/4$, and prove the thesis using Fact 1: $w(f) \le w^f(f^*) \le w(f^*) + w(f)/4$, hence $w(f) \le \tfrac{4}{3}\, w(f^*)$. Q.E.D.
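
The 4/3 bound is tight: in Pigou's two-link example with latencies $\ell_1(x) = x$ and $\ell_2(x) = 1$ and unit flow, the Nash flow puts everything on the first link (total latency 1), while splitting the flow in half gives total latency 3/4. A brute-force check (illustrative sketch):

```python
def total_latency(x):
    """Total cost when a fraction x of the unit flow uses the link with latency x."""
    return x * x + (1.0 - x) * 1.0

equilibrium_cost = total_latency(1.0)   # the Wardrop equilibrium: everyone on link 1
optimal_cost = min(total_latency(i / 10000) for i in range(10001))
print(equilibrium_cost / optimal_cost)  # ~1.3333, i.e. exactly the 4/3 bound
```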

Note that in the proof we have made extensive use of the assumption that the functions in $L$ are linear. Actually, a more general fact holds.

Theorem. Given a generalized routing problem with graph $G$ and polynomial latency functions of degree $d$ with nonnegative coefficients, the pure PoA is at most $d + 1$.

Note that the PoA can grow with $d$. Consider the simple example of two parallel links from a source to a sink, one with latency function $\ell(x) = x^d$ and the other with constant latency $\ell(x) = 1$, and assume unit total flow. The Nash-equilibrium flow routes everything along the first link and has social welfare (total latency) 1; however, the best welfare is achieved when a fraction $1 - \frac{1}{\sqrt[d]{d+1}}$ of the flow uses the constant-latency link, in which case

$w = \left( \dfrac{1}{\sqrt[d]{d+1}} \right)^{d+1} + \left( 1 - \dfrac{1}{\sqrt[d]{d+1}} \right) = \dfrac{1}{(d+1)^{(d+1)/d}} + 1 - \dfrac{1}{(d+1)^{1/d}}.$

This quantity tends to zero when $d$ tends to infinity, so the PoA of this example, which equals $1/w$, grows without bound.
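
The growth of the PoA with the degree $d$ can be seen numerically. The sketch below (illustrative code, brute-force minimisation over a grid) computes the optimal total latency of the two-link example and the resulting PoA, which is $1/w$ since the Nash cost is always 1.

```python
def optimal_cost(d, steps=10000):
    """Minimal total latency for links with latencies x**d and 1, unit total flow."""
    return min((i / steps) ** (d + 1) + (1 - i / steps) for i in range(steps + 1))

for d in (1, 2, 4, 8, 16):
    w = optimal_cost(d)
    print(d, round(w, 4), round(1 / w, 2))   # degree, optimal cost, PoA = 1 / cost
```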

Further results

PoA upper bounds can be obtained if the game is shown to satisfy a so-called smoothness inequality. More precisely, a cost-minimization game is (λ, μ)-smooth (with λ ≥ 0 and μ < 1) if the inequality

$\sum_i C_i(a^*_i, a_{-i}) \le \lambda \cdot C(a^*) + \mu \cdot C(a)$

holds for any pair of outcomes $a$ and $a^*$, where $C_i$ is player $i$'s cost and $C$ is the social cost. In this case, the PoA is upper-bounded by λ/(1 − μ). [5]
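
To see where this bound comes from, here is the standard one-line derivation, sketched under the usual assumption of the smoothness framework that the social cost is at most the sum of the players' costs, $C(a) \le \sum_i C_i(a)$. If $a$ is a pure Nash equilibrium and $a^*$ an optimal outcome, then

$C(a) \;\le\; \sum_i C_i(a) \;\le\; \sum_i C_i(a^*_i, a_{-i}) \;\le\; \lambda\, C(a^*) + \mu\, C(a),$

where the second inequality is the Nash condition (no player gains by unilaterally switching to $a^*_i$) and the third is smoothness; rearranging gives $C(a) \le \frac{\lambda}{1-\mu}\, C(a^*)$.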

For cost-sharing games with concave cost functions, the optimal cost-sharing rule (the one that first optimizes the price of anarchy and then the price of stability) is precisely the Shapley value cost-sharing rule. [6] (A symmetrical statement holds for utility-sharing games with convex utility functions.) In mechanism design, this means that the Shapley value solution concept is optimal for these sets of games.

Moreover, for these (finite) games it was proven that every equilibrium which achieves the PoA bound is fragile, in the sense that the agents are indifferent between their equilibrium action and the action they would play in a system-optimal outcome. [7]


References

  1. Koutsoupias, Elias; Papadimitriou, Christos (May 2009). "Worst-case Equilibria". Computer Science Review. 3 (2): 65–69. doi:10.1016/j.cosrev.2009.04.003.
  2. Goemans, M.; Mirrokni, V.; Vetta, A. (2005). "Sink Equilibria and Convergence". FOCS 2005.
  3. Chung, Christine; Ligett, Katrina; Pruhs, Kirk; Roth, Aaron (2008). "The Price of Stochastic Anarchy". In Monien, Burkhard; Schroeder, Ulf-Peter (eds.), Algorithmic Game Theory. Vol. 4997. Berlin, Heidelberg: Springer. pp. 303–314. doi:10.1007/978-3-540-79309-0_27. ISBN 978-3-540-79308-3.
  4. Dubey, P. (1986). "Inefficiency of Nash Equilibria". Mathematics of Operations Research. 11 (1): 1–8.
  5. Roughgarden, Tim (2015). "Intrinsic Robustness of the Price of Anarchy". Journal of the ACM. 62 (5): 1–42. doi:10.1145/2806883. ISSN 0004-5411.
  6. Phillips, Matthew; Marden, Jason R. (July 2018). "Design Tradeoffs in Concave Cost-Sharing Games". IEEE Transactions on Automatic Control. 63 (7): 2242–2247. doi:10.1109/tac.2017.2765299. ISSN 0018-9286. S2CID 45923961.
  7. Seaton, Joshua H.; Brown, Philip N. (2023). "On the Intrinsic Fragility of the Price of Anarchy". IEEE Control Systems Letters. 7: 3573–3578. doi:10.1109/LCSYS.2023.3335315. ISSN 2475-1456.
