Market equilibrium computation (also called competitive equilibrium computation or clearing-prices computation) is a computational problem at the intersection of economics and computer science. The input to this problem is a market, consisting of a set of resources and a set of agents. There are various kinds of markets, such as Fisher markets and Arrow–Debreu markets, with divisible or indivisible resources. The required output is a competitive equilibrium, consisting of a price-vector (a price for each resource) and an allocation (a resource-bundle for each agent), such that each agent gets the best bundle he can afford given his budget, and the market clears (all resources are allocated).
Market equilibrium computation is interesting because a competitive equilibrium is always Pareto efficient. The special case of a Fisher market in which all buyers have equal incomes is particularly interesting, since in this setting a competitive equilibrium is also envy-free. Therefore, market equilibrium computation is a way to find an allocation that is both fair and efficient.
Since the 1960s, there have been attempts to apply general equilibrium theory to support policy decisions on subjects such as tax reform or simultaneous tariff reductions. These models are typically large, so efficient computation is needed. [1]
The input to the market-equilibrium-computation consists of the following ingredients: [2] : chap.5
The required output should contain the following ingredients:
The output should satisfy the following requirements:
A price and allocation satisfying these requirements are called a competitive equilibrium (CE) or a market equilibrium; the prices are also called equilibrium prices or clearing prices.
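For example (an illustrative instance, not taken from the cited sources): consider a Fisher market with two buyers, each with a budget of 1, and two divisible goods, each with a supply of 1. Buyer 1 has the linear utility $u_1 = 2x_{1,1} + x_{1,2}$ and buyer 2 has $u_2 = x_{2,1} + 2x_{2,2}$. At the prices $p = (1, 1)$, the best affordable bundle for buyer 1 is one unit of good 1, and the best affordable bundle for buyer 2 is one unit of good 2; both goods are fully allocated, so these prices and this allocation form a competitive equilibrium.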
Market equilibrium computation has been studied under various assumptions regarding the agents' utility functions.
Herbert Scarf [3] presented a proof of existence of a CE using Sperner's lemma (see Fisher market). He converted this proof to an algorithm for computing an approximate CE. In his later work, he continued to develop these algorithms. [4]
Merrill [5] gave an extended algorithm for approximate CE.
Other algorithms for fixed-point computation, such as the homotopy method, can also be used to compute CE.
None of these algorithms is guaranteed to run in polynomial time.
Papadimitriou (who invented the class PPAD) [6] proved that computing an approximate CE for Arrow-Debreu markets given by aggregate excess demand functions is PPAD-complete. Later results have shown PPAD-hardness even for more specific classes of utility functions:
Complementing these results, Garg, Mehta, Vazirani and Yazdanbod [12] show that computing an approximate CE with PLC utilities is in PPAD. The main technical challenge was to show that an approximate fixed-point corresponds to an approximate CE.
Etessami and Yannakakis (who defined the complexity class FIXP) [13] proved that computing CE prices for exchange markets with algebraic demand functions is FIXP-complete. Later results have shown FIXP-hardness for more specific classes of utilities:
For some special cases, polynomial-time algorithms have been developed.
Eaves [15] showed that, in an exchange market with Cobb-Douglas utilities, the CE can be written as the solution to a linear program; hence it is possible to compute all CE in polynomial time.
Deng, Papadimitriou and Safra [16] : Thm.2 present a polytime algorithm for finding the CE when m is bounded and the utilities are linear.
Kakade, Kearns and Ortiz [17] : Sub.5.1 generalize the above algorithm for bounded m. Their generalized algorithm computes an approximate CE for a general class of non-linear utility functions.
Newman and Primak [18] studied two variants of the ellipsoid method for finding a CE in an Arrow-Debreu market with linear utilities. They prove that the inscribed ellipsoid method is more computationally efficient than the circumscribed ellipsoid method.
Codenotti and Varadarajan [19] gave a polytime algorithm for Fisher markets with Leontief utilities. Their approach extends to a wider family of utilities, which includes CES utilities. However, unlike in the linear case, the equilibrium prices can be irrational, which means that an exact computation is not possible.
Codenotti, McCune, Penumatcha and Varadarajan [20] gave a polytime algorithm for Arrow-Debreu markets with CES utilities where the elasticity of substitution is at least 1/2.
Codenotti, Pemmaraju, Raman and Varadarajan [21] presented a polytime algorithm for exchange markets with weak gross substitute utilities; these generalize linear, Cobb-Douglas, CES and even some non-homogeneous utility functions.
Chen, Deng, Sun and Yao [22] gave a polytime algorithm for Fisher markets with logarithmic utilities, when either m or n is constant.
Kamal Jain [23] introduced a convex program (already described in 1983 by Nenakov and Primak) that characterizes the CE for exchange markets with linear utilities, CES utilities with r>0, and some other utility functions. He also proved that for linear utilities there exists a normalized CE with rational prices. Jain used this property to develop a variant of the ellipsoid method to compute the CE exactly in polytime. Later, Ye [24] showed how to use interior-point methods, which are much more efficient in practice. Codenotti and Varadarajan [25] presented a different convex program that characterizes the CE also for CES utilities with -1 < r < 0.
Devanur, Papadimitriou, Saberi and Vazirani [26] gave a polynomial-time algorithm for exactly computing an equilibrium for Fisher markets with linear utility functions. Their algorithm uses the primal–dual paradigm in the enhanced setting of KKT conditions and convex programs. Their algorithm is weakly-polynomial: it repeatedly solves maximum-flow problems, and its running time is polynomial in n, m, log(umax) and log(Bmax), where umax and Bmax are the maximum utility and budget, respectively.
Orlin [27] gave a faster algorithm for the Fisher market model with linear utilities, and then further improved his algorithm to run in strongly-polynomial time.
Devanur and Kannan [8] gave algorithms for Arrow-Debreu markets with concave utility functions, where all resources are goods (the utilities are positive):
Garg, Mehta, Vazirani and Yazdanbod [12] gave a polytime algorithm for Leontief utilities when n is constant and m is variable.
Bogomolnaia, Moulin, Sandomirskiy and Yanovskaia studied the existence and properties of CE in a Fisher market with bads (items with negative utilities) [28] and with a mixture of goods and bads. [29] In contrast to the setting with goods, when the resources are bads the CE does not solve any convex optimization problem even with linear utilities. CE allocations correspond to local minima, local maxima, and saddle points of the product of utilities on the Pareto frontier of the set of feasible utilities. The CE rule becomes multivalued. This work has led to several papers on algorithms for finding CE in such markets:
If both n and m are variable, the problem becomes computationally hard:
When the goods are indivisible, a CE may not exist, but it may be possible to compute an approximate CE.
Deng, Papadimitriou and Safra [16] study exchange markets with m goods, which may be indivisible. They show the following:
When the utilities are linear, the bang-per-buck of agent i (also called BPB or utility-per-coin) is defined as the utility of i divided by the price paid. The BPB of agent i for a single resource j is $u_{i,j}/p_j$; his total BPB is his total utility divided by his total spending: $bpb_i = \left(\sum_j u_{i,j}\, x_{i,j}\right) / \left(\sum_j p_j\, x_{i,j}\right)$.
A key observation for finding a CE in a Fisher market with linear utilities is that, in any CE and for any agent i: (1) the total BPB of i is at least the BPB he gets from any single resource, i.e., $bpb_i \geq u_{i,j}/p_j$ for all j; and (2) agent i consumes only resources that give him the maximum possible BPB, i.e., $x_{i,j} > 0$ implies $u_{i,j}/p_j = bpb_i$. [2]
Assume that every product j has a potential buyer - a buyer i with $u_{i,j} > 0$. Then, the above inequalities imply that $p_j > 0$ for every product j, i.e., all prices are positive.
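As a concrete illustration (a minimal sketch in Python, not taken from the cited sources; the function name and the tolerance are illustrative), the per-resource BPB values of a single buyer at given prices, and the set of resources that maximize them, can be computed as follows:

```python
def bang_per_buck(u_i, p, tol=1e-9):
    """Per-resource bang-per-buck of one buyer with linear utilities.

    u_i[j] -- the buyer's utility for one unit of resource j
    p[j]   -- the (positive) price of resource j
    Returns the list of BPB values and the indices of the resources
    attaining the maximum BPB (the resources the buyer "likes").
    """
    bpb = [u_i[j] / p[j] for j in range(len(p))]
    best = max(bpb)
    liked = [j for j in range(len(p)) if bpb[j] >= best - tol]
    return bpb, liked
```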
Cell decomposition [8] is a process of partitioning the space of possible prices into small "cells", either by hyperplanes or, more generally, by polynomial surfaces. A cell is defined by specifying on which side of each of these surfaces it lies (with polynomial surfaces, the cells are also known as semialgebraic sets). For each cell, we either find a market-clearing price-vector (i.e., a price in that cell for which a market-clearing allocation exists), or verify that the cell does not contain a market-clearing price-vector. The challenge is to find a decomposition with the following properties:
If the utilities of all agents are homogeneous functions, then the equilibrium conditions in the Fisher model can be written as solutions to a convex optimization program called the Eisenberg-Gale convex program. [34] This program finds an allocation that maximizes the weighted geometric mean of the buyers' utilities, where the weights are determined by the budgets. Equivalently, it maximizes the weighted arithmetic mean of the logarithms of the utilities:

Maximize $\sum_{i} B_i \log u_i(x_i)$

subject to: $x_{i,j} \geq 0$ for all i and j, and $\sum_i x_{i,j} \leq 1$ for every product j (since supplies are normalized to 1).
This optimization problem can be solved using the Karush–Kuhn–Tucker conditions (KKT). These conditions introduce Lagrangian multipliers that can be interpreted as the prices $p_1,\dots,p_m$. In every allocation that maximizes the Eisenberg-Gale program, every buyer receives a demanded bundle; i.e., a solution to the Eisenberg-Gale program represents a market equilibrium. [2] : 141–142
A special case of homogeneous utilities is when all buyers have linear utility functions. We assume that each resource has a potential buyer - a buyer that derives positive utility from that resource. Under this assumption, market-clearing prices exist and are unique. The proof is based on the Eisenberg-Gale program. The KKT conditions imply that the optimal solutions (allocations $x_{i,j}$ and prices $p_j$) satisfy the following inequalities:
Assume that every product j has a potential buyer - a buyer i with $u_{i,j} > 0$. Then, inequality 3 implies that $p_j > 0$, i.e., all prices are positive. Then, inequality 2 implies that all supplies are exhausted. Inequality 4 implies that all buyers' budgets are exhausted, i.e., the market clears. Since the log function is strictly concave, if there is more than one equilibrium allocation then the utility derived by each buyer must be the same in both allocations (a decrease in the utility of one buyer cannot be compensated by an increase in the utility of another buyer). This, together with inequality 4, implies that the prices are unique. [2] : 107
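The Eisenberg-Gale program can be solved numerically with any convex-optimization package. The following Python sketch (assuming the cvxpy library; the market data are the illustrative two-buyer example from above, not taken from the cited sources) maximizes the budget-weighted sum of log-utilities and reads the equilibrium prices off the dual variables of the supply constraints:

```python
import cvxpy as cp
import numpy as np

# Illustrative data: 2 buyers, 2 goods, unit supplies.
U = np.array([[2.0, 1.0],   # linear utilities u[i][j]
              [1.0, 2.0]])
B = np.array([1.0, 1.0])    # budgets

x = cp.Variable(U.shape, nonneg=True)            # allocation x[i][j]
utilities = cp.sum(cp.multiply(U, x), axis=1)    # u_i(x_i) for each buyer
supply = cp.sum(x, axis=0) <= 1                  # supplies normalized to 1

# Eisenberg-Gale: maximize sum_i B_i * log(u_i(x_i))
problem = cp.Problem(cp.Maximize(B @ cp.log(utilities)), [supply])
problem.solve()

prices = supply.dual_value   # KKT multipliers of the supply constraints
print("allocation:", np.round(x.value, 3))   # approx. [[1, 0], [0, 1]]
print("prices:", np.round(prices, 3))        # approx. [1, 1]
```

In this instance the program gives each buyer the good he values more, and the dual variables recover the equilibrium prices (1, 1), illustrating how the KKT multipliers play the role of market-clearing prices.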
Vazirani [2] : 109–121 presented an algorithm for finding equilibrium prices and allocations in a linear Fisher market. The algorithm is based on condition 4 above. The condition implies that, in equilibrium, every buyer buys only products that give him maximum BPB. Let's say that a buyer "likes" a product if that product gives him maximum BPB at the current prices. Given a price-vector, construct a flow network in which the capacity of each edge represents the total money "flowing" through that edge. The network is as follows:
The price-vector p is an equilibrium price-vector if and only if the two cuts ({s},V\{s}) and (V\{t},{t}) are min-cuts (see the programmatic sketch below). Hence, an equilibrium price-vector can be found using the following scheme:
There is an algorithm that solves this problem in weakly polynomial time.
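The min-cut condition above can be checked directly with a maximum-flow computation. The following Python sketch (assuming the networkx library; node names, the numeric tolerance, and the edge orientation are illustrative choices, not taken verbatim from the cited sources) builds the network for a candidate price-vector and tests whether it clears a linear Fisher market:

```python
import networkx as nx

def is_equilibrium_price(prices, budgets, utilities, tol=1e-9):
    """Check the min-cut condition for a candidate price-vector.

    utilities[i][j] = u_{i,j}, budgets[i] = B_i, prices[j] = p_j.
    Network: s -> good j with capacity p_j; good j -> buyer i with
    unbounded capacity whenever j maximizes i's bang-per-buck;
    buyer i -> t with capacity B_i.
    """
    n, m = len(budgets), len(prices)
    G = nx.DiGraph()
    for j in range(m):
        G.add_edge("s", f"good{j}", capacity=prices[j])
    for i in range(n):
        G.add_edge(f"buyer{i}", "t", capacity=budgets[i])
        bpb = [utilities[i][j] / prices[j] for j in range(m)]
        best = max(bpb)
        for j in range(m):
            if bpb[j] >= best - tol:                  # buyer i "likes" good j
                G.add_edge(f"good{j}", f"buyer{i}")   # no capacity = unbounded
    flow_value, _ = nx.maximum_flow(G, "s", "t")
    # Both ({s}, V\{s}) and (V\{t}, {t}) are min-cuts exactly when the
    # maximum flow saturates all edges leaving s and all edges entering t.
    return abs(flow_value - sum(prices)) <= tol and abs(flow_value - sum(budgets)) <= tol
```

For instance, in the two-buyer market used in the earlier examples, is_equilibrium_price([1, 1], [1, 1], [[2, 1], [1, 2]]) returns True.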
Kakade, Kearns and Ortiz [17] studied a generalized Arrow-Debreu market in which agents are located on a graph, trade may occur only between neighboring agents, and all the local markets must clear. They proved a general existence theorem for graphical equilibria, and gave an algorithm for computing graphical equilibria that runs in time polynomial in the number of consumers when the graph is a tree. Their algorithms work also for agents with non-linear utilities.
Gao, Peysakhovich and Kroer [35] presented an algorithm for online computation of market equilibrium.