
Mechanism design is a field in economics and game theory that takes an objectives-first approach to designing economic mechanisms or incentives toward desired objectives, in strategic settings where players act rationally. Because it starts at the end of the game and then works backwards, it is also called reverse game theory. It has broad applications, from economics and politics in such fields as market design, auction theory and social choice theory to networked systems (internet inter-domain routing, sponsored search auctions).
Mechanism design studies solution concepts for a class of private-information games. Leonid Hurwicz explains that 'in a design problem, the goal function is the main "given", while the mechanism is the unknown. Therefore, the design problem is the "inverse" of traditional economic theory, which is typically devoted to the analysis of the performance of a given mechanism.'^{ [1] } So, two distinguishing features of these games are that a game "designer" chooses the game structure rather than inheriting one, and that the designer is interested in the game's outcome.
The 2007 Nobel Memorial Prize in Economic Sciences was awarded to Leonid Hurwicz, Eric Maskin, and Roger Myerson "for having laid the foundations of mechanism design theory".^{ [2] }
In an interesting class of Bayesian games, one player, called the "principal", would like to condition his behavior on information privately known to other players. For example, the principal would like to know the true quality of a used car a salesman is pitching. He cannot learn anything simply by asking the salesman, because it is in the salesman's interest to distort the truth. However, in mechanism design the principal does have one advantage: He may design a game whose rules can influence others to act the way he would like.
Without mechanism design theory, the principal's problem would be difficult to solve. He would have to consider all the possible games and choose the one that best influences other players' tactics. In addition, the principal would have to draw conclusions from agents who may lie to him. Thanks to mechanism design, and particularly the revelation principle, the principal only needs to consider games in which agents truthfully report their private information.
A game of mechanism design is a game of private information in which one of the agents, called the principal, chooses the payoff structure. Following Harsanyi ( 1967 ), the agents receive secret "messages" from nature containing information relevant to payoffs. For example, a message may contain information about their preferences or the quality of a good for sale. We call this information the agent's "type" (usually noted $\theta$ and accordingly the space of types $\Theta$). Agents then report a type to the principal (usually noted with a hat, $\hat\theta$) that can be a strategic lie. After the report, the principal and the agents are paid according to the payoff structure the principal chose.
The timing of the game is:
1. The principal commits to a mechanism that grants an outcome as a function of reported type.
2. The agents report, possibly dishonestly, a type profile.
3. The mechanism is executed (agents receive an outcome based on the reports).
In order to understand who gets what, it is common to divide the outcome $y$ into a goods allocation and a money transfer, $y(\theta) = \{ x(\theta), t(\theta) \}$, where $x$ stands for an allocation of goods rendered or received as a function of type and $t$ stands for a monetary transfer as a function of type.
As a benchmark the designer often defines what would happen under full information. Define a social choice function $f(\theta)$ mapping the (true) type profile directly to the allocation of goods received or rendered,
$f(\theta) : \Theta \rightarrow X$
In contrast a mechanism maps the reported type profile to an outcome (again, both a goods allocation $x$ and a money transfer $t$),
$y(\hat\theta) : \Theta \rightarrow Y$
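This mapping can be made concrete with a small sketch. The Python fragment below (a hypothetical two-agent, two-type illustration with made-up numbers, not an example from the literature) represents a direct mechanism simply as a function from a reported type profile to a goods allocation and money transfers:

```python
# A minimal sketch (hypothetical numbers): a direct mechanism as a function
# mapping a reported type profile to an outcome
# y(theta_hat) = (goods allocation x, money transfers t) for two agents.

from typing import Tuple

Outcome = Tuple[Tuple[float, float], Tuple[float, float]]   # ((x_1, x_2), (t_1, t_2))

def example_mechanism(reports: Tuple[str, str]) -> Outcome:
    """Map the reported type profile ("low"/"high" for each agent) to an outcome."""
    allocation = {"low": 1.0, "high": 2.0}   # goods given for each reported type
    transfer = {"low": 0.5, "high": 1.5}     # money collected for each reported type
    x = tuple(allocation[r] for r in reports)
    t = tuple(transfer[r] for r in reports)
    return x, t

print(example_mechanism(("low", "high")))    # ((1.0, 2.0), (0.5, 1.5))
```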
A proposed mechanism constitutes a Bayesian game (a game of private information), and if it is well-behaved the game has a Bayesian Nash equilibrium. At equilibrium agents choose their reports strategically as a function of type,
$\hat\theta(\theta)$
It is difficult to solve for Bayesian equilibria in such a setting because it involves solving for agents' best-response strategies and for the best inference from a possible strategic lie. Thanks to a sweeping result called the revelation principle, no matter the mechanism a designer can^{ [3] } confine attention to equilibria in which agents truthfully report type. The revelation principle states: "To every Bayesian Nash equilibrium there corresponds a Bayesian game with the same equilibrium outcome but in which players truthfully report type."
This is extremely useful. The principle allows one to solve for a Bayesian equilibrium by assuming all players truthfully report type (subject to an incentive compatibility constraint). In one blow it eliminates the need to consider either strategic behavior or lying.
Its proof is quite direct. Assume a Bayesian game in which the agent's strategy and payoff are functions of its type and what others do, $u_i\left(s_i(\theta_i), s_{-i}(\theta_{-i}), \theta_i\right)$. By definition agent $i$'s equilibrium strategy $s_i(\theta_i)$ is Nash in expected utility:
$s_i(\theta_i) \in \underset{s'_i \in S_i}{\operatorname{argmax}} \sum_{\theta_{-i}} p(\theta_{-i} \mid \theta_i) \, u_i\left(s'_i, s_{-i}(\theta_{-i}), \theta_i\right)$
Simply define a mechanism that would induce agents to choose the same equilibrium. The easiest one to define is for the mechanism to commit to playing the agents' equilibrium strategies for them.
Under such a mechanism the agents of course find it optimal to reveal type since the mechanism plays the strategies they found optimal anyway. Formally, choose $y(\hat\theta)$ such that
$y(\hat\theta) = \left\{ x\left(s(\hat\theta)\right), t\left(s(\hat\theta)\right) \right\}$
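The construction can be illustrated numerically. The sketch below (a single-agent toy example with hypothetical payoffs, not drawn from the article) starts from an indirect mechanism that maps abstract messages to outcomes, computes each type's equilibrium message, and builds the direct mechanism that commits to playing that message for the reported type; truthful reporting is then (weakly) optimal:

```python
# Minimal single-agent illustration of the revelation principle (hypothetical numbers).
# An indirect mechanism maps messages to outcomes; the direct mechanism plays each
# type's equilibrium message for it, so reporting one's true type is optimal.

types = [1.0, 2.0]                       # possible types theta
messages = ["A", "B"]

# Indirect mechanism: message -> (goods allocation x, transfer t)
indirect = {"A": (1.0, 0.5), "B": (2.0, 2.4)}

def utility(theta, outcome):
    x, t = outcome
    return theta * x - t                 # quasilinear utility

def best_message(theta):
    """Equilibrium strategy in the indirect mechanism: the type's best message."""
    return max(messages, key=lambda m: utility(theta, indirect[m]))

equilibrium_strategy = {theta: best_message(theta) for theta in types}

# Direct mechanism: commit to playing the equilibrium message for the reported type
direct = {theta: indirect[equilibrium_strategy[theta]] for theta in types}

# Truth-telling satisfies the IC constraint in the direct mechanism
for theta in types:
    truthful = utility(theta, direct[theta])
    assert all(truthful >= utility(theta, direct[report]) for report in types)
print("truthful reporting is an equilibrium of the direct mechanism")
```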
The designer of a mechanism generally hopes either to design a mechanism that "implements" a social choice function, or to find the mechanism that maximizes some value criterion (e.g. profit).
To implement a social choice function $f(\theta)$ is to find some transfer function $t(\theta)$ that motivates agents to pick $f(\theta)$. Formally, if the equilibrium strategy profile under the mechanism maps to the same goods allocation as the social choice function,
$x\left(\hat\theta(\theta)\right) = f(\theta)$
we say the mechanism implements the social choice function.
Thanks to the revelation principle, the designer can usually find a transfer function $t(\theta)$ to implement a social choice by solving an associated truth-telling game. If agents find it optimal to truthfully report type,
$\hat\theta(\theta) = \theta$
we say such a mechanism is truthfully implementable (or just "implementable"). The task is then to solve for a truthfully implementable $t(\theta)$ and impute this transfer function to the original game. An allocation $x(\theta)$ is truthfully implementable if there exists a transfer function $t(\theta)$ such that
$u\left(x(\theta), t(\theta), \theta\right) \geq u\left(x(\hat\theta), t(\hat\theta), \theta\right) \quad \forall \theta, \hat\theta \in \Theta$
which is also called the incentive compatibility (IC) constraint.
In applications, the IC condition is the key to describing the shape of $t(\theta)$ in any useful way. Under certain conditions it can even isolate the transfer function analytically. Additionally, a participation (individual rationality) constraint is sometimes added if agents have the option of not playing.
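On a discrete type space both constraints can be checked by direct enumeration. The sketch below (hypothetical schedules and quasilinear utility, chosen only for illustration) verifies IC and IR for a candidate allocation and transfer:

```python
# Hypothetical sketch: checking the incentive-compatibility (IC) and participation
# (IR) constraints of a candidate allocation x(theta) and transfer t(theta) on a
# discrete type space, assuming quasilinear utility u = theta*x - t.

types = [1.0, 1.5, 2.0]
x = {1.0: 1.0, 1.5: 2.0, 2.0: 3.0}      # goods allocation by type
t = {1.0: 0.8, 1.5: 2.2, 2.0: 3.9}      # transfer by type

def u(theta, report):
    """Utility of a type-theta agent who reports `report`."""
    return theta * x[report] - t[report]

ic_holds = all(u(th, th) >= u(th, rep) for th in types for rep in types)
ir_holds = all(u(th, th) >= 0 for th in types)   # outside option normalized to 0
print("IC:", ic_holds, " IR:", ir_holds)          # both True for these schedules
```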
Consider a setting in which all agents have a type-contingent utility function $u(x, t, \theta)$. Consider also a goods allocation $x$ that is vector-valued of size $n$ (which permits $n$ goods) and assume it is piecewise continuous with respect to its arguments.
The function $x(\theta)$ is implementable only if
$\sum_{k=1}^{n} \frac{\partial}{\partial \theta} \frac{\partial u / \partial x_k}{\left|\partial u / \partial t\right|} \, \frac{\partial x_k}{\partial \theta} \geq 0$
whenever $x = x(\theta)$ and $t = t(\theta)$ and $x$ is continuous at $\theta$. This is a necessary condition and is derived from the first- and second-order conditions of the agent's optimization problem assuming truth-telling.
Its meaning can be understood in two pieces. The first piece says the agent's marginal rate of substitution (MRS) increases as a function of the type,
$\frac{\partial}{\partial \theta} \mathrm{MRS}_{x,t} = \frac{\partial}{\partial \theta} \frac{\partial u / \partial x_k}{\left|\partial u / \partial t\right|}$
In short, agents will not tell the truth if the mechanism does not offer higher agent types a better deal. Otherwise, higher types facing any mechanism that punishes high types for reporting will lie and declare they are lower types, violating the truth-telling IC constraint. The second piece is a monotonicity condition waiting to happen,
$\frac{\partial x_k}{\partial \theta}$
which, to be positive, means higher types must be given more of the good.
There is potential for the two pieces to interact. If for some type range the contract offered less quantity to higher types ($\partial x / \partial \theta < 0$), it is possible the mechanism could compensate by giving higher types a discount. But such a contract already exists for low-type agents, so this solution is pathological. Such a solution sometimes occurs in the process of solving for a mechanism. In these cases it must be "ironed". In a multiple-good environment it is also possible for the designer to reward the agent with more of one good to substitute for less of another (e.g. butter for margarine). Multiple-good mechanisms are an ongoing problem in mechanism design theory.
Mechanism design papers usually make two assumptions to ensure implementability:
$\frac{\partial}{\partial \theta} \frac{\partial u / \partial x_k}{\left|\partial u / \partial t\right|} > 0 \quad \forall k$
This is known by several names: the single-crossing condition, the sorting condition and the Spence–Mirrlees condition. It means the utility function is of such a shape that the agent's MRS is increasing in type.
This is a technical condition bounding the rate of growth of the MRS.
These assumptions are sufficient to provide that any monotonic $x(\theta)$ is implementable (a $t(\theta)$ exists that can implement it). In addition, in the single-good setting the single-crossing condition is sufficient to provide that only a monotonic $x(\theta)$ is implementable, so the designer can confine his search to a monotonic $x(\theta)$.
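The constructive half of this claim can be checked numerically. The sketch below (assuming the simple single-good quasilinear utility $u = \theta x - t$, a choice made here only for illustration) takes a monotone allocation schedule, builds transfers from the envelope formula $t(\theta) = \theta x(\theta) - \int_{\theta_0}^{\theta} x(s)\,ds$, and verifies the IC constraint on a grid:

```python
# Sketch (assumption: quasilinear utility u = theta*x - t, single good): given a
# monotone allocation x(theta), construct transfers from the envelope formula and
# verify incentive compatibility numerically on a grid of types.

import numpy as np

theta = np.linspace(1.0, 2.0, 101)
x = theta ** 2                           # a monotone (increasing) allocation schedule

# Envelope-formula transfers: t(theta) = theta*x(theta) - integral of x up to theta
integral = np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) / 2 * np.diff(theta))))
t = theta * x - integral

# Numerical IC check: no type prefers to report any other grid type
payoff = theta[:, None] * x[None, :] - t[None, :]    # payoff[i, j]: type i reports type j
truthful = np.diag(payoff)
print("IC holds:", bool(np.all(truthful[:, None] >= payoff - 1e-9)))
```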
Vickrey ( 1961 ) gives a celebrated result that any member of a large class of auctions assures the seller of the same expected revenue and that the expected revenue is the best the seller can do. This is the case if
1. The buyers have identical valuation functions (which may be a function of type)
2. The buyers' types are independently distributed
3. The buyers' types are drawn from a continuous distribution
4. The type distribution bears the monotone hazard rate property
5. The mechanism sells the good to the buyer with the highest valuation
The last condition is crucial to the theorem. An implication is that for the seller to achieve higher revenue he must take a chance on giving the item to an agent with a lower valuation. Usually this means he must risk not selling the item at all.
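The equivalence is easy to see in simulation. The sketch below (a hypothetical Monte Carlo exercise with independent uniform valuations, not taken from Vickrey's paper) compares the second-price auction, where truthful bidding is optimal, with the first-price auction under its standard equilibrium bid $(n-1)/n \cdot v$; both match the theoretical expected revenue $(n-1)/(n+1)$:

```python
# Monte Carlo illustration of revenue equivalence (hypothetical simulation):
# n bidders with i.i.d. uniform[0,1] values; second-price (truthful bids) and
# first-price (equilibrium bid (n-1)/n * v) auctions give the same expected revenue.

import numpy as np

rng = np.random.default_rng(0)
n_bidders, n_auctions = 4, 200_000
values = rng.uniform(size=(n_auctions, n_bidders))

second_price_revenue = np.sort(values, axis=1)[:, -2].mean()              # winner pays 2nd-highest value
first_price_revenue = ((n_bidders - 1) / n_bidders * values.max(axis=1)).mean()

print(f"second-price: {second_price_revenue:.4f}")
print(f"first-price : {first_price_revenue:.4f}")
print(f"theory      : {(n_bidders - 1) / (n_bidders + 1):.4f}")           # 0.6 for n = 4
```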
The Vickrey (1961) auction model was later expanded by Clarke ( 1971 ) and Groves to treat a public choice problem in which a public project's cost is borne by all agents, e.g. whether to build a municipal bridge. The resulting "Vickrey–Clarke–Groves" mechanism can motivate agents to choose the socially efficient allocation of the public good even if agents have privately known valuations. In other words, it can solve the "tragedy of the commons"—under certain conditions, in particular quasilinear utility or if budget balance is not required.
Consider a setting in which $I$ agents have quasilinear utility with private valuations $v(x, t, \theta)$, where the currency $t$ is valued linearly. The VCG designer designs an incentive compatible (hence truthfully implementable) mechanism to obtain the true type profile, from which the designer implements the socially optimal allocation
$x_I^{*}(\theta) \in \underset{x \in X}{\operatorname{argmax}} \sum_{I} v(\theta_I, x)$
The cleverness of the VCG mechanism is the way it motivates truthful revelation. It eliminates incentives to misreport by penalizing any agent by the cost of the distortion he causes. Among the reports the agent may make, the VCG mechanism permits a "null" report saying he is indifferent to the public good and cares only about the money transfer. This effectively removes the agent from the game. If an agent does choose to report a type, the VCG mechanism charges the agent a fee if his report is pivotal, that is if his report changes the optimal allocation x so as to harm other agents. The payment is calculated
$t_I(\hat\theta) = \sum_{J \neq I} v\left(\theta_J, x(\theta_I, \theta_{-I})\right) - \sum_{J \neq I} v\left(\theta_J, x(\hat\theta_I, \theta_{-I})\right)$
which sums the distortion in the utilities of the other agents (and not his own) caused by one agent reporting.
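A common textbook instance is the binary public-project decision. The sketch below (hypothetical numbers and helper names, implementing the standard Clarke "pivot" variant rather than anything specific to this article) builds the project when reported values cover its cost and charges each agent the welfare loss its report imposes on the others:

```python
# Sketch of VCG (Clarke pivot) payments for a binary public project: build the
# project iff total reported value covers the cost; a pivotal agent pays the loss
# its report imposes on the other agents. Hypothetical numbers.

def social_surplus(values, cost, build):
    """Total reported value minus the project cost under a build/no-build decision."""
    return (sum(values) - cost) if build else 0.0

def vcg(values, cost):
    # Efficient decision given all reports
    build = social_surplus(values, cost, True) >= 0.0
    payments = []
    for i in range(len(values)):
        others = values[:i] + values[i + 1:]
        # Others' best surplus if the decision ignored agent i's report...
        best_without_i = max(social_surplus(others, cost, b) for b in (True, False))
        # ...minus their surplus under the decision actually taken: the Clarke tax
        payments.append(best_without_i - social_surplus(others, cost, build))
    return build, payments

print(vcg([50.0, 40.0, 20.0], cost=80.0))   # (True, [20.0, 10.0, 0.0]); the third agent is not pivotal
```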
Gibbard ( 1973 ) and Satterthwaite ( 1975 ) give an impossibility result similar in spirit to Arrow's impossibility theorem. For a very general class of games, only "dictatorial" social choice functions can be implemented.
A social choice function f() is dictatorial if one agent always receives his most-favored goods allocation,
$\text{for all } \theta \in \Theta, \ f(\theta) \in \left\{ x \in X : u_i(x, \theta_i) \geq u_i(x', \theta_i) \ \forall x' \in X \right\}$
The theorem states that under general conditions any truthfully implementable social choice function must be dictatorial if,
1. X is finite and contains at least three elements
2. Preferences are rational
Myerson and Satterthwaite ( 1983 ) show there is no efficient way for two parties to trade a good when they each have secret and probabilistically varying valuations for it, without the risk of forcing one party to trade at a loss. It is among the most remarkable negative results in economics—a kind of negative mirror to the fundamental theorems of welfare economics.
Phillips and Marden (2018) proved that for cost-sharing games with concave cost functions, the optimal cost-sharing rule that firstly optimizes the worst-case inefficiencies in a game (the price of anarchy), and then secondly optimizes the best-case outcomes (the price of stability), is precisely the Shapley value cost-sharing rule.^{ [4] } A symmetrical statement is similarly valid for utility-sharing games with convex utility functions.
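For concreteness, the Shapley value cost-sharing rule charges each agent its average marginal contribution to the coalition's cost over all orders of arrival. The sketch below (a made-up concave cost function and agent names, purely illustrative and not from the cited paper) computes these shares by enumerating permutations:

```python
# Sketch: Shapley-value cost shares under a hypothetical concave (economies-of-scale)
# cost function of the set of served agents. Illustrative only.

from itertools import permutations
from math import sqrt

def cost(coalition):
    """Concave cost of serving a set of agents."""
    return 10.0 * sqrt(len(coalition))

def shapley_shares(agents):
    shares = {a: 0.0 for a in agents}
    orders = list(permutations(agents))
    for order in orders:
        served = set()
        for a in order:
            # Marginal cost of adding agent a to those already served in this order
            shares[a] += cost(served | {a}) - cost(served)
            served.add(a)
    return {a: s / len(orders) for a, s in shares.items()}

shares = shapley_shares(["a", "b", "c"])
print(shares)                                          # symmetric agents -> equal shares
print(sum(shares.values()), cost({"a", "b", "c"}))     # shares exactly cover the total cost
```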
Mirrlees ( 1971 ) introduces a setting in which the transfer function t() is easy to solve for. Due to its relevance and tractability it is a common setting in the literature. Consider a single-good, single-agent setting in which the agent has quasilinear utility with an unknown type parameter $\theta$,
$u(x, t, \theta) = V(x, \theta) - t$
and in which the principal has a prior CDF $P(\theta)$ over the agent's type. The principal can produce goods at a convex marginal cost c(x) and wants to maximize the expected profit from the transaction
$\max_{x(\theta),\, t(\theta)} \mathbb{E}_\theta \left[ t(\theta) - c\left(x(\theta)\right) \right]$
subject to IC and IR conditions
$u(x(\theta), t(\theta), \theta) \geq u(x(\theta'), t(\theta'), \theta) \quad \forall \theta, \theta' \quad \text{(IC)}$
$u(x(\theta), t(\theta), \theta) \geq \underline{u}(\theta) \quad \forall \theta \quad \text{(IR)}$
The principal here is a monopolist trying to set a profitmaximizing price scheme in which it cannot identify the type of the customer. A common example is an airline setting fares for business, leisure and student travelers. Due to the IR condition it has to give every type a good enough deal to induce participation. Due to the IC condition it has to give every type a good enough deal that the type prefers its deal to that of any other.
A trick given by Mirrlees (1971) is to use the envelope theorem to eliminate the transfer function from the expectation to be maximized. Letting $U(\theta) = \max_{\theta'} u\left(x(\theta'), t(\theta'), \theta\right)$,
$\frac{dU}{d\theta} = \frac{\partial u}{\partial \theta} = \frac{\partial V}{\partial \theta}$
Integrating,
$U(\theta) = \underline{u}(\theta_0) + \int_{\theta_0}^{\theta} \frac{\partial V}{\partial \tilde\theta}\, d\tilde\theta$
where $\theta_0$ is some index type. Replacing the incentive-compatible $t(\theta) = V\left(x(\theta), \theta\right) - U(\theta)$ in the maximand,
$\mathbb{E}_\theta \left[ V\left(x(\theta), \theta\right) - \underline{u}(\theta_0) - \int_{\theta_0}^{\theta} \frac{\partial V}{\partial \tilde\theta}\, d\tilde\theta - c\left(x(\theta)\right) \right]$
$= \mathbb{E}_\theta \left[ V\left(x(\theta), \theta\right) - \underline{u}(\theta_0) - \frac{1 - P(\theta)}{p(\theta)} \frac{\partial V}{\partial \theta} - c\left(x(\theta)\right) \right]$
after an integration by parts. This function can be maximized pointwise.
Because $t(\theta)$ is incentive-compatible already the designer can drop the IC constraint. If the utility function satisfies the Spence–Mirrlees condition then a monotonic $x(\theta)$ exists. The IR constraint can be checked at equilibrium and the fee schedule raised or lowered accordingly. Additionally, note the presence of a hazard rate in the expression. If the type distribution bears the monotone hazard rate property, the FOC is sufficient to solve for t(). If not, then it is necessary to check whether the monotonicity constraint (see sufficiency, above) is satisfied everywhere along the allocation and fee schedules. If not, then the designer must use Myerson ironing.
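The pointwise recipe is straightforward to carry out on a grid. The sketch below makes concrete (and entirely illustrative) functional-form assumptions, $V(x, \theta) = \theta x$, cost $c(x) = x^2/2$ and $\theta$ uniform on $[0, 1]$, under which the pointwise maximizer of the virtual surplus is $x^*(\theta) = \max(0, 2\theta - 1)$, and recovers the transfers from the envelope formula:

```python
# Sketch of the screening recipe on a grid. Assumptions (ours, for illustration):
# V(x, theta) = theta*x, cost c(x) = x^2/2, theta ~ uniform[0, 1].
# Pointwise maximization of theta*x - (1 - P)/p * x - c(x) gives x*(theta) = max(0, 2*theta - 1).

import numpy as np

theta = np.linspace(0.0, 1.0, 201)
p, P = np.ones_like(theta), theta            # uniform density and CDF

def virtual_surplus(x, th, dens, cdf):
    return th * x - (1.0 - cdf) / dens * x - 0.5 * x ** 2

# Pointwise maximization over a grid of quantities
x_grid = np.linspace(0.0, 1.0, 401)
x_star = np.array([x_grid[np.argmax(virtual_surplus(x_grid, th, d, c))]
                   for th, d, c in zip(theta, p, P)])

# Envelope-formula transfers: t(theta) = theta*x(theta) - U(theta), with U' = x, U(0) = 0
U = np.concatenate(([0.0], np.cumsum((x_star[1:] + x_star[:-1]) / 2 * np.diff(theta))))
t_star = theta * x_star - U

print(bool(np.allclose(x_star, np.maximum(0.0, 2 * theta - 1), atol=0.01)))  # matches closed form
```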
In some applications the designer may solve the first-order conditions for the price and allocation schedules yet find they are not monotonic. For example, in the quasilinear setting this often happens when the hazard rate is itself not monotone. By the Spence–Mirrlees condition the optimal price and allocation schedules must be monotonic, so the designer must eliminate any interval over which the schedule changes direction by flattening it.
Intuitively, what is going on is that the designer finds it optimal to bunch certain types together and give them the same contract. Normally the designer motivates higher types to distinguish themselves by giving them a better deal. If there are too few higher types on the margin, the designer does not find it worthwhile to grant lower types a concession (called their information rent) in order to charge higher types a type-specific contract.
Consider a monopolist principal selling to agents with quasilinear utility, the example above. Suppose the allocation schedule $x(\theta)$ satisfying the first-order conditions has a single interior peak at $\theta_1$ and a single interior trough at $\theta_2 > \theta_1$.
The proof uses the theory of optimal control. It considers the set of intervals $\left(\underline\theta, \overline\theta\right)$ in the non-monotonic region of $x(\theta)$ over which it might flatten the schedule. It then writes a Hamiltonian to obtain necessary conditions for an $x(\theta)$ within the intervals
1. that does satisfy monotonicity
2. for which the monotonicity constraint is not binding on the boundaries of the interval
Condition two ensures that the $x(\theta)$ satisfying the optimal control problem reconnects to the schedule in the original problem at the interval boundaries (no jumps). Any $x(\theta)$ satisfying the necessary conditions must be flat because it must be monotonic and yet reconnect at the boundaries.
As before maximize the principal's expected payoff, but this time subject to the monotonicity constraint
$\frac{\partial x}{\partial \theta} \geq 0$
and use a Hamiltonian to do it, with shadow price $\nu(\theta)$,
$H = \left( V\left(x, \theta\right) - \frac{1 - P(\theta)}{p(\theta)} \frac{\partial V}{\partial \theta}\left(x, \theta\right) - c(x) \right) p(\theta) + \nu(\theta)\, x'(\theta)$
where $x(\theta)$ is a state variable and $x'(\theta)$ the control. As usual in optimal control the costate evolution equation must satisfy
$\dot\nu(\theta) = -\frac{\partial H}{\partial x}$
Taking advantage of condition 2, note the monotonicity constraint is not binding at the boundaries of the interval,
$\nu(\underline\theta) = \nu(\overline\theta) = 0$
meaning the costate variable condition can be integrated and also equals 0,
$\int_{\underline\theta}^{\overline\theta} \frac{\partial H}{\partial x}\, d\theta = 0$
The average distortion of the principal's surplus must be 0. To flatten the schedule, find an $x$ such that its inverse image maps to a $\theta$ interval satisfying the condition above.
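Computationally, a standard way to obtain an ironed (flattened) schedule is to project the non-monotone candidate onto the set of non-decreasing schedules, e.g. by pooling adjacent violators; this flattens the peak-and-trough region and replaces it with its average. The sketch below is one such recipe on a hypothetical candidate schedule, not the optimal-control derivation above:

```python
# Sketch: ironing a non-monotone candidate schedule by pooling adjacent violators
# (isotonic projection). The decreasing stretch is replaced by a flat segment equal
# to the average of the pooled values. Hypothetical numbers.

def iron(values):
    """Pool adjacent violators: the closest non-decreasing sequence to `values`."""
    merged = []                                   # list of [block mean, block size]
    for v in values:
        merged.append([float(v), 1])
        # Merge backwards while the monotonicity constraint is violated
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, w2 = merged.pop()
            m1, w1 = merged.pop()
            merged.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    out = []
    for mean, size in merged:
        out.extend([mean] * size)
    return out

candidate = [0.0, 0.2, 0.6, 0.5, 0.3, 0.4, 0.7, 0.8, 1.0]   # interior peak, then trough
print(iron(candidate))   # the non-monotone stretch becomes a flat segment at its average
```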