Single-crossing condition

Figure: Example of two cumulative distribution functions F(x) and G(x) which satisfy the single-crossing condition.

In monotone comparative statics, the single-crossing condition or single-crossing property refers to a condition where the relationship between two or more functions [note 1] is such that they cross at most once. [1] For example, a mean-preserving spread results in an altered probability distribution whose cumulative distribution function intersects the original's only once.


The single-crossing condition was posited in Samuel Karlin's 1968 monograph 'Total Positivity'. [2] It was later used by Peter Diamond, Joseph Stiglitz, [3] and Susan Athey, [4] in studying the economics of uncertainty. [5]

The single-crossing condition is also used in applications where there are a few agents or types of agents that have preferences over an ordered set. Such situations appear often in information economics, contract theory, social choice and political economics, among other fields.

Example using cumulative distribution functions

Cumulative distribution functions F and G satisfy the single-crossing condition if there exists an $x_0$ such that

$$\forall x, \; x \ge x_0 \implies F(x) \ge G(x)$$

and

$$\forall x, \; x \le x_0 \implies F(x) \le G(x);$$

that is, the function $F(x) - G(x)$ crosses the x-axis at most once, in which case it does so from below.
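The following minimal sketch (assuming NumPy and SciPy are available) checks this numerically for a mean-preserving spread: F is the standard normal CDF and G is the CDF of a normal with the same mean but larger variance, so F − G should change sign exactly once, from below, at the common mean.

```python
# Minimal numerical check of the single-crossing condition (illustrative sketch).
# F is N(0, 1); G is N(0, 4), a mean-preserving spread of F.
import numpy as np
from scipy.stats import norm

x = np.linspace(-6, 6, 2000)          # grid chosen to avoid hitting x = 0 exactly
F = norm.cdf(x, loc=0, scale=1)       # original distribution
G = norm.cdf(x, loc=0, scale=2)       # mean-preserving spread (std. dev. 2)

diff = F - G
crossings = np.flatnonzero(np.diff(np.sign(diff)) != 0)

print("sign changes of F - G:", len(crossings))             # expect 1
print("approximate crossing point:", x[crossings])          # near 0
print("crosses from below:", bool(diff[0] < 0 < diff[-1]))  # expect True
```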

This property can be extended to two or more variables. [6] Given x and t, for all x' > x, t' > t,

$$F(x', t) \ge F(x, t) \implies F(x', t') \ge F(x, t')$$

and

$$F(x', t) > F(x, t) \implies F(x', t') > F(x, t').$$

This condition could be interpreted as saying that for x' > x, the function $g(t) = F(x', t) - F(x, t)$ crosses the horizontal axis at most once, and from below. The condition is not symmetric in the variables (i.e., we cannot switch x and t in the definition; the necessary inequality in the first argument is weak, while the inequality in the second argument is strict).
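As an illustration (an assumption for exposition, not from the source), the hypothetical function $F(x, t) = xt - x^2/2$ exhibits this kind of complementarity between x and t; a brute-force grid check of the two implications above might look like the following sketch.

```python
# Brute-force check of the two-variable single-crossing condition for the
# hypothetical function F(x, t) = x*t - x**2/2 (illustrative assumption).
import itertools
import numpy as np

def F(x, t):
    return x * t - x ** 2 / 2

grid = np.linspace(-2.0, 2.0, 17)

def satisfies_single_crossing(F, grid):
    for x, xp, t, tp in itertools.product(grid, repeat=4):
        if xp <= x or tp <= t:
            continue
        # weak inequality at t must imply weak inequality at t' > t
        if F(xp, t) >= F(x, t) and not (F(xp, tp) >= F(x, tp)):
            return False
        # strict inequality at t must imply strict inequality at t' > t
        if F(xp, t) > F(x, t) and not (F(xp, tp) > F(x, tp)):
            return False
    return True

print(satisfies_single_crossing(F, grid))  # expect True
```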

Use in social choice and mechanism design

Social choice

In social choice theory, the single-crossing condition is a condition on preferences. It is especially useful because utility functions are generally increasing (i.e., the assumption that an agent prefers, or is at least indifferent to, two dollars over one dollar is unobjectionable). [7]

Specifically, a set of agents with some unidimensional characteristic $\alpha_i$ and preferences over different policies q satisfy the single crossing property when the following is true:

If $q > q'$ and $\alpha_i > \alpha_j$, or if $q < q'$ and $\alpha_i < \alpha_j$, then

$$W(q; \alpha_j) \ge W(q'; \alpha_j) \implies W(q; \alpha_i) \ge W(q'; \alpha_i),$$

where W is the indirect utility function.
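As a concrete illustration (an assumption for exposition, not from the source), take the hypothetical indirect utility $W(q; \alpha) = -(q - \alpha)^2$, so that each agent's ideal policy equals its characteristic; the sketch below verifies the single-crossing property on a grid.

```python
# Grid check of the single-crossing property for the hypothetical indirect
# utility W(q; a) = -(q - a)**2 (illustrative assumption).
import itertools
import numpy as np

def W(q, a):
    return -(q - a) ** 2

alphas = np.linspace(0.0, 1.0, 9)     # agent characteristics
policies = np.linspace(0.0, 1.0, 9)   # policies

def satisfies_single_crossing(W, alphas, policies):
    for q, qp in itertools.product(policies, repeat=2):
        for ai, aj in itertools.product(alphas, repeat=2):
            if not ((q > qp and ai > aj) or (q < qp and ai < aj)):
                continue
            # if agent j weakly prefers q to q', agent i must as well
            if W(q, aj) >= W(qp, aj) and not (W(q, ai) >= W(qp, ai)):
                return False
    return True

print(satisfies_single_crossing(W, alphas, policies))  # expect True
```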

An important result extends the median voter theorem, which states that when voters have single peaked preferences, there is a majority-preferred candidate corresponding to the median voter's most preferred policy. [8] With single-crossing preferences, the most preferred policy of the voter with the median value of $\alpha_i$ is the Condorcet winner. [9] In effect, this replaces the unidimensionality of policies with the unidimensionality of voter heterogeneity. [10] In this context, the single-crossing condition is sometimes referred to as the Gans-Smart condition. [11]
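A small simulation, again using the hypothetical quadratic utility above, illustrates the result: the ideal policy of the voter with the median characteristic defeats every alternative policy in pairwise majority voting.

```python
# Simulation: with single-crossing (here quadratic) preferences, the ideal
# policy of the voter with the median characteristic is a Condorcet winner.
import numpy as np

def W(q, a):
    return -(q - a) ** 2

rng = np.random.default_rng(0)
alphas = rng.uniform(0.0, 1.0, size=101)   # odd number of voters
policies = np.linspace(0.0, 1.0, 201)      # candidate policies
median_ideal = np.median(alphas)           # median voter's most preferred policy

def beats(q, q_alt, alphas):
    """True if policy q wins a strict majority against q_alt."""
    for_q = np.sum(W(q, alphas) > W(q_alt, alphas))
    for_alt = np.sum(W(q_alt, alphas) > W(q, alphas))
    return for_q > for_alt

is_condorcet = all(beats(median_ideal, q, alphas)
                   for q in policies if q != median_ideal)
print("median ideal policy is a Condorcet winner:", is_condorcet)  # expect True
```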

Mechanism design

In mechanism design, the single-crossing condition (often referred to as the Spence-Mirrlees property for Michael Spence and James Mirrlees, sometimes as the constant-sign assumption [12] ) refers to the requirement that the isoutility curves for agents of different types cross only once. [13] This condition guarantees that the transfer in an incentive-compatible direct mechanism can be pinned down by the transfer of the lowest type. This condition is similar to another condition called strict increasing difference (SID). [14] Formally, suppose the agent has a utility function $V(x, \theta)$; the SID says that for all $x_2 > x_1$ and $\theta_2 > \theta_1$, $V(x_2, \theta_2) - V(x_1, \theta_2) > V(x_2, \theta_1) - V(x_1, \theta_1)$. The Spence-Mirrlees property is characterized by $\frac{\partial^2 V(x, \theta)}{\partial x \, \partial \theta} > 0$.
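For instance (a textbook-style illustration, not drawn from the cited sources), the hypothetical quasilinear utility $V(x, \theta) = \theta x - x^2/2$ satisfies both conditions: for $x_2 > x_1$ and $\theta_2 > \theta_1$,

$$\bigl[V(x_2, \theta_2) - V(x_1, \theta_2)\bigr] - \bigl[V(x_2, \theta_1) - V(x_1, \theta_1)\bigr] = (\theta_2 - \theta_1)(x_2 - x_1) > 0,$$

and the cross-partial is $\partial^2 V / \partial x \, \partial \theta = 1 > 0$.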

Notes

  1. The property need not relate only to continuous functions; it can similarly describe ordered sets or lattices.

Related Research Articles

<span class="mw-page-title-main">Gamma distribution</span> Probability distribution

In probability theory and statistics, the gamma distribution is a versatile two-parameter family of continuous probability distributions. The exponential distribution, Erlang distribution, and chi-squared distribution are special cases of the gamma distribution. There are two equivalent parameterizations in common use:

  1. With a shape parameter k and a scale parameter θ
  2. With a shape parameter and an inverse scale parameter , called a rate parameter.

In econometrics, the autoregressive conditional heteroskedasticity (ARCH) model is a statistical model for time series data that describes the variance of the current error term or innovation as a function of the actual sizes of the previous time periods' error terms; often the variance is related to the squares of the previous innovations. The ARCH model is appropriate when the error variance in a time series follows an autoregressive (AR) model; if an autoregressive moving average (ARMA) model is assumed for the error variance, the model is a generalized autoregressive conditional heteroskedasticity (GARCH) model.

In statistics, the Neyman–Pearson lemma describes the existence and uniqueness of the likelihood ratio as a uniformly most powerful test in certain contexts. It was introduced by Jerzy Neyman and Egon Pearson in a paper in 1933. The Neyman–Pearson lemma is part of the Neyman–Pearson theory of statistical testing, which introduced concepts like errors of the second kind, power function, and inductive behavior. The previous Fisherian theory of significance testing postulated only one hypothesis. By introducing a competing hypothesis, the Neyman–Pearsonian flavor of statistical testing allows investigating the two types of errors. The trivial cases where one always rejects or accepts the null hypothesis are of little interest but it does prove that one must not relinquish control over one type of error while calibrating the other. Neyman and Pearson accordingly proceeded to restrict their attention to the class of all level tests while subsequently minimizing type II error, traditionally denoted by . Their seminal paper of 1933, including the Neyman–Pearson lemma, comes at the end of this endeavor, not only showing the existence of tests with the most power that retain a prespecified level of type I error, but also providing a way to construct such tests. The Karlin-Rubin theorem extends the Neyman–Pearson lemma to settings involving composite hypotheses with monotone likelihood ratios.

<span class="mw-page-title-main">Mechanism design</span> Field of economics and game theory

Mechanism design, sometimes called implementation theory or institutiondesign, is a branch of economics, social choice, and game theory that deals with designing game forms to implement a given social choice function. Because it starts with the end of the game and then works backwards to find a game that implements it, it is sometimes described as reverse game theory.

In Bayesian probability theory, if, given a likelihood function , the posterior distribution is in the same probability distribution family as the prior probability distribution , the prior and posterior are then called conjugate distributions with respect to that likelihood function and the prior is called a conjugate prior for the likelihood function .

In mathematics, the Mahler measureof a polynomial with complex coefficients is defined as

<span class="mw-page-title-main">Stable distribution</span> Distribution of variables which satisfies a stability property under linear combinations

In probability theory, a distribution is said to be stable if a linear combination of two independent random variables with this distribution has the same distribution, up to location and scale parameters. A random variable is said to be stable if its distribution is stable. The stable distribution family is also sometimes referred to as the Lévy alpha-stable distribution, after Paul Lévy, the first mathematician to have studied it.

In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events. In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions.

In mathematics, subharmonic and superharmonic functions are important classes of functions used extensively in partial differential equations, complex analysis and potential theory.

<span class="mw-page-title-main">Monotone likelihood ratio</span> Statistical property

In statistics, the monotone likelihood ratio property is a property of the ratio of two probability density functions (PDFs). Formally, distributions and bear the property if

In mathematics, the Jack function is a generalization of the Jack polynomial, introduced by Henry Jack. The Jack polynomial is a homogeneous, symmetric polynomial which generalizes the Schur and zonal polynomials, and is in turn generalized by the Heckman–Opdam polynomials and Macdonald polynomials.

Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations.

A ratio distribution is a probability distribution constructed as the distribution of the ratio of random variables having two other known distributions. Given two random variables X and Y, the distribution of the random variable Z that is formed as the ratio Z = X/Y is a ratio distribution.

<span class="mw-page-title-main">Half-normal distribution</span> Probability distribution

In probability theory and statistics, the half-normal distribution is a special case of the folded normal distribution.

In economics and consumer theory, quasilinear utility functions are linear in one argument, generally the numeraire. Quasilinear preferences can be represented by the utility function where is strictly concave. A useful property of the quasilinear utility function is that the Marshallian/Walrasian demand for does not depend on wealth and is thus not subject to a wealth effect; The absence of a wealth effect simplifies analysis and makes quasilinear utility functions a common choice for modelling. Furthermore, when utility is quasilinear, compensating variation (CV), equivalent variation (EV), and consumer surplus are algebraically equivalent. In mechanism design, quasilinear utility ensures that agents can compensate each other with side payments.

A product distribution is a probability distribution constructed as the distribution of the product of random variables having two other known distributions. Given two statistically independent random variables X and Y, the distribution of the random variable Z that is formed as the product is a product distribution.

In particle physics, CLs represents a statistical method for setting upper limits on model parameters, a particular form of interval estimation used for parameters that can take only non-negative values. Although CLs are said to refer to Confidence Levels, "The method's name is ... misleading, as the CLs exclusion region is not a confidence interval." It was first introduced by physicists working at the LEP experiment at CERN and has since been used by many high energy physics experiments. It is a frequentist method in the sense that the properties of the limit are defined by means of error probabilities, however it differs from standard confidence intervals in that the stated confidence level of the interval is not equal to its coverage probability. The reason for this deviation is that standard upper limits based on a most powerful test necessarily produce empty intervals with some fixed probability when the parameter value is zero, and this property is considered undesirable by most physicists and statisticians.

Monotone comparative statics is a sub-field of comparative statics that focuses on the conditions under which endogenous variables undergo monotone changes when there is a change in the exogenous parameters. Traditionally, comparative results in economics are obtained using the Implicit Function Theorem, an approach that requires the concavity and differentiability of the objective function as well as the interiority and uniqueness of the optimal solution. The methods of monotone comparative statics typically dispense with these assumptions. It focuses on the main property underpinning monotone comparative statics, which is a form of complementarity between the endogenous variable and exogenous parameter. Roughly speaking, a maximization problem displays complementarity if a higher value of the exogenous parameter increases the marginal return of the endogenous variable. This guarantees that the set of solutions to the optimization problem is increasing with respect to the exogenous parameter.

In dual decomposition a problem is broken into smaller subproblems and a solution to the relaxed problem is found. This method can be employed for MRF optimization. Dual decomposition is applied to markov logic programs as an inference technique.

References

  1. Athey, S. (2002-02-01). "Monotone Comparative Statics under Uncertainty". The Quarterly Journal of Economics. 117 (1): 187–223. doi:10.1162/003355302753399481. ISSN 0033-5533. S2CID 14098229.
  2. Karlin, Samuel (1968). Total Positivity. Vol. 1. Stanford University Press. OCLC 751230710.
  3. Diamond, Peter A.; Stiglitz, Joseph E. (1974). "Increases in risk and in risk aversion". Journal of Economic Theory. 8 (3). Elsevier: 337–360. doi:10.1016/0022-0531(74)90090-8. hdl:1721.1/63799.
  4. Athey, Susan (July 2001). "Single Crossing Properties and the Existence of Pure Strategy Equilibria in Games of Incomplete Information". Econometrica. 69 (4): 861–889. doi:10.1111/1468-0262.00223. hdl:1721.1/64195. ISSN 0012-9682.
  5. Gollier, Christian (2001). The Economics of Risk and Time. The MIT Press. p. 103. ISBN 9780262072151.
  6. Rösler, Uwe (September 1992). "A fixed point theorem for distributions". Stochastic Processes and Their Applications. 42 (2): 195–214. doi:10.1016/0304-4149(92)90035-O.
  7. Jewitt, Ian (January 1987). "Risk Aversion and the Choice Between Risky Prospects: The Preservation of Comparative Statics Results". The Review of Economic Studies. 54 (1): 73–85. doi:10.2307/2297447. JSTOR 2297447.
  8. Bredereck, Robert; Chen, Jiehua; Woeginger, Gerhard J. (October 2013). "A characterization of the single-crossing domain". Social Choice and Welfare. 41 (4): 989–998. doi:10.1007/s00355-012-0717-8. ISSN 0176-1714. S2CID 253845257.
  9. Persson, Torsten; Tabellini, Guido (2000). Political Economics: Explaining Economic Policy. MIT Press. p. 23. ISBN 9780262303668.
  10. Gans, Joshua S.; Smart, Michael (February 1996). "Majority voting with single-crossing preferences". Journal of Public Economics. 59 (2): 219–237. doi:10.1016/0047-2727(95)01503-5.
  11. Haavio, Markus; Kotakorpi, Kaisa (May 2011). "The political economy of sin taxes". European Economic Review. 55 (4): 575–594. doi:10.1016/j.euroecorev.2010.06.002. hdl:10138/16733. S2CID 2604940.
  12. Laffont, Jean-Jacques; Martimort, David (2002). The Theory of Incentives: The Principal-Agent Model. Princeton, N.J.: Princeton University Press. p. 53. ISBN 0-691-09183-8. OCLC 47990008.
  13. Laffont, Jean-Jacques; Martimort, David (2002). The Theory of Incentives: The Principal-Agent Model. Princeton, N.J.: Princeton University Press. p. 35. ISBN 0-691-09183-8. OCLC 47990008.
  14. Frankel, Alexander (2014-01-01). "Aligned Delegation". American Economic Review. 104 (1): 66–83. doi:10.1257/aer.104.1.66. ISSN 0002-8282.