Single-crossing condition

Example of two cumulative distribution functions F(x) and G(x) which satisfy the single-crossing condition.

In monotone comparative statics, the single-crossing condition or single-crossing property refers to a condition where the relationship between two or more functions [note 1] is such that they will only cross once. [1] For example, a mean-preserving spread will result in an altered probability distribution whose cumulative distribution function will intersect with the original's only once.


The single-crossing condition was posited in Samuel Karlin's 1968 monograph 'Total Positivity'. [2] It was later used by Peter Diamond, Joseph Stiglitz, [3] and Susan Athey, [4] in studying the economics of uncertainty. [5]

The single-crossing condition is also used in applications where there are a few agents or types of agents that have preferences over an ordered set. Such situations appear often in information economics, contract theory, social choice and political economics, among other fields.

Example using cumulative distribution functions

Cumulative distribution functions $F$ and $G$ satisfy the single-crossing condition if there exists an $x_0$ such that

$$\forall x,\; x \ge x_0 \implies F(x) \ge G(x)$$

and

$$\forall x,\; x \le x_0 \implies F(x) \le G(x);$$

that is, the function $F - G$ crosses the x-axis at most once, in which case it does so from below.
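A quick numerical check (a minimal sketch; the particular normal distributions are illustrative choices, not taken from the cited sources) confirms the condition for a distribution and a mean-preserving spread of it:

    # Sketch: F is a standard normal CDF and G the CDF of a mean-preserving
    # spread (same mean, larger variance); F - G should change sign exactly once.
    import numpy as np
    from scipy.stats import norm

    xs = np.linspace(-6, 6, 1201)
    F = norm.cdf(xs, loc=0, scale=1)   # original distribution
    G = norm.cdf(xs, loc=0, scale=2)   # mean-preserving spread of F

    d = (F - G)[np.abs(F - G) > 1e-12]             # drop the exact zero at x = 0
    print(np.count_nonzero(np.diff(np.sign(d))))   # 1: exactly one sign change
    print(bool(d[0] < 0 < d[-1]))                  # True: F - G crosses from below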

This property can be extended to two or more variables. [6] Given $x$ and $t$, for all $x' > x$ and $t' > t$,

$$F(x', t) \ge F(x, t) \implies F(x', t') \ge F(x, t')$$

and

$$F(x', t) > F(x, t) \implies F(x', t') > F(x, t').$$

This condition can be interpreted as saying that for $x' > x$, the function $g(t) = F(x', t) - F(x, t)$ crosses the horizontal axis at most once, and from below. The condition is not symmetric in the variables (i.e., we cannot switch $x$ and $t$ in the definition; the necessary inequality in the first argument is weak, while the inequality in the second argument is strict).
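The following grid check makes this concrete (the function $F(x, t) = x(t - 1)$ is a made-up example chosen for illustration, not one from the cited literature):

    # Sketch: verify both single-crossing implications for F(x, t) = x*(t - 1);
    # here g(t) = F(x', t) - F(x, t) = (x' - x)(t - 1) crosses zero once, at t = 1.
    import itertools
    import numpy as np

    def F(x, t):
        return x * (t - 1.0)

    grid = np.linspace(-2.0, 2.0, 9)
    ok = True
    for x, xp, t, tp in itertools.product(grid, repeat=4):
        if xp > x and tp > t:
            weak = (F(xp, t) < F(x, t)) or (F(xp, tp) >= F(x, tp))    # weak implication
            strict = (F(xp, t) <= F(x, t)) or (F(xp, tp) > F(x, tp))  # strict implication
            ok = ok and weak and strict
    print(ok)  # True: the condition holds at every grid point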

Use in Social Choice

In the study of social choice, the single-crossing condition is a condition on preferences. It is especially useful because utility functions are generally increasing (i.e., the assumption that an agent weakly prefers two dollars to one dollar is unobjectionable). [7]

Specifically, a set of agents with some unidimensional characteristic $\alpha_i$ and preferences over different policies $q$ satisfies the single-crossing property when the following is true:

If $q > q'$ and $\alpha_{i'} > \alpha_i$, or if $q < q'$ and $\alpha_{i'} < \alpha_i$, then

$$W(q; \alpha_i) \ge W(q'; \alpha_i) \implies W(q; \alpha_{i'}) \ge W(q'; \alpha_{i'}),$$

where $W$ is the indirect utility function.
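For instance, single-peaked quadratic utilities satisfy the property; a brute-force check over a grid (the functional form of $W$ below is a hypothetical choice, not one required by the sources) illustrates this:

    # Sketch: quadratic indirect utility W(q; a) = -(q - a)^2 satisfies the
    # single-crossing property; check every ordered pair on a small grid.
    import itertools
    import numpy as np

    def W(q, a):
        return -(q - a) ** 2    # voter with ideal point a evaluating policy q

    grid = np.linspace(0.0, 1.0, 11)
    ok = True
    for q, qp, a, ap in itertools.product(grid, repeat=4):
        ordered = (q > qp and ap > a) or (q < qp and ap < a)
        if ordered and W(q, a) >= W(qp, a):
            ok = ok and (W(q, ap) >= W(qp, ap))
    print(ok)  # True: a higher type never reverses a lower type's ranking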

An important proposition extends the median voter theorem, which states that when voters have single-peaked preferences, [8] a majority-rule system has a Condorcet winner corresponding to the median voter's most preferred policy. With preferences that satisfy the single-crossing property, the most preferred policy of the voter with the median value of $\alpha_i$ is the Condorcet winner. [9] In effect, this replaces the unidimensionality of policies with the unidimensionality of voter heterogeneity.
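A small simulation illustrates the proposition (the five voter types and the quadratic utilities are hypothetical choices):

    # Sketch: with single-crossing (here quadratic) preferences, pairwise
    # majority voting over a policy grid selects the median voter's ideal point.
    import numpy as np

    ideals = np.array([0.1, 0.3, 0.4, 0.7, 0.9])   # hypothetical voter types
    policies = np.linspace(0.0, 1.0, 101)

    def beats(q, qp):
        # q beats qp if a strict majority of voters is strictly closer to q
        return np.sum((q - ideals) ** 2 < (qp - ideals) ** 2) > len(ideals) / 2

    winners = [float(q) for q in policies
               if not any(beats(qp, q) for qp in policies)]
    print(winners)  # [0.4]: the median ideal point is the Condorcet winner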

In this context, the single-crossing condition is sometimes referred to as the Gans-Smart condition. [10] [11]

Use in Mechanism Design

In mechanism design, the term single-crossing condition (often referred to as the Spence-Mirrlees property after Michael Spence and James Mirrlees, and sometimes as the constant-sign assumption [12] ) refers to the requirement that the isoutility curves of agents of different types cross only once. [13] This condition guarantees that the transfer in an incentive-compatible direct mechanism can be pinned down by the transfer of the lowest type. It is similar to another condition called strict increasing differences (SID). [14] Formally, suppose the agent has a utility function $V(x, \theta)$; SID says that for all $x_2 > x_1$, the difference $V(x_2, \theta) - V(x_1, \theta)$ is strictly increasing in $\theta$. The Spence-Mirrlees property is characterized by $\frac{\partial^2 V(x, \theta)}{\partial x \, \partial \theta} > 0$.
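A minimal check of SID for a textbook-style bilinear utility (the form $V(x, \theta) = \theta x$ is an illustrative assumption, not the article's):

    # Sketch: V(x, theta) = theta * x has cross-partial 1 > 0 (Spence-Mirrlees)
    # and strict increasing differences; verify SID on a small grid.
    import itertools
    import numpy as np

    def V(x, theta):
        return theta * x

    xs = np.linspace(0.0, 1.0, 6)
    thetas = np.linspace(0.0, 1.0, 6)
    ok = True
    for x1, x2 in itertools.combinations(xs, 2):          # pairs with x1 < x2
        for t1, t2 in itertools.combinations(thetas, 2):  # pairs with t1 < t2
            ok = ok and (V(x2, t2) - V(x1, t2) > V(x2, t1) - V(x1, t1))
    print(ok)  # True: higher types gain strictly more from raising x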


Notes

  1. The property need not relate only to continuous functions; it can also describe ordered sets or lattices.

Related Research Articles

<span class="mw-page-title-main">Gamma distribution</span> Probability distribution

In probability theory and statistics, the gamma distribution is a two-parameter family of continuous probability distributions. The exponential distribution, Erlang distribution, and chi-squared distribution are special cases of the gamma distribution. There are two equivalent parameterizations in common use:

  1. With a shape parameter and a scale parameter .
  2. With a shape parameter and an inverse scale parameter , called a rate parameter.

In econometrics, the autoregressive conditional heteroskedasticity (ARCH) model is a statistical model for time series data that describes the variance of the current error term or innovation as a function of the actual sizes of the previous time periods' error terms; often the variance is related to the squares of the previous innovations. The ARCH model is appropriate when the error variance in a time series follows an autoregressive (AR) model; if an autoregressive moving average (ARMA) model is assumed for the error variance, the model is a generalized autoregressive conditional heteroskedasticity (GARCH) model.

In statistics, the Neyman–Pearson lemma was introduced by Jerzy Neyman and Egon Pearson in a paper in 1933. The Neyman-Pearson lemma is part of the Neyman-Pearson theory of statistical testing, which introduced concepts like errors of the second kind, power function, and inductive behavior. The previous Fisherian theory of significance testing postulated only one hypothesis. By introducing a competing hypothesis, the Neyman-Pearsonian flavor of statistical testing allows investigating the two types of errors. The trivial cases where one always rejects or accepts the null hypothesis are of little interest but it does prove that one must not relinquish control over one type of error while calibrating the other. Neyman and Pearson accordingly proceeded to restrict their attention to the class of all level tests while subsequently minimizing type II error, traditionally denoted by . Their seminal paper of 1933, including the Neyman-Pearson lemma, comes at the end of this endeavor, not only showing the existence of tests with the most power that retain a prespecified level of type I error, but also providing a way to construct such tests. The Karlin-Rubin theorem extends the Neyman-Pearson lemma to settings involving composite hypotheses with monotone likelihood ratios.

<span class="mw-page-title-main">Mechanism design</span> Field in game theory

Mechanism design is a field in economics and game theory that takes an objectives-first approach to designing economic mechanisms or incentives, toward desired objectives, in strategic settings, where players act rationally. Because it starts at the end of the game, then goes backwards, it is also called reverse game theory. It has broad applications, from economics and politics in such fields as market design, auction theory and social choice theory to networked-systems.

In Bayesian probability theory, if the posterior distribution is in the same probability distribution family as the prior probability distribution , the prior and posterior are then called conjugate distributions, and the prior is called a conjugate prior for the likelihood function .

In mathematics, the Mahler measureof a polynomial with complex coefficients is defined as

In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events. In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions.

The principle of detailed balance can be used in kinetic systems which are decomposed into elementary processes. It states that at equilibrium, each elementary process is in equilibrium with its reverse process.

In mathematics, subharmonic and superharmonic functions are important classes of functions used extensively in partial differential equations, complex analysis and potential theory.

In mathematics and economics, the envelope theorem is a major result about the differentiability properties of the value function of a parameterized optimization problem. As we change parameters of the objective, the envelope theorem shows that, in a certain sense, changes in the optimizer of the objective do not contribute to the change in the objective function. The envelope theorem is an important tool for comparative statics of optimization models.

<span class="mw-page-title-main">Monotone likelihood ratio</span> Statistical property

In statistics, the monotone likelihood ratio property is a property of the ratio of two probability density functions (PDFs). Formally, distributions ƒ(x) and g(x) bear the property if

In mathematics, the Jack function is a generalization of the Jack polynomial, introduced by Henry Jack. The Jack polynomial is a homogeneous, symmetric polynomial which generalizes the Schur and zonal polynomials, and is in turn generalized by the Heckman–Opdam polynomials and Macdonald polynomials.

Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations.

In economics and consumer theory, quasilinear utility functions are linear in one argument, generally the numeraire. Quasilinear preferences can be represented by the utility function where is strictly concave. A useful property of the quasilinear utility function is that the Marshallian/Walrasian demand for does not depend on wealth and is thus not subject to a wealth effect; The absence of a wealth effect simplifies analysis and makes quasilinear utility functions a common choice for modelling. Furthermore, when utility is quasilinear, compensating variation (CV), equivalent variation (EV), and consumer surplus are algebraically equivalent. In mechanism design, quasilinear utility ensures that agents can compensate each other with side payments.

<span class="mw-page-title-main">Optical phase space</span> Phase space used in quantum optics

In quantum optics, an optical phase space is a phase space in which all quantum states of an optical system are described. Each point in the optical phase space corresponds to a unique state of an optical system. For any such system, a plot of the quadratures against each other, possibly as functions of time, is called a phase diagram. If the quadratures are functions of time then the optical phase diagram can show the evolution of a quantum optical system with time.

In particle physics, CLs represents a statistical method for setting upper limits on model parameters, a particular form of interval estimation used for parameters that can take only non-negative values. Although CLs are said to refer to Confidence Levels, "The method's name is ... misleading, as the CLs exclusion region is not a confidence interval." It was first introduced by physicists working at the LEP experiment at CERN and has since been used by many high energy physics experiments. It is a frequentist method in the sense that the properties of the limit are defined by means of error probabilities, however it differs from standard confidence intervals in that the stated confidence level of the interval is not equal to its coverage probability. The reason for this deviation is that standard upper limits based on a most powerful test necessarily produce empty intervals with some fixed probability when the parameter value is zero, and this property is considered undesirable by most physicists and statisticians.

In Bayesian statistics, the posterior predictive distribution is the distribution of possible unobserved values conditional on the observed values.

Monotone comparative statics is a sub-field of comparative statics that focuses on the conditions under which endogenous variables undergo monotone changes when there is a change in the exogenous parameters. Traditionally, comparative results in economics are obtained using the Implicit Function Theorem, an approach that requires the concavity and differentiability of the objective function as well as the interiority and uniqueness of the optimal solution. The methods of monotone comparative statics typically dispense with these assumptions. It focuses on the main property underpinning monotone comparative statics, which is a form of complementarity between the endogenous variable and exogenous parameter. Roughly speaking, a maximization problem displays complementarity if a higher value of the exogenous parameter increases the marginal return of the endogenous variable. This guarantees that the set of solutions to the optimization problem is increasing with respect to the exogenous parameter.

In the field of mathematics known as complex analysis, the indicator function of an entire function indicates the rate of growth of the function in different directions.

References

  1. Athey, S. (2002). "Monotone Comparative Statics under Uncertainty". The Quarterly Journal of Economics. 117 (1): 187–223. doi:10.1162/003355302753399481. ISSN 0033-5533. S2CID 14098229.
  2. Karlin, Samuel (1968). Total Positivity. Vol. 1. Stanford University Press. OCLC 751230710.
  3. Diamond, Peter A.; Stiglitz, Joseph E. (1974). "Increases in risk and in risk aversion". Journal of Economic Theory. 8 (3): 337–360. doi:10.1016/0022-0531(74)90090-8. hdl:1721.1/63799.
  4. Athey, Susan (July 2001). "Single Crossing Properties and the Existence of Pure Strategy Equilibria in Games of Incomplete Information". Econometrica. 69 (4): 861–889. doi:10.1111/1468-0262.00223. ISSN 0012-9682.
  5. Gollier, Christian (2001). The Economics of Risk and Time. The MIT Press. p. 103. ISBN 9780262072151.
  6. Rösler, Uwe (September 1992). "A fixed point theorem for distributions". Stochastic Processes and Their Applications. 42 (2): 195–214. doi:10.1016/0304-4149(92)90035-O.
  7. Jewitt, Ian (January 1987). "Risk Aversion and the Choice Between Risky Prospects: The Preservation of Comparative Statics Results". The Review of Economic Studies. 54 (1): 73–85. doi:10.2307/2297447. JSTOR 2297447.
  8. Bredereck, Robert; Chen, Jiehua; Woeginger, Gerhard J. (October 2013). "A characterization of the single-crossing domain". Social Choice and Welfare. 41 (4): 989–998. doi:10.1007/s00355-012-0717-8. ISSN 0176-1714. S2CID 253845257.
  9. Persson, Torsten; Tabellini, Guido (2000). Political Economics: Explaining Economic Policy. MIT Press. p. 23. ISBN 9780262303668.
  10. Gans, Joshua S.; Smart, Michael (February 1996). "Majority voting with single-crossing preferences". Journal of Public Economics. 59 (2): 219–237. doi:10.1016/0047-2727(95)01503-5.
  11. Haavio, Markus; Kotakorpi, Kaisa (May 2011). "The political economy of sin taxes". European Economic Review. 55 (4): 575–594. doi:10.1016/j.euroecorev.2010.06.002. hdl:10138/16733. S2CID 2604940.
  12. Laffont, Jean-Jacques; Martimort, David (2002). The Theory of Incentives: The Principal-Agent Model. Princeton, N.J.: Princeton University Press. p. 53. ISBN 0-691-09183-8. OCLC 47990008.
  13. Laffont, Jean-Jacques; Martimort, David (2002). The Theory of Incentives: The Principal-Agent Model. Princeton, N.J.: Princeton University Press. p. 35. ISBN 0-691-09183-8. OCLC 47990008.
  14. Frankel, Alexander (2014). "Aligned Delegation". American Economic Review. 104 (1): 66–83. doi:10.1257/aer.104.1.66. ISSN 0002-8282.