In monotone comparative statics, the single-crossing condition or single-crossing property refers to a condition where the relationship between two or more functions [note 1] is such that they will only cross once. [1] For example, a mean-preserving spread will result in an altered probability distribution whose cumulative distribution function will intersect with the original's only once.
The single-crossing condition was posited in Samuel Karlin's 1968 monograph 'Total Positivity'. [2] It was later used by Peter Diamond, Joseph Stiglitz, [3] and Susan Athey, [4] in studying the economics of uncertainty. [5]
The single-crossing condition is also used in applications where there are a few agents or types of agents that have preferences over an ordered set. Such situations appear often in information economics, contract theory, social choice and political economics, among other fields.
Cumulative distribution functions F and G satisfy the single-crossing condition if there exists an $x_0$ such that
$\forall x,\; x \ge x_0 \implies F(x) \ge G(x)$
and
$\forall x,\; x \le x_0 \implies F(x) \le G(x)$;
that is, the function $F - G$ crosses the x-axis at most once, in which case it does so from below.
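As a concrete illustration (not part of the original text), the short Python sketch below checks this definition numerically, taking F to be the CDF of N(0, 1) and G the CDF of its mean-preserving spread N(0, 4); the difference F − G should change sign exactly once, from below.

# Sketch (not from the source): verify the single-crossing condition for
# F = CDF of N(0,1) and G = CDF of N(0,4), a mean-preserving spread of N(0,1).
import numpy as np
from scipy.stats import norm

x = np.linspace(-6, 6, 2001)
diff = norm.cdf(x, scale=1.0) - norm.cdf(x, scale=2.0)   # F(x) - G(x)

signs = np.sign(diff[np.abs(diff) > 1e-12])              # drop points numerically at the crossing
sign_changes = np.count_nonzero(signs[1:] != signs[:-1])
print("sign changes of F - G:", sign_changes)                        # expect 1
print("crosses from below:", bool(signs[0] < 0 and signs[-1] > 0))   # expect True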
This property can be extended to two or more variables. [6] Given x and t, for all x' > x, t' > t,
$F(x', t) \ge F(x, t) \implies F(x', t') \ge F(x, t')$
and
$F(x', t) > F(x, t) \implies F(x', t') > F(x, t')$.
This condition could be interpreted as saying that for x'>x, the function g(t)=F(x',t)-F(x,t) crosses the horizontal axis at most once, and from below. The condition is not symmetric in the variables (i.e., we cannot switch x and t in the definition; the necessary inequality in the first argument is weak, while the inequality in the second argument is strict).
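A minimal sketch (with an illustrative function chosen here, not taken from the source) checks the two-variable condition on a grid: for F(x, t) = x(t − 1), the difference g(t) = F(x', t) − F(x, t) = (x' − x)(t − 1) crosses zero once, from below, at t = 1.

# Sketch (not from the source): grid check of the two-variable single-crossing
# condition for the hypothetical example F(x, t) = x * (t - 1).
import itertools
import numpy as np

def F(x, t):
    return x * (t - 1.0)

grid = np.linspace(-2.0, 2.0, 9)
ok = True
for x, xp, t, tp in itertools.product(grid, repeat=4):
    if xp > x and tp > t:
        weak = (not (F(xp, t) >= F(x, t))) or (F(xp, tp) >= F(x, tp))
        strict = (not (F(xp, t) > F(x, t))) or (F(xp, tp) > F(x, tp))
        ok = ok and weak and strict
print("two-variable single-crossing condition holds on the grid:", ok)   # expect True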
In social choice theory, the single-crossing condition is a condition on preferences. It is especially useful because utility functions are generally increasing (i.e. the assumption that an agent prefers two dollars to one dollar, or at least regards them as equivalent, is unobjectionable). [7]
Specifically, a set of agents with some unidimensional characteristic $\alpha$ and preferences over different policies q satisfy the single crossing property when the following is true:
If $q > q'$ and $\alpha_j > \alpha_i$, or if $q < q'$ and $\alpha_j < \alpha_i$, then
$W(q; \alpha_i) \ge W(q'; \alpha_i) \implies W(q; \alpha_j) \ge W(q'; \alpha_j)$,
where W is the indirect utility function.
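To make the definition concrete, the following sketch (an assumed quadratic-loss specification, not from the source) verifies the single-crossing property on a grid for W(q; α) = −(q − α)².

# Sketch (not from the source): grid check of the single-crossing property for
# the hypothetical indirect utility W(q; a) = -(q - a)**2.
import itertools
import numpy as np

def W(q, a):
    return -(q - a) ** 2

grid = np.linspace(0.0, 1.0, 11)
ok = True
for q, qp, ai, aj in itertools.product(grid, repeat=4):
    if (q > qp and aj > ai) or (q < qp and aj < ai):
        premise = W(q, ai) >= W(qp, ai)
        conclusion = W(q, aj) >= W(qp, aj)
        ok = ok and ((not premise) or conclusion)
print("single-crossing property holds on the grid:", ok)   # expect True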
An important result extends the median voter theorem, which states that when voters have single peaked preferences, there is a majority-preferred candidate corresponding to the median voter's most preferred policy. [8] With single-crossing preferences, the most preferred policy of the voter with the median value of $\alpha$ is the Condorcet winner. [9] In effect, this replaces the unidimensionality of policies with the unidimensionality of voter heterogeneity. [10] In this context, the single-crossing condition is sometimes referred to as the Gans-Smart condition. [11]
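A small numerical sketch of this result follows, using hypothetical voter types and a hypothetical policy menu with quadratic-loss (hence single-crossing) preferences; the median voter's favourite policy defeats every alternative in pairwise majority voting.

# Sketch (not from the source): with single-crossing (quadratic-loss) preferences,
# the favourite policy of the voter with the median type is the Condorcet winner.
import numpy as np

ideal_points = np.array([0.1, 0.3, 0.4, 0.7, 0.9])   # hypothetical voter types
policies = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])  # hypothetical policy menu

def utility(q, a):
    return -(q - a) ** 2

median_type = np.median(ideal_points)
q_star = policies[np.argmax(utility(policies, median_type))]   # median voter's favourite

def beats(q, qp):
    """True if a strict majority of voters prefers q to qp."""
    return np.sum(utility(q, ideal_points) > utility(qp, ideal_points)) > len(ideal_points) / 2

condorcet = all(beats(q_star, q) for q in policies if q != q_star)
print("median voter's favourite policy:", q_star)   # expect 0.4
print("it is the Condorcet winner:", condorcet)     # expect True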
In mechanism design, the single-crossing condition (often referred to as the Spence-Mirrlees property for Michael Spence and James Mirrlees, sometimes as the constant-sign assumption [12] ) refers to the requirement that the isoutility curves for agents of different types cross only once. [13] This condition guarantees that the transfer in an incentive-compatible direct mechanism can be pinned down by the transfer of the lowest type. This condition is similar to another condition called strict increasing difference (SID). [14] Formally, suppose the agent has a utility function $V(x, t, \theta)$, where x is the allocation, t the transfer, and $\theta$ the agent's type; the SID says that for all $x_2 > x_1$ and $\theta_2 > \theta_1$ we have $V(x_2, t, \theta_2) - V(x_1, t, \theta_2) > V(x_2, t, \theta_1) - V(x_1, t, \theta_1)$. The Spence-Mirrlees property is characterized by $\frac{\partial}{\partial \theta}\left(\frac{\partial V/\partial x}{\left|\partial V/\partial t\right|}\right) > 0$.
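As an illustrative check (the quasilinear specification V(x, t, θ) = θx − t is an assumption chosen here, not given in the source), the sketch below verifies the SID inequality numerically and the Spence-Mirrlees sign condition symbolically.

# Sketch (not from the source): for the hypothetical utility V(x, t, theta) = theta*x - t,
# check the Spence-Mirrlees sign condition symbolically and SID numerically.
import itertools
import numpy as np
import sympy as sp

x, t, theta = sp.symbols("x t theta", positive=True)
V = theta * x - t
mrs = sp.diff(V, x) / sp.Abs(sp.diff(V, t))    # marginal rate of substitution = theta
print("d(MRS)/dtheta =", sp.diff(mrs, theta))  # expect 1 > 0

Vn = lambda x_, t_, th_: th_ * x_ - t_
grid = np.linspace(0.1, 2.0, 6)
ok = all(
    Vn(x2, 1.0, th2) - Vn(x1, 1.0, th2) > Vn(x2, 1.0, th1) - Vn(x1, 1.0, th1)
    for x1, x2, th1, th2 in itertools.product(grid, repeat=4)
    if x2 > x1 and th2 > th1
)
print("SID holds on the grid:", ok)            # expect True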
In probability theory and statistics, the gamma distribution is a versatile two-parameter family of continuous probability distributions. The exponential distribution, Erlang distribution, and chi-squared distribution are special cases of the gamma distribution. There are two equivalent parameterizations in common use: one with a shape parameter k and a scale parameter θ, and one with a shape parameter α = k and a rate parameter β = 1/θ.
In econometrics, the autoregressive conditional heteroskedasticity (ARCH) model is a statistical model for time series data that describes the variance of the current error term or innovation as a function of the actual sizes of the previous time periods' error terms; often the variance is related to the squares of the previous innovations. The ARCH model is appropriate when the error variance in a time series follows an autoregressive (AR) model; if an autoregressive moving average (ARMA) model is assumed for the error variance, the model is a generalized autoregressive conditional heteroskedasticity (GARCH) model.
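A minimal simulation sketch of an ARCH(1) process follows, with parameter values chosen arbitrarily for illustration; the conditional variance is ω + α·e²ₜ₋₁, and the squared innovations show positive autocorrelation (volatility clustering).

# Sketch (not from the source): simulate an ARCH(1) process with ad hoc parameters.
import numpy as np

rng = np.random.default_rng(0)
omega, alpha = 0.2, 0.6          # hypothetical ARCH(1) parameters (alpha < 1)
T = 10_000
e = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = omega / (1 - alpha)  # start at the unconditional variance
for s in range(1, T):
    sigma2[s] = omega + alpha * e[s - 1] ** 2      # conditional variance
    e[s] = np.sqrt(sigma2[s]) * rng.standard_normal()

print("sample variance:", e.var())                 # close to omega/(1-alpha) = 0.5
print("autocorrelation of e_t^2 at lag 1:",
      np.corrcoef(e[1:] ** 2, e[:-1] ** 2)[0, 1])  # positive: volatility clustering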
In statistics, the Neyman–Pearson lemma describes the existence and uniqueness of the likelihood ratio as a uniformly most powerful test in certain contexts. It was introduced by Jerzy Neyman and Egon Pearson in a paper in 1933. The Neyman–Pearson lemma is part of the Neyman–Pearson theory of statistical testing, which introduced concepts like errors of the second kind, power function, and inductive behavior. The previous Fisherian theory of significance testing postulated only one hypothesis. By introducing a competing hypothesis, the Neyman–Pearsonian flavor of statistical testing allows investigating the two types of errors. The trivial cases where one always rejects or accepts the null hypothesis are of little interest, but they do show that one must not relinquish control over one type of error while calibrating the other. Neyman and Pearson accordingly proceeded to restrict their attention to the class of all level $\alpha$ tests while subsequently minimizing type II error, traditionally denoted by $\beta$. Their seminal paper of 1933, including the Neyman–Pearson lemma, comes at the end of this endeavor, not only showing the existence of tests with the most power that retain a prespecified level of type I error, but also providing a way to construct such tests. The Karlin-Rubin theorem extends the Neyman–Pearson lemma to settings involving composite hypotheses with monotone likelihood ratios.
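A small sketch of the lemma in the simplest setting (two simple hypotheses about a normal mean, chosen here purely for illustration): the likelihood ratio is monotone in the sample mean, so the most powerful level-α test rejects when the sample mean is large.

# Sketch (not from the source): most powerful test of H0: mu = 0 vs H1: mu = 1
# with N(mu, 1) data; the likelihood ratio is increasing in the sample mean.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, alpha = 25, 0.05
threshold = norm.ppf(1 - alpha) / np.sqrt(n)    # reject H0 when xbar > threshold

xbar_h0 = rng.normal(0.0, 1.0, size=(100_000, n)).mean(axis=1)
xbar_h1 = rng.normal(1.0, 1.0, size=(100_000, n)).mean(axis=1)
print("empirical size :", np.mean(xbar_h0 > threshold))   # about 0.05
print("empirical power:", np.mean(xbar_h1 > threshold))   # close to 1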
Mechanism design, sometimes called implementation theory or institution design, is a branch of economics, social choice, and game theory that deals with designing game forms to implement a given social choice function. Because it starts with the end of the game and then works backwards to find a game that implements it, it is sometimes described as reverse game theory.
In Bayesian probability theory, if, given a likelihood function $p(x \mid \theta)$, the posterior distribution $p(\theta \mid x)$ is in the same probability distribution family as the prior probability distribution $p(\theta)$, the prior and posterior are then called conjugate distributions with respect to that likelihood function and the prior is called a conjugate prior for the likelihood function $p(x \mid \theta)$.
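A minimal sketch of conjugacy, using the standard Beta-binomial pairing (the prior parameters and data are chosen here for illustration): the posterior is again a Beta distribution with updated parameters.

# Sketch (not from the source): the Beta distribution is conjugate to the
# binomial likelihood, so the posterior is Beta(a + successes, b + failures).
from scipy.stats import beta

a, b = 2.0, 2.0            # hypothetical Beta(2, 2) prior on the success probability
successes, trials = 7, 10  # observed data

posterior = beta(a + successes, b + trials - successes)   # Beta(9, 5)
print("posterior mean:", posterior.mean())                # (a + 7) / (a + b + 10) = 9/14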
In mathematics, the Mahler measure of a polynomial with complex coefficients is defined as $M(p) = |a| \prod_{i=1}^{n} \max(1, |\alpha_i|)$, where $p(z) = a(z - \alpha_1)(z - \alpha_2)\cdots(z - \alpha_n)$ is its factorization over the complex numbers.
In probability theory, a distribution is said to be stable if a linear combination of two independent random variables with this distribution has the same distribution, up to location and scale parameters. A random variable is said to be stable if its distribution is stable. The stable distribution family is also sometimes referred to as the Lévy alpha-stable distribution, after Paul Lévy, the first mathematician to have studied it.
In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events. In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions.
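A short sketch computing Rényi entropies of an example discrete distribution for several orders α, including the Shannon and min-entropy limiting cases; the distribution and orders are chosen here for illustration.

# Sketch (not from the source): Renyi entropy H_a(p) = log(sum p_i^a) / (1 - a),
# which recovers Hartley entropy (a = 0), Shannon entropy (a -> 1),
# collision entropy (a = 2) and min-entropy (a -> inf).
import numpy as np

def renyi_entropy(p, a):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(a, 1.0):                 # Shannon limit
        return -np.sum(p * np.log(p))
    if np.isinf(a):                        # min-entropy limit
        return -np.log(p.max())
    return np.log(np.sum(p ** a)) / (1.0 - a)

p = [0.5, 0.25, 0.125, 0.125]
for a in [0.0, 0.5, 1.0, 2.0, np.inf]:
    print(f"H_{a}:", renyi_entropy(p, a))  # nonincreasing in a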
In mathematics, subharmonic and superharmonic functions are important classes of functions used extensively in partial differential equations, complex analysis and potential theory.
In statistics, the monotone likelihood ratio property is a property of the ratio of two probability density functions (PDFs). Formally, distributions $f(x)$ and $g(x)$ bear the property if for every $x_1 > x_0$, $\frac{f(x_1)}{g(x_1)} \ge \frac{f(x_0)}{g(x_0)}$, that is, if the ratio is nondecreasing in the argument $x$.
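As a numerical illustration (the normal location family is chosen here as a standard example), the ratio of the N(1, 1) density to the N(0, 1) density equals exp(x − 1/2) and is nondecreasing in x.

# Sketch (not from the source): check the monotone likelihood ratio property
# for f = N(1, 1) and g = N(0, 1) on a grid.
import numpy as np
from scipy.stats import norm

x = np.linspace(-5, 5, 1001)
ratio = norm.pdf(x, loc=1) / norm.pdf(x, loc=0)
print("ratio is nondecreasing:", bool(np.all(np.diff(ratio) >= 0)))   # expect True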
In mathematics, the Jack function is a generalization of the Jack polynomial, introduced by Henry Jack. The Jack polynomial is a homogeneous, symmetric polynomial which generalizes the Schur and zonal polynomials, and is in turn generalized by the Heckman–Opdam polynomials and Macdonald polynomials.
Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations.
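A minimal Robbins-Monro sketch (the target function and step sizes are chosen here for illustration): the recursion finds the root of f(x) = x − 2 from noisy evaluations only.

# Sketch (not from the source): Robbins-Monro stochastic approximation for the
# root of f(x) = x - 2 using only noisy observations and step sizes a_n = 1/n.
import numpy as np

rng = np.random.default_rng(42)

def noisy_f(x):
    return (x - 2.0) + rng.normal(scale=0.5)   # unbiased but noisy observation of f(x)

x = 0.0
for n in range(1, 10_001):
    x -= (1.0 / n) * noisy_f(x)                # x_{n+1} = x_n - a_n * Y_n
print("estimate of the root:", x)              # close to 2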
A ratio distribution is a probability distribution constructed as the distribution of the ratio of random variables having two other known distributions. Given two random variables X and Y, the distribution of the random variable Z that is formed as the ratio Z = X/Y is a ratio distribution.
In probability theory and statistics, the half-normal distribution is a special case of the folded normal distribution.
In economics and consumer theory, quasilinear utility functions are linear in one argument, generally the numeraire. Quasilinear preferences can be represented by the utility function $u(x_1, x_2, \ldots, x_n) = x_1 + \theta(x_2, \ldots, x_n)$, where $\theta$ is strictly concave. A useful property of the quasilinear utility function is that the Marshallian/Walrasian demand for $x_2, \ldots, x_n$ does not depend on wealth and is thus not subject to a wealth effect. The absence of a wealth effect simplifies analysis and makes quasilinear utility functions a common choice for modelling. Furthermore, when utility is quasilinear, compensating variation (CV), equivalent variation (EV), and consumer surplus are algebraically equivalent. In mechanism design, quasilinear utility ensures that agents can compensate each other with side payments.
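A small sketch of the no-wealth-effect property under an assumed quasilinear utility u(x1, x2) = x1 + 2√x2 with budget x1 + p·x2 = w: the optimal x2 equals 1/p² at every wealth level for which the solution is interior.

# Sketch (not from the source): with the hypothetical quasilinear utility
# u(x1, x2) = x1 + 2*sqrt(x2), the demand for x2 does not depend on wealth.
import numpy as np
from scipy.optimize import minimize_scalar

def demand_x2(p, w):
    # Choose x2 in [0, w/p]; x1 = w - p*x2 is spent on the numeraire.
    res = minimize_scalar(lambda x2: -((w - p * x2) + 2 * np.sqrt(x2)),
                          bounds=(0.0, w / p), method="bounded")
    return res.x

p = 0.5
for w in [10.0, 50.0, 200.0]:
    print(f"wealth {w:6.1f}: demand for x2 =", round(demand_x2(p, w), 3))  # about 1/p**2 = 4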
A product distribution is a probability distribution constructed as the distribution of the product of random variables having two other known distributions. Given two statistically independent random variables X and Y, the distribution of the random variable Z that is formed as the product is a product distribution.
In particle physics, CLs represents a statistical method for setting upper limits on model parameters, a particular form of interval estimation used for parameters that can take only non-negative values. Although CLs are said to refer to Confidence Levels, "The method's name is ... misleading, as the CLs exclusion region is not a confidence interval." It was first introduced by physicists working at the LEP experiment at CERN and has since been used by many high energy physics experiments. It is a frequentist method in the sense that the properties of the limit are defined by means of error probabilities, however it differs from standard confidence intervals in that the stated confidence level of the interval is not equal to its coverage probability. The reason for this deviation is that standard upper limits based on a most powerful test necessarily produce empty intervals with some fixed probability when the parameter value is zero, and this property is considered undesirable by most physicists and statisticians.
Monotone comparative statics is a sub-field of comparative statics that focuses on the conditions under which endogenous variables undergo monotone changes when there is a change in the exogenous parameters. Traditionally, comparative statics results in economics are obtained using the Implicit Function Theorem, an approach that requires the concavity and differentiability of the objective function as well as the interiority and uniqueness of the optimal solution. The methods of monotone comparative statics typically dispense with these assumptions and focus instead on the main property underpinning monotone comparative statics, which is a form of complementarity between the endogenous variable and the exogenous parameter. Roughly speaking, a maximization problem displays complementarity if a higher value of the exogenous parameter increases the marginal return of the endogenous variable. This guarantees that the set of solutions to the optimization problem is increasing with respect to the exogenous parameter.
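A minimal sketch of this idea with an assumed objective f(x, t) = tx − x², which has increasing differences in (x, t); the maximizer x*(t) = t/2 is increasing in the exogenous parameter t.

# Sketch (not from the source): the maximizer of the supermodular objective
# f(x, t) = t*x - x**2 moves up with the exogenous parameter t.
import numpy as np

xs = np.linspace(0.0, 5.0, 501)
for t in [1.0, 2.0, 4.0, 8.0]:
    x_star = xs[np.argmax(t * xs - xs ** 2)]
    print(f"t = {t}: argmax x = {x_star}")   # 0.5, 1.0, 2.0, 4.0 (increasing in t)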
In dual decomposition, a problem is broken into smaller subproblems and a solution to the relaxed problem is found. This method can be employed for Markov random field (MRF) optimization. Dual decomposition is applied to Markov logic programs as an inference technique.