Stochastic ordering

In probability theory and statistics, a stochastic order quantifies the concept of one random variable being "bigger" than another. These are usually partial orders, so that one random variable $A$ may be neither stochastically greater than, less than, nor equal to another random variable $B$. Many different orders exist, which have different applications.

Usual stochastic order

A real random variable $A$ is less than a random variable $B$ in the "usual stochastic order" if

$$\Pr(A > x) \le \Pr(B > x) \quad \text{for all } x \in (-\infty, \infty),$$

where $\Pr(\cdot)$ denotes the probability of an event. This is sometimes denoted $A \preceq B$ or $A \le_{\mathrm{st}} B$. If additionally $\Pr(A > x) < \Pr(B > x)$ for some $x$, then $A$ is stochastically strictly less than $B$, sometimes denoted $A \prec B$. In decision theory, under this circumstance $B$ is said to be first-order stochastically dominant over $A$.
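As a concrete illustration, here is a minimal sketch (the distributions and helper names are illustrative assumptions, not anything standard) that checks the defining inequality for two discrete distributions:

```python
# Minimal sketch: test whether A is less than B in the usual stochastic order,
# for discrete distributions given as {outcome: probability} dictionaries.
# The example distributions are illustrative, not from the article.

def survival(dist, x):
    """Pr(X > x) for a discrete distribution."""
    return sum(p for v, p in dist.items() if v > x)

def usual_stochastic_leq(dist_a, dist_b):
    """True if Pr(A > x) <= Pr(B > x) for all x.

    Both survival functions are constant between outcomes, so it suffices
    to check the inequality at every outcome of either distribution.
    """
    return all(survival(dist_a, x) <= survival(dist_b, x) + 1e-12
               for x in set(dist_a) | set(dist_b))

A = {1: 0.5, 2: 0.5}               # uniform on {1, 2}
B = {1: 0.25, 2: 0.25, 3: 0.5}    # shifts mass toward larger values
print(usual_stochastic_leq(A, B))  # True
print(usual_stochastic_leq(B, A))  # False
```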

Characterizations

The following rules describe situations when one random variable is stochastically less than or equal to another. Strict versions of some of these rules also exist.

  1. $A \preceq B$ if and only if $\operatorname{E}[u(A)] \le \operatorname{E}[u(B)]$ for all non-decreasing functions $u$ (illustrated in the sketch after this list).
  2. If $u$ is non-decreasing and $A \preceq B$, then $u(A) \preceq u(B)$.
  3. If $u : \mathbb{R}^n \to \mathbb{R}$ is increasing in each variable and $A_i$ and $B_i$, $i = 1, \dots, n$, are independent sets of random variables with $A_i \preceq B_i$ for each $i$, then $u(A_1, \dots, A_n) \preceq u(B_1, \dots, B_n)$; in particular $\sum_{i=1}^n A_i \preceq \sum_{i=1}^n B_i$. Moreover, the $i$th order statistics satisfy $A_{(i)} \preceq B_{(i)}$.
  4. If two sequences of random variables $A_i$ and $B_i$, with $A_i \preceq B_i$ for all $i$, each converge in distribution, then their limits satisfy $A \preceq B$.
  5. If $A$, $B$ and $C$ are random variables such that $\sum_c \Pr(C = c) = 1$ and $\Pr(A > u \mid C = c) \le \Pr(B > u \mid C = c)$ for all $u$ and all $c$ with $\Pr(C = c) > 0$, then $A \preceq B$.
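The first characterization can be checked numerically. The sketch below reuses the pair of illustrative distributions from the earlier sketch and a few non-decreasing test functions of my own choosing:

```python
# Minimal sketch of characterization 1: for the stochastically ordered pair
# above, E[u(A)] <= E[u(B)] holds for each non-decreasing test function u.
import math

A = {1: 0.5, 2: 0.5}
B = {1: 0.25, 2: 0.25, 3: 0.5}

def expectation(dist, u):
    return sum(p * u(v) for v, p in dist.items())

tests = [("identity", lambda x: x),
         ("logarithm", math.log),
         ("step at 1.5", lambda x: 1.0 if x > 1.5 else 0.0)]
for name, u in tests:
    print(name, expectation(A, u) <= expectation(B, u))   # True each time
```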

Other properties

If $A \preceq B$ and $\operatorname{E}[A] = \operatorname{E}[B]$, then $A \stackrel{d}{=} B$ (the random variables are equal in distribution).

Stochastic dominance

Stochastic dominance relations are a family of stochastic orderings used in decision theory: [1]

  - Zeroth-order stochastic dominance: $A \prec_{(0)} B$ if and only if $A \le B$ for all realizations of these random variables and $A < B$ for at least one realization.
  - First-order stochastic dominance: $A \prec_{(1)} B$ if and only if $\Pr(A > x) \le \Pr(B > x)$ for all $x$ and there exists $x$ such that $\Pr(A > x) < \Pr(B > x)$.
  - Second-order stochastic dominance: $A \prec_{(2)} B$ if and only if $\int_{-\infty}^{x} [F_B(t) - F_A(t)] \, dt \le 0$ for all $x$, with strict inequality at some $x$; here $F_A$ and $F_B$ are the cumulative distribution functions of $A$ and $B$.

There also exist higher-order notions of stochastic dominance. With the definitions above, we have $A \prec_{(i)} B \implies A \prec_{(i+1)} B$.
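A numerical sketch of the difference between first- and second-order dominance, using two illustrative lotteries with equal means (a point mass at 2 versus a fair coin on {0, 4}); the grid and tolerance are assumptions of the sketch:

```python
# Minimal sketch: B (a point mass at 2) dominates A (a fair coin on {0, 4})
# at second order but not at first order.

def cdf(dist, x):
    return sum(p for v, p in dist.items() if v <= x)

A = {0: 0.5, 4: 0.5}   # mean 2, spread out
B = {2: 1.0}           # mean 2, riskless

grid = [i * 0.01 for i in range(-100, 601)]   # covers both supports

# First order: Pr(A > x) <= Pr(B > x) everywhere? Fails, e.g. at x = 3.
first = all(1 - cdf(A, x) <= 1 - cdf(B, x) for x in grid)

# Second order: the running integral of F_B - F_A stays <= 0 everywhere.
integral, second = 0.0, True
for x in grid:
    integral += (cdf(B, x) - cdf(A, x)) * 0.01   # Riemann sum
    second = second and integral <= 1e-9

print(first, second)   # False True
```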

Multivariate stochastic order

An $\mathbb{R}^d$-valued random variable $A$ is less than an $\mathbb{R}^d$-valued random variable $B$ in the "usual stochastic order" if

$$\operatorname{E}[f(A)] \le \operatorname{E}[f(B)] \quad \text{for all bounded, increasing functions } f : \mathbb{R}^d \to \mathbb{R}.$$

Other types of multivariate stochastic orders exist. For instance the upper and lower orthant orders, which are similar to the usual one-dimensional stochastic order. $A$ is said to be smaller than $B$ in upper orthant order if

$$\Pr(A > x) \le \Pr(B > x) \quad \text{for all } x \in \mathbb{R}^d,$$

and $A$ is smaller than $B$ in lower orthant order if

$$\Pr(A \le x) \ge \Pr(B \le x) \quad \text{for all } x \in \mathbb{R}^d. \text{ [2]}$$

Here the event $\{A > x\}$ means that $A_i > x_i$ for every coordinate $i$.

All three order types also have integral representations; that is, for a particular order, $A$ is smaller than $B$ if and only if $\operatorname{E}[f(A)] \le \operatorname{E}[f(B)]$ for all $f$ in a class of functions $\mathcal{G}$. [3] $\mathcal{G}$ is then called the generator of the respective order.
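A minimal sketch of the upper orthant order for discrete $\mathbb{R}^2$-valued vectors; the example distributions and helper names are illustrative assumptions:

```python
# Minimal sketch: upper orthant order for discrete random vectors in R^2,
# each given as {(x1, x2): probability}.

def upper_orthant(dist, x1, x2):
    """Pr(X1 > x1 and X2 > x2)."""
    return sum(p for (v1, v2), p in dist.items() if v1 > x1 and v2 > x2)

A = {(0, 0): 0.5, (1, 1): 0.5}
B = {(0, 0): 0.25, (1, 1): 0.75}

# The joint survival function only jumps at support coordinates, so it
# suffices to check every pair of first/second coordinates that occur.
xs = sorted({v[0] for v in A} | {v[0] for v in B})
ys = sorted({v[1] for v in A} | {v[1] for v in B})
print(all(upper_orthant(A, x, y) <= upper_orthant(B, x, y)
          for x in xs for y in ys))   # True: A is smaller in upper orthant order
```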

Other dominance orders

The following stochastic orders are useful in the theory of random social choice. They are used to compare the outcomes of random social choice functions, in order to check them for efficiency or other desirable criteria. [4] The dominance orders below are listed from the most conservative to the least conservative. They are exemplified on random variables over the finite support {30, 20, 10}; a combined sketch implementing them follows the definitions below.

Deterministic dominance, denoted $A \succeq_{dd} B$, means that every possible outcome of $A$ is at least as good as every possible outcome of $B$: for all $x < y$, $\Pr[A = x] \cdot \Pr[B = y] = 0$. In other words: the smallest outcome in the support of $A$ is at least as large as the largest outcome in the support of $B$. For example, $0.5 \cdot 30 + 0.5 \cdot 20 \succeq_{dd} 0.5 \cdot 20 + 0.5 \cdot 10$.

Bilinear dominance, denoted $A \succeq_{bd} B$, means that, for every pair of outcomes, the probability that $A$ yields the better one and $B$ yields the worse one is at least as large as the probability the other way around: for all $x < y$, $\Pr[A = x] \cdot \Pr[B = y] \le \Pr[A = y] \cdot \Pr[B = x]$. For example, $0.6 \cdot 30 + 0.4 \cdot 10 \succeq_{bd} 0.4 \cdot 30 + 0.6 \cdot 10$.

Stochastic dominance (already mentioned above), denoted $A \succeq_{sd} B$, means that, for every possible outcome $x$, the probability that $A$ yields at least $x$ is at least as large as the probability that $B$ yields at least $x$: for all $x$, $\Pr[A \ge x] \ge \Pr[B \ge x]$. For example, $0.5 \cdot 30 + 0.5 \cdot 20 \succeq_{sd} 0.5 \cdot 30 + 0.5 \cdot 10$.

Pairwise-comparison dominance, denoted $A \succeq_{pc} B$, means that the probability that $A$ yields a better outcome than $B$ is at least as large as the probability of the opposite, for independent draws: $\Pr[A > B] \ge \Pr[B > A]$. For example, $0.6 \cdot 30 + 0.4 \cdot 10 \succeq_{pc} 1 \cdot 20$.

Downward-lexicographic dominance, denoted $A \succeq_{dl} B$, means that $A$ has a larger probability than $B$ of returning the best outcome, or both $A$ and $B$ have the same probability of returning the best outcome but $A$ has a larger probability than $B$ of returning the second-best outcome, and so on. Upward-lexicographic dominance is defined analogously based on the probability of returning the worst outcomes. See lexicographic dominance.
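The combined sketch referenced above implements the four non-lexicographic relations for lotteries over {30, 20, 10}. The functions follow the formulas in the definitions; the example lotteries are illustrative:

```python
# Minimal combined sketch of the dominance relations above, for lotteries
# over the support {30, 20, 10} given as {outcome: probability} dictionaries.

SUPPORT = (10, 20, 30)

def p(D, x):
    return D.get(x, 0.0)

def dd(A, B):   # deterministic: no outcome of A may fall below an outcome of B
    return all(p(A, x) * p(B, y) == 0
               for x in SUPPORT for y in SUPPORT if x < y)

def bd(A, B):   # bilinear: "A better, B worse" at least as likely as the reverse
    return all(p(A, x) * p(B, y) <= p(A, y) * p(B, x)
               for x in SUPPORT for y in SUPPORT if x < y)

def sd(A, B):   # first-order stochastic dominance
    upper = lambda D, x: sum(p(D, v) for v in SUPPORT if v >= x)
    return all(upper(A, x) >= upper(B, x) for x in SUPPORT)

def pc(A, B):   # pairwise comparison, for independent draws of A and B
    beats = lambda P, Q: sum(p(P, x) * p(Q, y)
                             for x in SUPPORT for y in SUPPORT if x > y)
    return beats(A, B) >= beats(B, A)

A = {30: 0.6, 10: 0.4}
B = {20: 1.0}
print(dd(A, B), bd(A, B), sd(A, B), pc(A, B))   # False False False True
```

The printed line shows how the relations become progressively less conservative: this pair is related only by the weakest of the four, pairwise-comparison dominance.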

Other stochastic orders

Hazard rate order

The hazard rate of a non-negative random variable $X$ with absolutely continuous distribution function $F$ and density function $f$ is defined as

$$h(x) = \frac{f(x)}{1 - F(x)}.$$

Given two non-negative variables $A$ and $B$ with absolutely continuous distribution functions $F$ and $G$, and with hazard rate functions $h$ and $r$, respectively, $A$ is said to be smaller than $B$ in the hazard rate order (denoted $A \le_{hr} B$) if

$$h(x) \ge r(x)$$ for all $x \ge 0$,

or equivalently if

$$\frac{1 - F(x)}{1 - G(x)}$$ is decreasing in $x$.
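For example, an exponential variable with rate $\lambda$ has constant hazard rate $\lambda$, so exponentials are hazard-rate ordered by their rates. A minimal illustrative sketch:

```python
# Minimal sketch: Exp(2) is smaller than Exp(1) in the hazard rate order,
# since its hazard rate (identically 2) dominates the other (identically 1).
import math

def hazard_exp(rate, x):
    pdf = rate * math.exp(-rate * x)   # density of Exp(rate)
    survival = math.exp(-rate * x)     # 1 - F(x)
    return pdf / survival              # equals `rate` for every x

print(all(hazard_exp(2.0, x) >= hazard_exp(1.0, x)
          for x in (0.1, 1.0, 5.0)))   # True

# Equivalent criterion: the ratio of survival functions is decreasing in x.
ratio = lambda x: math.exp(-2.0 * x) / math.exp(-1.0 * x)   # = exp(-x)
print(ratio(1.0) > ratio(2.0) > ratio(3.0))                 # True
```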

Likelihood ratio order

Let $A$ and $B$ be two continuous (or discrete) random variables with densities (or discrete densities) $f$ and $g$, respectively, such that $\frac{g(x)}{f(x)}$ increases in $x$ over the union of the supports of $A$ and $B$; in this case, $A$ is smaller than $B$ in the likelihood ratio order ($A \le_{lr} B$).
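For instance, two normal densities with equal variance and ordered means have a monotone likelihood ratio: with $f$ the density of $N(0, 1)$ and $g$ that of $N(1, 1)$, the ratio $g(x)/f(x) = e^{x - 1/2}$ is increasing. A minimal sketch (the grid is an illustrative choice):

```python
# Minimal sketch: the likelihood ratio of N(1, 1) to N(0, 1) is increasing,
# so N(0, 1) is smaller than N(1, 1) in the likelihood ratio order.
import math

def normal_pdf(x, mean):
    return math.exp(-0.5 * (x - mean) ** 2) / math.sqrt(2 * math.pi)

xs = [i * 0.5 for i in range(-6, 7)]
ratios = [normal_pdf(x, 1.0) / normal_pdf(x, 0.0) for x in xs]
print(all(a < b for a, b in zip(ratios, ratios[1:])))   # True: ratio increases
```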

Variability orders

If two variables have the same mean, they can still be compared by how "spread out" their distributions are. This is captured to a limited extent by the variance, but more fully by a range of stochastic orders.

Convex order

Convex order is a special kind of variability order. Under the convex ordering, $A$ is less than $B$ if and only if $\operatorname{E}[u(A)] \le \operatorname{E}[u(B)]$ for all convex functions $u$.
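A minimal sketch with an illustrative mean-preserving spread (a point mass at 2 versus a fair coin on {0, 4}) and a few convex test functions of my own choosing:

```python
# Minimal sketch of the convex order: A (point mass at 2) is less than
# B (fair coin on {0, 4}) since E[u(A)] <= E[u(B)] for each convex u tested.
import math

A = {2: 1.0}
B = {0: 0.5, 4: 0.5}

def expectation(dist, u):
    return sum(p * u(v) for v, p in dist.items())

for u in (lambda x: x * x, lambda x: abs(x - 2), math.exp):
    print(expectation(A, u) <= expectation(B, u))   # True each time
```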

Laplace transform order

Laplace transform order compares both size and variability of two random variables. Similar to convex order, Laplace transform order is established by comparing the expectation of a function of the random variable, where the function is from the special class $f(x) = -e^{-\alpha x}$ with $\alpha$ a positive real number. This makes the Laplace transform order an integral stochastic order with the generator set given by the function set defined above.
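For example, exponential variables admit the closed form $\operatorname{E}[e^{-sX}] = \lambda/(\lambda + s)$, which gives a direct check of the order. A minimal sketch under that assumption:

```python
# Minimal sketch: Exp(2) is smaller than Exp(1) in the Laplace transform
# order, since E[exp(-s A)] >= E[exp(-s B)] for every s > 0 (equivalently,
# E[-exp(-s A)] <= E[-exp(-s B)] for the generator functions above).

def laplace_exp(rate, s):
    return rate / (rate + s)   # closed form of E[exp(-s X)] for X ~ Exp(rate)

print(all(laplace_exp(2.0, s) >= laplace_exp(1.0, s)
          for s in (0.1, 0.5, 1.0, 5.0, 50.0)))   # True
```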

Realizable monotonicity

Consider a family of probability distributions $(P_\theta)_{\theta \in \Theta}$ on a partially ordered space $(E, \preceq)$, indexed by $\theta$ (where $(\Theta, \preceq)$ is another partially ordered space); the concept of complete or realizable monotonicity may then be defined. It means that there exists a family of random variables $(X_\theta)_\theta$ on the same probability space, such that the distribution of $X_\theta$ is $P_\theta$ and $X_\theta \preceq X_{\theta'}$ almost surely whenever $\theta \preceq \theta'$. In other words, it asserts the existence of a monotone coupling. See [5]
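On the real line with the usual stochastic order, inverse-transform sampling provides such a monotone coupling: applying each quantile function to the same uniform draw yields almost surely ordered variables. A minimal sketch with illustrative discrete distributions:

```python
# Minimal sketch of a monotone coupling via inverse-transform sampling:
# feeding one shared uniform draw through both quantile functions couples the
# two distributions so that the stochastically smaller one is never above.
import random

def quantile(dist, u):
    """Generalized inverse CDF of a discrete distribution {value: prob}."""
    acc = 0.0
    for v in sorted(dist):
        acc += dist[v]
        if u <= acc:
            return v
    return max(dist)   # guard against floating-point round-off near u = 1

D_small = {1: 0.5, 2: 0.5}             # stochastically smaller
D_large = {1: 0.25, 2: 0.25, 3: 0.5}   # stochastically larger

random.seed(0)
pairs = [(quantile(D_small, u), quantile(D_large, u))
         for u in (random.random() for _ in range(10_000))]
print(all(x <= y for x, y in pairs))   # True: the coupling is monotone
```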


References

  1. Perrakis, Stylianos (2019). Stochastic Dominance Option Pricing. Palgrave Macmillan, Cham. doi:10.1007/978-3-030-11590-6_1. ISBN 978-3-030-11589-0.
  2. Lux, Thibaut; Papapantoleon, Antonis (2017). "Improved Fréchet–Hoeffding bounds on d-copulas and applications in model-free finance". Annals of Applied Probability 27: 3633–3671. (Definition 2.3.)
  3. Müller, Alfred; Stoyan, Dietrich (2002). Comparison Methods for Stochastic Models and Risks. Wiley, Chichester. ISBN 0-471-49446-1. p. 2.
  4. Brandt, Felix (2017-10-26). "Rolling the Dice: Recent Results in Probabilistic Social Choice". In Endriss, Ulle (ed.). Trends in Computational Social Choice. Lulu.com. ISBN 978-1-326-91209-3.
  5. Fill, James Allen; Machida, Motoya (2001). "Stochastic Monotonicity and Realizable Monotonicity". The Annals of Probability 29 (2): 938–978. Institute of Mathematical Statistics. https://www.jstor.org/stable/2691998
