Markov's inequality

[Figure: Markov Inequality.svg] Markov's inequality gives an upper bound for the measure of the set (indicated in red) where f(x) exceeds a given level ε. The bound combines the level ε with the average value of f.

In probability theory, Markov's inequality gives an upper bound on the probability that a non-negative random variable is greater than or equal to some positive constant. Markov's inequality is tight in the sense that for each chosen positive constant, there exists a random variable such that the inequality is in fact an equality. [1]


It is named after the Russian mathematician Andrey Markov, although it appeared earlier in the work of Pafnuty Chebyshev (Markov's teacher), and many sources, especially in analysis, refer to it as Chebyshev's inequality (sometimes, calling it the first Chebyshev inequality, while referring to Chebyshev's inequality as the second Chebyshev inequality) or Bienaymé's inequality.

Markov's inequality (and other similar inequalities) relate probabilities to expectations, and provide (frequently loose but still useful) bounds for the cumulative distribution function of a random variable. Markov's inequality can also be used to upper bound the expectation of a non-negative random variable in terms of its distribution function.

Statement

If X is a nonnegative random variable and a > 0, then the probability that X is at least a is at most the expectation of X divided by a: [1]

    P(X ≥ a) ≤ E(X) / a.

When E(X) > 0, we can take a = ã·E(X) for ã > 0 to rewrite the previous inequality as

    P(X ≥ ã·E(X)) ≤ 1 / ã.
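
The bound can be checked numerically. The following is a minimal Monte Carlo sketch; the exponential distribution and the sample size are arbitrary illustrative choices, not part of the statement:

    import numpy as np

    # Minimal Monte Carlo check of P(X >= a) <= E(X)/a for a nonnegative X.
    # The exponential distribution here is an arbitrary illustrative choice.
    rng = np.random.default_rng(0)
    x = rng.exponential(scale=2.0, size=1_000_000)  # E(X) = 2

    for a in [1.0, 2.0, 5.0, 10.0]:
        empirical = np.mean(x >= a)  # estimate of P(X >= a)
        bound = x.mean() / a         # Markov bound E(X)/a (using the estimated mean)
        print(f"a={a:5.1f}  P(X>=a) ~ {empirical:.4f}  bound = {bound:.4f}")

For every a the empirical tail probability stays below the bound; the bound is only informative once a exceeds E(X).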

In the language of measure theory, Markov's inequality states that if (X, Σ, μ) is a measure space, f is a measurable extended real-valued function, and ε > 0, then

    μ({x ∈ X : |f(x)| ≥ ε}) ≤ (1/ε) ∫_X |f| dμ.

This measure-theoretic definition is sometimes referred to as Chebyshev's inequality. [2]


Extended version for nondecreasing functions

If φ is a nondecreasing nonnegative function, X is a (not necessarily nonnegative) random variable, and φ(a) > 0, then [3]

    P(X ≥ a) ≤ E(φ(X)) / φ(a).

An immediate corollary, using higher moments of X supported on values larger than 0, is

    P(|X| ≥ a) = P(|X|^n ≥ a^n) ≤ E(|X|^n) / a^n.
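
As a rough illustration of how higher moments tighten the tail bound, here is a small simulation sketch; the standard normal distribution and the level a = 3 are arbitrary choices made for the example:

    import numpy as np

    # Compare P(|X| >= a) with the moment bounds E(|X|^n) / a^n for several n.
    # The standard normal X is an arbitrary illustrative choice.
    rng = np.random.default_rng(1)
    x = rng.standard_normal(1_000_000)
    a = 3.0

    print(f"P(|X| >= {a}) ~ {np.mean(np.abs(x) >= a):.5f}")
    for n in [1, 2, 4, 6]:
        bound = np.mean(np.abs(x) ** n) / a ** n  # empirical E(|X|^n) / a^n
        print(f"n={n}: moment bound ~ {bound:.5f}")

For this distribution and level, larger n gives a smaller bound, which is the usual motivation for applying the inequality to a higher moment (or, in the Chernoff approach, to φ(x) = exp(tx)).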


The uniformly randomized Markov's inequality

If X is a nonnegative random variable and a > 0, and U is a uniformly distributed random variable on [0, 1] that is independent of X, then [4]

    P(X ≥ Ua) ≤ E(X) / a.

Since U is almost surely smaller than one, this bound is strictly stronger than Markov's inequality. Remarkably, U cannot be replaced by any constant smaller than one, meaning that deterministic improvements to Markov's inequality cannot exist in general. While Markov's inequality holds with equality for distributions supported on {0, a}, the above randomized variant holds with equality for any distribution that is bounded on [0, a].
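
A small simulation sketch of the randomized bound follows; the exponential distribution, the level a = 4, and the sample size are arbitrary illustrative choices:

    import numpy as np

    # Compare P(X >= a), P(X >= U*a) and the common bound E(X)/a,
    # where U ~ Uniform(0, 1) is drawn independently of X.
    rng = np.random.default_rng(2)
    n = 1_000_000
    x = rng.exponential(scale=1.0, size=n)  # E(X) = 1
    u = rng.uniform(0.0, 1.0, size=n)       # independent of x
    a = 4.0

    print(f"P(X >= a)    ~ {np.mean(x >= a):.4f}")      # ordinary Markov event
    print(f"P(X >= U*a)  ~ {np.mean(x >= u * a):.4f}")  # randomized event
    print(f"E(X)/a       ~ {x.mean() / a:.4f}")         # shared upper bound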


Proofs

We separate the case in which the measure space is a probability space from the more general case because the probability case is more accessible for the general reader.

Intuition

For a nonnegative random variable X and a > 0, write

    E(X) = P(X < a) · E(X | X < a) + P(X ≥ a) · E(X | X ≥ a),

where E(X | X < a) is larger than or equal to 0 because the random variable X is non-negative, and E(X | X ≥ a) is larger than or equal to a because the conditional expectation only takes into account values larger than or equal to a that the random variable X can take.

Property 1: P(X < a) · E(X | X < a) ≥ 0

Given a non-negative random variable X, the conditional expectation E(X | X < a) ≥ 0 because X ≥ 0. Also, probabilities are always non-negative, i.e., P(X < a) ≥ 0. Thus, the product:

    P(X < a) · E(X | X < a) ≥ 0.

This is intuitive since conditioning on X < a still results in non-negative values, ensuring the product remains non-negative.

Property 2: P(X ≥ a) · E(X | X ≥ a) ≥ a · P(X ≥ a)

For X ≥ a, the expected value given X ≥ a is at least a. Multiplying both sides by P(X ≥ a), we get:

    P(X ≥ a) · E(X | X ≥ a) ≥ a · P(X ≥ a).

This is intuitive since all values considered are at least a, making their average also greater than or equal to a.

Hence intuitively, E(X) ≥ P(X ≥ a) · E(X | X ≥ a) ≥ a · P(X ≥ a), which directly leads to P(X ≥ a) ≤ E(X) / a.
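
The decomposition of E(X) used above can be verified numerically; in the sketch below the gamma distribution and the level a = 5 are arbitrary illustrative choices:

    import numpy as np

    # Check E(X) = P(X < a)*E(X | X < a) + P(X >= a)*E(X | X >= a) >= a*P(X >= a).
    # The gamma distribution and the level a = 5 are arbitrary illustrative choices.
    rng = np.random.default_rng(3)
    x = rng.gamma(shape=2.0, scale=1.5, size=1_000_000)  # E(X) = 3
    a = 5.0

    below, above = x[x < a], x[x >= a]
    p_below, p_above = below.size / x.size, above.size / x.size
    decomposition = p_below * below.mean() + p_above * above.mean()

    print(f"E(X)           ~ {x.mean():.4f}")
    print(f"decomposition  ~ {decomposition:.4f}")  # should match E(X)
    print(f"a * P(X >= a)  ~ {a * p_above:.4f}")    # lower bound on E(X)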

Probability-theoretic proof

Method 1: From the definition of expectation (writing f for the probability density of X):

    E(X) = ∫_{-∞}^{∞} x f(x) dx.

However, X is a non-negative random variable, thus

    E(X) = ∫_{-∞}^{∞} x f(x) dx = ∫_0^∞ x f(x) dx.

From this we can derive

    E(X) = ∫_0^a x f(x) dx + ∫_a^∞ x f(x) dx ≥ ∫_a^∞ x f(x) dx ≥ ∫_a^∞ a f(x) dx = a ∫_a^∞ f(x) dx = a P(X ≥ a).

From here, dividing through by a allows us to see that

    P(X ≥ a) ≤ E(X) / a.

Method 2: For any event A, let I_A be the indicator random variable of A, that is, I_A = 1 if A occurs and I_A = 0 otherwise.

Using this notation, we have I_(X ≥ a) = 1 if the event X ≥ a occurs, and I_(X ≥ a) = 0 if X < a. Then, given a > 0,

    a I_(X ≥ a) ≤ X,

which is clear if we consider the two possible values of I_(X ≥ a). If X < a, then I_(X ≥ a) = 0, and so a I_(X ≥ a) = 0 ≤ X. Otherwise, we have X ≥ a, for which I_(X ≥ a) = 1 and so a I_(X ≥ a) = a ≤ X.

Since expectation is a monotonically increasing operator, taking the expectation of both sides of an inequality cannot reverse it. Therefore,

    E(a I_(X ≥ a)) ≤ E(X).

Now, using linearity of expectation, the left side of this inequality is the same as

    a E(I_(X ≥ a)) = a (1 · P(X ≥ a) + 0 · P(X < a)) = a P(X ≥ a).

Thus we have

    a P(X ≥ a) ≤ E(X),

and since a > 0, we can divide both sides by a.
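
The pointwise inequality a·I_(X ≥ a) ≤ X at the heart of Method 2 is easy to check by simulation; the exponential distribution and the level a = 6 below are arbitrary illustrative choices:

    import numpy as np

    # Verify the pointwise inequality a * I_(X >= a) <= X and compare expectations.
    # The exponential distribution and the level a = 6 are arbitrary illustrative choices.
    rng = np.random.default_rng(4)
    x = rng.exponential(scale=3.0, size=1_000_000)
    a = 6.0

    indicator = (x >= a).astype(float)
    assert np.all(a * indicator <= x)  # holds pointwise since X is nonnegative

    print(f"a * P(X >= a) ~ {a * indicator.mean():.4f}")  # equals a * E(I) by linearity
    print(f"E(X)          ~ {x.mean():.4f}")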

Measure-theoretic proof

We may assume that the function f is non-negative, since only its absolute value enters in the inequality. Now, consider the real-valued function s on X given by

    s(x) = ε if f(x) ≥ ε, and s(x) = 0 if f(x) < ε.

Then 0 ≤ s(x) ≤ f(x). By the definition of the Lebesgue integral,

    ∫_X f(x) dμ ≥ ∫_X s(x) dμ = ε μ({x ∈ X : f(x) ≥ ε}),

and since ε > 0, both sides can be divided by ε, obtaining

    μ({x ∈ X : f(x) ≥ ε}) ≤ (1/ε) ∫_X f dμ.
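
A minimal numerical illustration of the measure-theoretic form, on the unit interval with Lebesgue measure and the arbitrary choices f(x) = x² and ε = 0.25:

    import numpy as np

    # Illustrate mu({f >= eps}) <= (1/eps) * integral of f on ([0, 1], Lebesgue measure).
    # The function f(x) = x^2 and the level eps = 0.25 are arbitrary choices.
    xs = np.linspace(0.0, 1.0, 1_000_001)
    f = xs ** 2
    eps = 0.25

    measure = np.mean(f >= eps)  # fraction of the grid, approximating mu({f >= eps})
    integral = f.mean()          # Riemann approximation of the integral of f over [0, 1]
    print(f"mu(f >= eps)     ~ {measure:.4f}")          # exact value: 0.5
    print(f"(1/eps) * int f  ~ {integral / eps:.4f}")   # exact value: 4/3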

Discrete case

We now provide a proof for the special case when X is a discrete random variable which only takes on non-negative integer values.

Let a be a positive integer. By definition,

    a P(X ≥ a) = a P(X = a) + a P(X = a + 1) + a P(X = a + 2) + ⋯
               ≤ a P(X = a) + (a + 1) P(X = a + 1) + (a + 2) P(X = a + 2) + ⋯
               ≤ P(X = 1) + 2 P(X = 2) + 3 P(X = 3) + ⋯ + a P(X = a) + (a + 1) P(X = a + 1) + ⋯
               = E(X).

Dividing by a yields the desired result.
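
The same chain of inequalities can be observed empirically for an integer-valued distribution; the Poisson distribution and the level a = 7 below are arbitrary illustrative choices:

    import numpy as np

    # For an integer-valued X, compare a*P(X >= a), the tail sum of k*P(X = k), and E(X).
    # The Poisson distribution is an arbitrary illustrative choice.
    rng = np.random.default_rng(5)
    x = rng.poisson(lam=3.0, size=1_000_000)  # E(X) = 3
    a = 7

    tail_sum = sum(k * np.mean(x == k) for k in range(a, int(x.max()) + 1))
    print(f"a * P(X >= a)              ~ {a * np.mean(x >= a):.5f}")
    print(f"sum over k>=a of k*P(X=k)  ~ {tail_sum:.5f}")
    print(f"E(X)                       ~ {x.mean():.5f}")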

Corollaries

Chebyshev's inequality

Chebyshev's inequality uses the variance to bound the probability that a random variable deviates far from the mean. Specifically,

    P(|X − E(X)| ≥ a) ≤ Var(X) / a²

for any a > 0. [3] Here Var(X) is the variance of X, defined as:

    Var(X) = E[(X − E(X))²].

Chebyshev's inequality follows from Markov's inequality by considering the random variable

    (X − E(X))²

and the constant a², for which Markov's inequality reads

    P((X − E(X))² ≥ a²) ≤ E[(X − E(X))²] / a².

This argument can be summarized (where "MI" indicates use of Markov's inequality):

    P(|X − E(X)| ≥ a) = P((X − E(X))² ≥ a²) ≤ (MI) E[(X − E(X))²] / a² = Var(X) / a².
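
The reduction can also be checked numerically; in the sketch below the normal distribution (mean 10, standard deviation 2) and the level a = 5 are arbitrary illustrative choices:

    import numpy as np

    # Apply Markov's inequality to (X - E(X))^2 with constant a^2 to recover Chebyshev's bound.
    # The normal distribution here is an arbitrary illustrative choice.
    rng = np.random.default_rng(6)
    x = rng.normal(loc=10.0, scale=2.0, size=1_000_000)  # Var(X) = 4
    a = 5.0

    dev_sq = (x - x.mean()) ** 2
    print(f"P(|X - E(X)| >= a)     ~ {np.mean(np.abs(x - x.mean()) >= a):.5f}")
    print(f"P((X - E(X))^2 >= a^2) ~ {np.mean(dev_sq >= a ** 2):.5f}")  # same event as above
    print(f"Var(X) / a^2           ~ {dev_sq.mean() / a ** 2:.5f}")     # Markov/Chebyshev bound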

Other corollaries

  1. The "monotonic" result can be demonstrated by:
  2. The result that, for a nonnegative random variable X, the quantile function of X satisfies:
    the proof using
  3. Let be a self-adjoint matrix-valued random variable and . Then
    which can be proved similarly. [5]
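
Referring to corollary 2 above, here is a minimal sketch of the quantile bound Q_X(1 − p) ≤ E(X)/p; the exponential distribution is an arbitrary illustrative choice:

    import numpy as np

    # Check Q_X(1 - p) <= E(X)/p for a nonnegative X; the exponential distribution
    # is an arbitrary illustrative choice.
    rng = np.random.default_rng(7)
    x = rng.exponential(scale=2.0, size=1_000_000)  # E(X) = 2

    for p in [0.5, 0.1, 0.01]:
        quantile = np.quantile(x, 1.0 - p)  # empirical Q_X(1 - p)
        bound = x.mean() / p                # E(X) / p
        print(f"p={p:5.2f}  Q_X(1-p) ~ {quantile:8.3f}  bound ~ {bound:8.3f}")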

Examples

Assuming no income is negative, Markov's inequality shows that no more than 10% (1/10) of the population can have more than 10 times the average income. [6]

Another simple example is as follows: Andrew makes 4 mistakes on average on his Statistics course tests. The best upper bound on the probability that Andrew will make at least 10 mistakes is 0.4, since

    P(X ≥ 10) ≤ E(X) / 10 = 4 / 10 = 0.4.

Note that Andrew might make exactly 10 mistakes with probability 0.4 and no mistakes with probability 0.6; the expectation is then exactly 4 mistakes, so the bound 0.4 is attained.
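
A short sketch of the extremal two-point distribution described in this example (taking the values 0 and 10 with probabilities 0.6 and 0.4):

    import numpy as np

    # The two-point distribution P(X = 10) = 0.4, P(X = 0) = 0.6 attains the bound:
    # E(X) = 4 and P(X >= 10) = E(X)/10 = 0.4 exactly.
    rng = np.random.default_rng(8)
    x = rng.choice([0, 10], size=1_000_000, p=[0.6, 0.4])

    print(f"E(X)        ~ {x.mean():.3f}")          # about 4 mistakes on average
    print(f"P(X >= 10)  ~ {np.mean(x >= 10):.3f}")  # about 0.4, matching the Markov bound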

See also

  Expected value
  Monotone convergence theorem
  Wiener process
  Chebyshev's inequality
  Law of large numbers
  Moment-generating function
  Fatou's lemma
  Jensen's inequality
  Chernoff bound
  Hoeffding's inequality
  Consistent estimator
  Doob martingale
  Bernstein inequalities
  McDiarmid's inequality
  Uniform integrability
  Multidimensional Chebyshev's inequality
  Order in probability notation
  Uniform convergence in probability
  Proofs of convergence of random variables
  Concentration inequality

References

  1. Huber, Mark (2019-11-26). "Halving the Bounds for the Markov, Chebyshev, and Chernoff Inequalities Using Smoothing". The American Mathematical Monthly. 126 (10): 915–927. arXiv:1803.06361. doi:10.1080/00029890.2019.1656484. ISSN 0002-9890.
  2. Stein, E. M.; Shakarchi, R. (2005). Real Analysis. Princeton Lectures in Analysis. Vol. 3 (1st ed.). p. 91.
  3. Lin, Zhengyan (2010). Probability Inequalities. Springer. p. 52.
  4. Ramdas, Aaditya; Manole, Tudor. "Randomized and Exchangeable Improvements of Markov's, Chebyshev's and Chernoff's Inequalities". arXiv:2304.02611.
  5. Tu, Stephen (2017-11-04). "Markov's Inequality for Matrices". Retrieved May 27, 2024.
  6. Ross, Kevin. "5.4 Probability inequalities". An Introduction to Probability and Simulation.