Stein's method

Stein's method is a general method in probability theory to obtain bounds on the distance between two probability distributions with respect to a probability metric. It was introduced by Charles Stein, who first published it in 1972, [1] to obtain a bound between the distribution of a sum of an $m$-dependent sequence of random variables and a standard normal distribution in the Kolmogorov (uniform) metric, and hence to prove not only a central limit theorem but also bounds on the rates of convergence for the given metric.

History

At the end of the 1960s, unsatisfied with the by-then known proofs of a specific central limit theorem, Charles Stein developed a new way of proving the theorem for his statistics lecture. [2] His seminal paper was presented in 1970 at the Sixth Berkeley Symposium and published in the corresponding proceedings. [1]

Later, his Ph.D. student Louis Chen Hsiao Yun modified the method so as to obtain approximation results for the Poisson distribution; [3] therefore the Stein method applied to the problem of Poisson approximation is often referred to as the Stein–Chen method.

Probably the most important contributions are the monograph by Stein (1986), where he presents his view of the method and the concept of auxiliary randomisation, in particular using exchangeable pairs, and the articles by Barbour (1988) and Götze (1991), who introduced the so-called generator interpretation, which made it possible to easily adapt the method to many other probability distributions. An important contribution was also an article by Bolthausen (1984) on the so-called combinatorial central limit theorem.

In the 1990s the method was adapted to a variety of distributions, such as Gaussian processes by Barbour (1990), the binomial distribution by Ehm (1991), Poisson processes by Barbour and Brown (1992), the Gamma distribution by Luk (1994), and many others.

The method gained further popularity in the machine learning community in the mid 2010s, following the development of computable Stein discrepancies and the diverse applications and algorithms based on them.

The basic approach

Probability metrics

Stein's method is a way to bound the distance between two probability distributions using a specific probability metric.

Let the metric be given in the form

$$d(P,Q) = \sup_{h \in \mathcal{H}} \left| \int h \, dP - \int h \, dQ \right| = \sup_{h \in \mathcal{H}} \left| E h(W) - E h(Y) \right| \qquad (1.1)$$

Here, $P$ and $Q$ are probability measures on a measurable space $\mathcal{X}$, $W$ and $Y$ are random variables with distribution $P$ and $Q$ respectively, $E$ is the usual expectation operator and $\mathcal{H}$ is a set of functions from $\mathcal{X}$ to the set of real numbers. The set $\mathcal{H}$ has to be large enough, so that the above definition indeed yields a metric.

Important examples are the total variation metric, where we let $\mathcal{H}$ consist of all the indicator functions of measurable sets, the Kolmogorov (uniform) metric for probability measures on the real numbers, where we consider all the half-line indicator functions, and the Lipschitz (first order Wasserstein; Kantorovich) metric, where the underlying space is itself a metric space and we take the set $\mathcal{H}$ to be all Lipschitz-continuous functions with Lipschitz constant 1. However, note that not every metric can be represented in the form (1.1).
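Such metrics can be evaluated numerically for concrete distributions. The following stdlib-only sketch (the standardized Binomial(10, 1/2) as a stand-in for the "complicated" distribution is an illustrative assumption, not from the article) computes the Kolmogorov distance to the standard normal, taking the supremum over the jump points of the discrete CDF:

```python
# Minimal numerical sketch: the Kolmogorov metric
# d_K(P, Q) = sup_x |F_P(x) - F_Q(x)| between a standardized
# Binomial(10, 1/2) and the standard normal, using only the stdlib.
from math import comb, sqrt
from statistics import NormalDist

n, p = 10, 0.5
mean, std = n * p, sqrt(n * p * (1 - p))

def binom_cdf(k):
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

Phi = NormalDist().cdf

# The sup over x is attained at the jump points of the discrete CDF:
# compare both the left and right limits at each standardized atom.
d_K = max(
    max(abs(binom_cdf(k) - Phi((k - mean) / std)),
        abs(binom_cdf(k - 1) - Phi((k - mean) / std)))
    for k in range(n + 1)
)
print(round(d_K, 4))   # ~ 0.123, attained at the central atom k = 5
```

The same skeleton works for any distribution with a computable CDF; only `binom_cdf` would change.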

In what follows, $P$ is a complicated distribution (e.g., the distribution of a sum of dependent random variables), which we want to approximate by a much simpler and tractable distribution $Q$ (e.g., the standard normal distribution).

The Stein operator

We assume now that the distribution $Q$ is a fixed distribution; in what follows we shall in particular consider the case where $Q$ is the standard normal distribution, which serves as a classical example.

First of all, we need an operator $\mathcal{A}$, which acts on functions $f$ from $\mathcal{X}$ to the set of real numbers and 'characterizes' the distribution $Q$ in the sense that the following equivalence holds:

$$E (\mathcal{A}f)(Y) = 0 \text{ for all } f \quad \iff \quad Y \text{ has distribution } Q \qquad (2.1)$$

We call such an operator the Stein operator.

For the standard normal distribution, Stein's lemma yields such an operator:

$$E \left( f'(Y) - Y f(Y) \right) = 0 \text{ for all } f \in C_b^1 \quad \iff \quad Y \text{ has standard normal distribution} \qquad (2.2)$$

Thus, we can take

$$(\mathcal{A}f)(x) = f'(x) - x f(x) \qquad (2.3)$$
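The characterization behind Stein's lemma can be checked by simulation. In the sketch below (the test function $f(x) = \sin(x)$ and the sample sizes are illustrative assumptions), $E[f'(Y) - Y f(Y)]$ comes out near zero for standard normal samples but not for a centered uniform with the same mean and variance:

```python
# Monte Carlo sketch: E[f'(Y) - Y f(Y)] with f = sin vanishes when Y is
# standard normal, but generally not when Y merely has mean 0, variance 1.
import math
import random

random.seed(0)
N = 200_000

def stein_term(y):                      # f'(y) - y f(y) with f = sin
    return math.cos(y) - y * math.sin(y)

normal = [random.gauss(0.0, 1.0) for _ in range(N)]
# Uniform on [-sqrt(3), sqrt(3)] also has mean 0 and variance 1.
s3 = math.sqrt(3.0)
unif = [random.uniform(-s3, s3) for _ in range(N)]

mean_normal = sum(stein_term(y) for y in normal) / N
mean_unif = sum(stein_term(y) for y in unif) / N
print(round(mean_normal, 3), round(mean_unif, 3))   # first ~ 0, second ~ -0.16
```

The nonzero value for the uniform case (analytically $\approx -0.16$ for this $f$) is exactly what makes the operator useful: the size of $E(\mathcal{A}f)(W)$ measures deviation from normality.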

There are in general infinitely many such operators, and it still remains an open question which one to choose. However, it seems that for many distributions there is a particularly good one, like (2.3) for the normal distribution.

There are different ways to find Stein operators. [4]

The Stein equation

$P$ is close to $Q$ with respect to $d$ if the difference of expectations in (1.1) is close to 0. We hope now that the operator $\mathcal{A}$ exhibits the same behavior: if $P = Q$ then $E(\mathcal{A}f)(W) = 0$, and hopefully if $P \approx Q$ we have $E(\mathcal{A}f)(W) \approx 0$.

It is usually possible to define a function $f = f_h$ such that

$$(\mathcal{A}f)(x) = h(x) - E h(Y) \quad \text{for all } x \qquad (3.1)$$

We call (3.1) the Stein equation. Replacing $x$ by $W$ and taking expectation with respect to $W$, we get

$$E(\mathcal{A}f)(W) = E h(W) - E h(Y) \qquad (3.2)$$

Now all the effort is worthwhile only if the left-hand side of (3.2) is easier to bound than the right-hand side. This is, surprisingly, often the case.

If $Q$ is the standard normal distribution and we use (2.3), then the corresponding Stein equation is

$$f'(x) - x f(x) = h(x) - E h(Y) \quad \text{for all } x \qquad (3.3)$$

If the probability distribution $Q$ has an absolutely continuous (with respect to the Lebesgue measure) density $q$, then [4]

$$(\mathcal{A}f)(x) = f'(x) + f(x) \frac{q'(x)}{q(x)} \qquad (3.4)$$
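As a sanity check of this density-based construction (a sketch, not from the article; the choice $Q = \mathrm{Exp}(1)$ and the test function $f(x) = \sin(x)$ are illustrative assumptions), note that for $q(x) = e^{-x}$ we get $q'(x)/q(x) = -1$, so the operator becomes $(\mathcal{A}f)(x) = f'(x) - f(x)$; for the expectation identity to hold, $f$ should also respect the boundary, here $f(0) = 0$:

```python
# Monte Carlo sketch: for Q = Exp(1) the density-based operator is
# (Af)(x) = f'(x) - f(x).  With f = sin (which satisfies f(0) = 0),
# E(Af)(Y) = E[cos(Y) - sin(Y)] should vanish for Y ~ Exp(1).
import math
import random

random.seed(1)
N = 200_000
samples = [random.expovariate(1.0) for _ in range(N)]

mean_Af = sum(math.cos(y) - math.sin(y) for y in samples) / N
print(round(mean_Af, 3))   # ~ 0: E cos(Y) = E sin(Y) = 1/2 for Y ~ Exp(1)
```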

Solving the Stein equation

Analytic methods. Equation (3.3) can be easily solved explicitly:

$$f(x) = e^{x^2/2} \int_{-\infty}^x \left( h(s) - E h(Y) \right) e^{-s^2/2} \, ds \qquad (4.1)$$
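For instance, with $h(x) = x$ we have $E h(Y) = 0$, and the explicit solution $f(x) = e^{x^2/2} \int_{-\infty}^x s \, e^{-s^2/2} \, ds$ equals $-1$ for every $x$ (indeed $f \equiv -1$ solves $f'(x) - x f(x) = x$). A small quadrature sketch confirms this; truncating the integral at $s = -8$ is an assumption of the sketch:

```python
# Sketch: evaluate the explicit solution by trapezoidal quadrature for
# h(x) = x (so E h(Y) = 0) and check it matches the exact solution
# f = -1 of f'(x) - x f(x) = x.  The integral is truncated at s = -8.
import math

def f(x, lo=-8.0, steps=20_000):
    hstep = (x - lo) / steps
    s_vals = [lo + i * hstep for i in range(steps + 1)]
    integrand = [s * math.exp(-s * s / 2.0) for s in s_vals]
    integral = hstep * (sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))
    return math.exp(x * x / 2.0) * integral

print(round(f(0.0), 4), round(f(1.0), 4))   # both ~ -1.0
```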

Generator method. If $\mathcal{A}$ is the generator of a Markov process $(Z_t)_{t \geq 0}$ (see Barbour (1988), Götze (1991)), then the solution to (3.1) is

$$f(x) = -\int_0^\infty \left( E^x h(Z_t) - E h(Y) \right) dt \qquad (4.2)$$

where $E^x$ denotes expectation with respect to the process $Z$ being started in $x$. However, one still has to prove that the solution (4.2) exists for all desired functions $h \in \mathcal{H}$.

Properties of the solution to the Stein equation

Usually, one tries to give bounds on $f$ and its derivatives (or differences) in terms of $h$ and its derivatives (or differences), that is, inequalities of the form

$$\| D^k f \| \leq C_{k,l} \| D^l h \| \qquad (5.1)$$

for some specific $k, l = 0, 1, 2, \dots$ (typically $k = l + 1$ or $k = l + 2$, depending on the form of the Stein operator), where often $\|\cdot\|$ is the supremum norm. Here, $D^k$ denotes the differential operator of order $k$, but in discrete settings it usually refers to a difference operator. The constants $C_{k,l}$ may contain the parameters of the distribution $Q$. If there are any, they are often referred to as Stein factors.

In the case of (4.1) one can prove for the supremum norm that

$$\|f\|_\infty \leq \min \left\{ \sqrt{\pi/2}\, \|h\|_\infty,\; 2 \|h'\|_\infty \right\}, \quad \|f'\|_\infty \leq \min \left\{ 2 \|h\|_\infty,\; 4 \|h'\|_\infty \right\}, \quad \|f''\|_\infty \leq 2 \|h'\|_\infty \qquad (5.2)$$

where the last bound is of course only applicable if $h$ is differentiable (or at least Lipschitz-continuous, which, for example, is not the case if we regard the total variation metric or the Kolmogorov metric!). As the standard normal distribution has no extra parameters, in this specific case the constants are free of additional parameters.

If we have bounds in the general form (5.1), we usually are able to treat many probability metrics together. One can often start with the next step below, if bounds of the form (5.1) are already available (which is the case for many distributions).

An abstract approximation theorem

We are now in a position to bound the left-hand side of (3.2). As this step heavily depends on the form of the Stein operator, we directly regard the case of the standard normal distribution.

At this point we could directly plug in the random variable $W$, which we want to approximate, and try to find upper bounds. However, it is often fruitful to formulate a more general theorem. Consider here the case of local dependence.

Assume that $W = \sum_{i=1}^n X_i$ is a sum of random variables such that $E W = 0$ and $\operatorname{Var} W = 1$. Assume that, for every $i = 1, \dots, n$, there is a set $A_i \subset \{1, 2, \dots, n\}$, such that $X_i$ is independent of all the random variables $X_j$ with $j \notin A_i$. We call this set the 'neighborhood' of $X_i$. Likewise let $B_i \subset \{1, 2, \dots, n\}$ be a set such that all $X_j$ with $j \in A_i$ are independent of all $X_k$, $k \notin B_i$. We can think of $B_i$ as containing the neighbors in the neighborhood of $X_i$, a second-order neighborhood, so to speak. For a set $A \subset \{1, 2, \dots, n\}$ define now the sum $X_A := \sum_{j \in A} X_j$.

Using Taylor expansion, it is possible to prove that

$$\left| E \left( f'(W) - W f(W) \right) \right| \leq \|f''\|_\infty \sum_{i=1}^n \left( \tfrac{1}{2} E |X_i X_{A_i}^2| + E |X_i X_{A_i} X_{B_i \setminus A_i}| + E |X_i X_{A_i}| \, E |X_{B_i}| \right) \qquad (6.1)$$

Note that, if we follow this line of argument, we can bound (1.1) only for functions $h$ for which $\|h'\|_\infty$ is bounded, because of the third inequality of (5.2) (and in fact, if $h$ has discontinuities, so will $f''$). To obtain a bound similar to (6.1) which contains only the expressions $\|f\|_\infty$ and $\|f'\|_\infty$, the argument is much more involved and the result is not as simple as (6.1); however, it can be done.

Theorem A. If $W$ is as described above, we have for the Lipschitz metric $d_W$ that

$$d_W \big( \mathcal{L}(W), N(0,1) \big) \leq 2 \sum_{i=1}^n \left( \tfrac{1}{2} E |X_i X_{A_i}^2| + E |X_i X_{A_i} X_{B_i \setminus A_i}| + E |X_i X_{A_i}| \, E |X_{B_i}| \right) \qquad (6.2)$$

Proof. Recall that the Lipschitz metric is of the form (1.1), where the functions $h$ are Lipschitz-continuous with Lipschitz constant 1, thus $\|h'\|_\infty \leq 1$. Combining this with (6.1) and the last bound in (5.2) proves the theorem.

Thus, roughly speaking, we have proved that, to calculate the Lipschitz distance between a random variable $W$ with local dependence structure and a standard normal distribution, we only need to know the third moments of the $X_i$ and the size of the neighborhoods $A_i$ and $B_i$.

Application of the theorem

We can treat the case of sums of independent and identically distributed random variables with Theorem A.

Assume that $E X_i = 0$, $\operatorname{Var} X_i = 1$ and $W = n^{-1/2} \sum_{i=1}^n X_i$. We can take $A_i = B_i = \{i\}$. From Theorem A we obtain that

$$d_W \big( \mathcal{L}(W), N(0,1) \big) \leq \frac{3\, E|X_1|^3}{n^{1/2}} \qquad (6.3)$$

(with $A_i = B_i = \{i\}$ the middle term of the bound vanishes, and $E X_i^2 \, E|X_i| \leq E|X_i|^3$ by Lyapunov's inequality).
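A simulation can illustrate this rate. The setup below is an illustrative assumption (not from the article): $X_i$ uniform on $[-\sqrt{3}, \sqrt{3}]$ with $n = 100$, the bound $3\,E|X_1|^3 / n^{1/2}$ obtained from Theorem A with $A_i = B_i = \{i\}$, and the Lipschitz (Wasserstein-1) distance estimated by quantile coupling of an empirical sample of $W$ against exact normal quantiles from the stdlib `NormalDist`:

```python
# Sketch: empirical check that the Wasserstein-1 distance between the
# standardized sum of n i.i.d. uniforms and N(0,1) sits below the
# third-moment bound 3 E|X_1|^3 / sqrt(n) from Theorem A.
import math
import random
from statistics import NormalDist

random.seed(2)
n, N = 100, 20_000
s3 = math.sqrt(3.0)          # Uniform[-sqrt(3), sqrt(3)]: mean 0, variance 1

def sample_W():
    return sum(random.uniform(-s3, s3) for _ in range(n)) / math.sqrt(n)

ws = sorted(sample_W() for _ in range(N))
q = NormalDist().inv_cdf
# Quantile-coupling estimate of the Wasserstein-1 distance.
d_emp = sum(abs(w - q((i + 0.5) / N)) for i, w in enumerate(ws)) / N

bound = 3 * (9 / (4 * s3)) / math.sqrt(n)   # 3 E|X_1|^3 / sqrt(n), E|X_1|^3 = 9/(4*sqrt(3))
print(round(d_emp, 3), round(bound, 3))     # empirical distance well below the bound
```

The empirical distance is much smaller than the bound here, which is expected: for a symmetric summand distribution the true convergence rate is faster than the generic $n^{-1/2}$ guarantee.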

For sums of random variables, another approach related to Stein's method is known as the zero bias transform.

Notes

  1. Stein, C. (1972). "A bound for the error in the normal approximation to the distribution of a sum of dependent random variables". Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2. University of California Press. pp. 583–602. MR 0402873. Zbl 0278.60026.
  2. Charles Stein: The Invariant, the Direct and the "Pretentious". Interview given in 2003 in Singapore. Archived 2007-07-05 at the Wayback Machine.
  3. Chen, L.H.Y. (1975). "Poisson approximation for dependent trials". Annals of Probability. 3 (3): 534–545. doi:10.1214/aop/1176996359. JSTOR 2959474. MR 0428387. Zbl 0335.60016.
  4. Novak, S.Y. (2011). Extreme Value Methods with Applications to Finance. Monographs on Statistics and Applied Probability. Vol. 122. CRC Press. Ch. 12. ISBN 978-1-43983-574-6.


References

Literature

The following text is advanced and gives a comprehensive overview of the normal case:

Another advanced book, with some introductory character, is

A standard reference is the monograph by Stein (1986), which contains a lot of interesting material but may be a little hard to understand at first reading.

Despite the method's age, few standard introductory books about Stein's method are available. The following recent textbook has a chapter (Chapter 2) devoted to introducing Stein's method:

Although the following book is in large part about Poisson approximation, it nevertheless contains a lot of information about the generator approach, in particular in the context of Poisson process approximation.

The following textbook has a chapter (Chapter 10) devoted to introducing Stein's method of Poisson approximation: