Weakly dependent random variables

In probability, weak dependence of random variables is a generalization of independence that is weaker than the concept of a martingale. [citation needed] A (time) sequence of random variables is weakly dependent if distinct blocks of the sequence have a covariance that decays asymptotically to 0 as the blocks are separated further in time. Weak dependence primarily appears as a technical condition in various probabilistic limit theorems.

Formal definition

Fix a set S, a sequence of sets of measurable functions $\mathcal{F} = (\mathcal{F}_d)_{d \in \mathbb{N}}$ with $\mathcal{F}_d \subseteq \{f : S^d \to \mathbb{R}\}$, a decreasing sequence $\theta = (\theta_r)_{r \in \mathbb{N}}$ with $\theta_r \to 0$, and a function $\psi$. A sequence $(X_n)_{n \in \mathbb{N}}$ of S-valued random variables is $(\mathcal{F}, \psi, \theta)$-weakly dependent iff, for all $d, e \in \mathbb{N}$, for all $f \in \mathcal{F}_d$ and $g \in \mathcal{F}_e$, and for all indices $i_1 \le \cdots \le i_d < i_d + r \le j_1 \le \cdots \le j_e$, we have [1] :315

$$\bigl|\operatorname{Cov}\bigl(f(X_{i_1},\ldots,X_{i_d}),\, g(X_{j_1},\ldots,X_{j_e})\bigr)\bigr| \;\le\; \psi(d, e, f, g)\,\theta_r.$$

Note that the covariance is not required to decay to 0 uniformly in d and e. [2] :9
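
As a concrete illustration (not drawn from the cited sources), the sketch below uses a stationary AR(1) process, a standard example of a weakly dependent sequence, and numerically estimates the covariance between bounded Lipschitz functions of a "past" block and a "future" block as the gap r between them grows. The choice of process, functions, and parameters here is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_path(n, phi=0.5, sigma=1.0):
    """Simulate a stationary AR(1) path: X_t = phi * X_{t-1} + eps_t."""
    x = np.empty(n)
    x[0] = rng.normal(scale=sigma / np.sqrt(1.0 - phi**2))  # draw X_0 from the stationary law
    eps = rng.normal(scale=sigma, size=n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def estimated_cov(gap, n_paths=5000, d=3, e=3):
    """Monte Carlo estimate of Cov(f(past block), g(future block)) for blocks separated by `gap`."""
    n = d + gap + e                      # path just long enough to hold both blocks
    start = d - 1 + gap                  # first index of the future block: j_1 = i_d + r
    f_vals, g_vals = [], []
    for _ in range(n_paths):
        x = ar1_path(n)
        f_vals.append(np.tanh(x[:d]).sum())              # bounded Lipschitz f of the past block
        g_vals.append(np.tanh(x[start:start + e]).sum()) # bounded Lipschitz g of the future block
    return float(np.cov(f_vals, g_vals)[0, 1])

for r in (1, 2, 5, 10):
    print(f"gap r = {r:2d}: estimated covariance ≈ {estimated_cov(r):+.5f}")
```

With the autoregressive coefficient set to 0.5 the theoretical covariances decay geometrically in r, so the Monte Carlo estimates shrink toward 0 even for moderate gaps.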

Common applications

Weak dependence is a sufficiently weak condition that many natural instances of stochastic processes exhibit it. [2] :9 In particular, weak dependence is a natural condition for the ergodic theory of random functions. [3]
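
As an indication of the kind of setting meant here (a sketch with notation chosen for this article, not taken verbatim from [3]), an iterated random function is a recursion

$$X_{n+1} = F(X_n, \varepsilon_{n+1}), \qquad (\varepsilon_n)_{n \ge 1} \ \text{i.i.d.},$$

and a standard sufficient condition for a unique stationary solution is contraction on average,

$$\mathbb{E}\,\rho\bigl(F(x, \varepsilon_1), F(y, \varepsilon_1)\bigr) \le c\,\rho(x, y) \quad \text{for some } c < 1,$$

under which covariances between Lipschitz functionals of well-separated blocks of $(X_n)$ typically decay geometrically in the gap.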

Weak dependence is a sufficient substitute for independence in the Lindeberg–Lévy central limit theorem. [1] :315 For this reason, specializations of weak dependence often appear in the probability literature on limit theorems. [2] :153–197 These include Withers' condition for strong mixing, [1] [4] Tran's "absolute regularity in the locally transitive sense", [5] and Birkel's "asymptotic quadrant independence". [6]
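
To make the role of the dependence condition concrete, a commonly stated form of the central limit theorem for a stationary, centered, weakly dependent sequence (a sketch under standard stationarity and summability assumptions, not the exact hypotheses of [1]) is

$$\frac{1}{\sqrt{n}} \sum_{k=1}^{n} X_k \ \xrightarrow{\ d\ } \ \mathcal{N}(0, \sigma^2), \qquad \sigma^2 = \operatorname{Var}(X_0) + 2 \sum_{r=1}^{\infty} \operatorname{Cov}(X_0, X_r),$$

where the weak-dependence coefficients $\theta_r$ are used to show that the covariance series converges and that the dependence across distant blocks is asymptotically negligible.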

Weak dependence also functions as a substitute for strong mixing. [7] Again, generalizations of the latter are specializations of the former; an example is Rosenblatt's mixing condition. [8]
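
For comparison, one common statement of Rosenblatt's strong (α-)mixing condition [8] for a strictly stationary sequence is

$$\alpha(r) = \sup_{n} \ \sup_{A \in \sigma(X_k : k \le n),\ B \in \sigma(X_k : k \ge n + r)} \bigl| P(A \cap B) - P(A)\,P(B) \bigr| \ \longrightarrow \ 0 \quad \text{as } r \to \infty,$$

a uniform bound over all events generated by the past and the distant future, whereas weak dependence only controls covariances of functions taken from the chosen classes $\mathcal{F}_d$.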

Other uses include generalizations of the Marcinkiewicz–Zygmund inequality and of Rosenthal inequalities. [1] :314,319
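
For orientation, the classical Rosenthal inequality for independent, centered random variables $X_1, \ldots, X_n$ with finite $p$-th moments, $p \ge 2$, reads

$$\mathbb{E}\Bigl|\sum_{i=1}^{n} X_i\Bigr|^{p} \ \le \ C_p \biggl( \sum_{i=1}^{n} \mathbb{E}|X_i|^{p} + \Bigl( \sum_{i=1}^{n} \mathbb{E} X_i^{2} \Bigr)^{p/2} \biggr);$$

roughly speaking, the versions in [1] relax independence to weak dependence, with additional terms and constants that involve the coefficients $\theta_r$.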

Martingales are weakly dependent, [citation needed] so many results about martingales also hold for weakly dependent sequences. An example is Bernstein's bound on higher moments, which can be relaxed to a weaker conditional moment condition. [9] [10]
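
A one-line computation indicates why martingale-type sequences fit this framework (this sketch concerns martingale difference sequences and a linear "future" function, not the full definition above): if $(d_n)$ is adapted to a filtration $(\mathcal{F}_n)$ with $\mathbb{E}[d_n \mid \mathcal{F}_{n-1}] = 0$, then for bounded measurable $f$ and indices $i_1 \le \cdots \le i_d < j$,

$$\operatorname{Cov}\bigl(f(d_{i_1}, \ldots, d_{i_d}),\, d_j\bigr) = \mathbb{E}\bigl[f(d_{i_1}, \ldots, d_{i_d})\,\mathbb{E}[d_j \mid \mathcal{F}_{j-1}]\bigr] = 0,$$

so these covariances vanish identically.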


References

  1. Doukhan, Paul; Louhichi, Sana (1999-12-01). "A new weak dependence condition and applications to moment inequalities". Stochastic Processes and Their Applications. 84 (2): 313–342. doi:10.1016/S0304-4149(99)00055-1. ISSN 0304-4149.
  2. Dedecker, Jérôme; Doukhan, Paul; Lang, Gabriel; León, José Rafael; Louhichi, Sana; Prieur, Clémentine (2007). Weak Dependence: With Examples and Applications. Lecture Notes in Statistics. Vol. 190. doi:10.1007/978-0-387-69952-3. ISBN 978-0-387-69951-6.
  3. Wu, Wei Biao; Shao, Xiaofeng (June 2004). "Limit theorems for iterated random functions". Journal of Applied Probability. 41 (2): 425–436. doi:10.1239/jap/1082999076. ISSN 0021-9002. S2CID 335616.
  4. Withers, C. S. (December 1981). "Conditions for linear processes to be strong-mixing". Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete. 57 (4): 477–480. doi:10.1007/bf01025869. ISSN 0044-3719. S2CID 122082639.
  5. Tran, Lanh Tat (1990). "Recursive kernel density estimators under a weak dependence condition". Annals of the Institute of Statistical Mathematics. 42 (2): 305–329. doi:10.1007/bf00050839. ISSN 0020-3157. S2CID 120632192.
  6. Birkel, Thomas (1992-07-11). "Laws of large numbers under dependence assumptions". Statistics & Probability Letters. 14 (5): 355–362. doi:10.1016/0167-7152(92)90096-N. ISSN 0167-7152.
  7. Wu, Wei Biao (2005-10-04). "Nonlinear system theory: Another look at dependence". Proceedings of the National Academy of Sciences. 102 (40): 14150–14154. Bibcode:2005PNAS..10214150W. doi:10.1073/pnas.0506715102. ISSN 0027-8424. PMC 1242319. PMID 16179388.
  8. Rosenblatt, M. (1956-01-01). "A Central Limit Theorem and a Strong Mixing Condition". Proceedings of the National Academy of Sciences. 42 (1): 43–47. Bibcode:1956PNAS...42...43R. doi:10.1073/pnas.42.1.43. ISSN 0027-8424. PMC 534230. PMID 16589813.
  9. Fan, X.; Grama, I.; Liu, Q. (2015). "Exponential inequalities for martingales with applications". Electronic Journal of Probability. 20: 1–22. arXiv:1311.6273. doi:10.1214/EJP.v20-3496. S2CID 119713171.
  10. Bernstein, Serge (December 1927). "Sur l'extension du théorème limite du calcul des probabilités aux sommes de quantités dépendantes" [On the extension of the limit theorem of the calculus of probabilities to sums of dependent quantities]. Mathematische Annalen (in French). 97 (1): 1–59. doi:10.1007/bf01447859. ISSN 0025-5831. S2CID 122172457.