In mathematics, the limit inferior and limit superior of a sequence can be thought of as limiting (i.e., eventual and extreme) bounds on the sequence. They can be thought of in a similar fashion for a function (see limit of a function). For a set, they are the infimum and supremum of the set's limit points, respectively. In general, when there are multiple objects around which a sequence, function, or set accumulates, the inferior and superior limits extract the smallest and largest of them; the type of object and the measure of size is context-dependent, but the notion of extreme limits is invariant. Limit inferior is also called infimum limit, limit infimum, liminf, inferior limit, lower limit, or inner limit; limit superior is also known as supremum limit, limit supremum, limsup, superior limit, upper limit, or outer limit.
The limit inferior of a sequence $(x_n)$ is denoted by
$$\liminf_{n\to\infty} x_n.$$
The limit superior of a sequence $(x_n)$ is denoted by
$$\limsup_{n\to\infty} x_n.$$
The limit inferior of a sequence $(x_n)$ is defined by
$$\liminf_{n\to\infty} x_n := \lim_{n\to\infty}\Bigl(\inf_{m\ge n} x_m\Bigr)$$
or
$$\liminf_{n\to\infty} x_n := \sup_{n\ge 0}\,\inf_{m\ge n} x_m = \sup\{\,\inf\{x_m : m\ge n\} : n\ge 0\,\}.$$
Similarly, the limit superior of $(x_n)$ is defined by
$$\limsup_{n\to\infty} x_n := \lim_{n\to\infty}\Bigl(\sup_{m\ge n} x_m\Bigr)$$
or
$$\limsup_{n\to\infty} x_n := \inf_{n\ge 0}\,\sup_{m\ge n} x_m = \inf\{\,\sup\{x_m : m\ge n\} : n\ge 0\,\}.$$
Alternatively, the notations $\varliminf_{n\to\infty} x_n := \liminf_{n\to\infty} x_n$ and $\varlimsup_{n\to\infty} x_n := \limsup_{n\to\infty} x_n$ are sometimes used.
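The definition via tails of the sequence lends itself to a direct numerical sketch (illustrative only, not part of the article; the helper name `tail_inf_sup` is made up): over a finite prefix, the tail infima increase toward the limit inferior and the tail suprema decrease toward the limit superior.

```python
import math

def tail_inf_sup(x):
    """For a finite prefix x[0..N-1], return the sequences
    inf_{m>=n} x_m and sup_{m>=n} x_m for n = 0..N-1."""
    n = len(x)
    infs = [0.0] * n
    sups = [0.0] * n
    running_inf, running_sup = math.inf, -math.inf
    # Sweep from the end: the tail inf/sup at n depends only on x[n:],
    # so accumulate backwards in one pass.
    for i in range(n - 1, -1, -1):
        running_inf = min(running_inf, x[i])
        running_sup = max(running_sup, x[i])
        infs[i] = running_inf
        sups[i] = running_sup
    return infs, sups

# x_n = (-1)^n (1 + 1/(n+1)): liminf = -1, limsup = 1
x = [(-1) ** n * (1 + 1 / (n + 1)) for n in range(1000)]
infs, sups = tail_inf_sup(x)
# Tail infima are non-decreasing, tail suprema non-increasing, so the
# two sequences squeeze in on the liminf and limsup.
print(infs[900], sups[900])  # close to -1 and 1
```

The backward sweep makes the monotonicity of the tail bounds visible: `infs` never decreases and `sups` never increases, which is exactly why the limits in the definition exist.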
The limits superior and inferior can equivalently be defined using the concept of subsequential limits of the sequence $(x_n)$. [1] An element $\xi$ of the extended real numbers $\overline{\mathbb{R}}$ is a subsequential limit of $(x_n)$ if there exists a strictly increasing sequence of natural numbers $(n_k)$ such that $\xi = \lim_{k\to\infty} x_{n_k}$. If $E \subseteq \overline{\mathbb{R}}$ is the set of all subsequential limits of $(x_n)$, then
$$\limsup_{n\to\infty} x_n = \sup E$$
and
$$\liminf_{n\to\infty} x_n = \inf E.$$
If the terms in the sequence are real numbers, the limit superior and limit inferior always exist, as the real numbers together with ±∞ (i.e., the extended real number line) form a complete lattice. More generally, these definitions make sense in any partially ordered set, provided the suprema and infima exist, such as in a complete lattice.
Whenever the ordinary limit exists, the limit inferior and limit superior are both equal to it; therefore, each can be considered a generalization of the ordinary limit which is primarily interesting in cases where the limit does not exist. Whenever lim inf xn and lim sup xn both exist, we have
$$\liminf_{n\to\infty} x_n \le \limsup_{n\to\infty} x_n.$$
Limits inferior/superior are related to big-O notation in that they bound a sequence only "in the limit"; the sequence may exceed the bound. However, with big-O notation the sequence can only exceed the bound in a finite prefix of the sequence, whereas the limit superior of a sequence like $e^{-n}$ may actually be less than all elements of the sequence. The only promise made is that some tail of the sequence can be bounded above by the limit superior plus an arbitrarily small positive constant, and bounded below by the limit inferior minus an arbitrarily small positive constant.
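A quick sketch of this contrast (illustrative only): for $x_n = e^{-n}$, every term is strictly positive, yet the tail supremum starting at index $n$ is $e^{-n}$, which decreases to 0, so the limit superior is 0, strictly below every element of the sequence.

```python
import math

# x_n = e^{-n}: every term is strictly positive, yet the tail suprema
# sup_{m>=n} x_m = e^{-n} decrease to 0, so lim sup x_n = 0, which lies
# strictly below every element of the sequence.
x = [math.exp(-n) for n in range(50)]
tail_sups = [max(x[n:]) for n in range(len(x))]
print(all(t > 0 for t in x))  # every term exceeds the limsup (0)
print(tail_sups[40])          # tiny: equals e^{-40}
```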
The limit superior and limit inferior of a sequence are a special case of those of a function (see below).
In mathematical analysis, limit superior and limit inferior are important tools for studying sequences of real numbers. Since the supremum and infimum of an unbounded set of real numbers may not exist (the reals are not a complete lattice), it is convenient to consider sequences in the affinely extended real number system: we add the positive and negative infinities to the real line to give the complete totally ordered set [−∞,∞], which is a complete lattice.
Consider a sequence consisting of real numbers. Assume that the limit superior and limit inferior are real numbers (so, not infinite).
The relationship of limit inferior and limit superior for sequences of real numbers is as follows:
$$\limsup_{n\to\infty}(-x_n) = -\liminf_{n\to\infty} x_n.$$
As mentioned earlier, it is convenient to extend the real numbers to [−∞,∞]. Then, (xn) in [−∞,∞] converges if and only if
$$\liminf_{n\to\infty} x_n = \limsup_{n\to\infty} x_n,$$
in which case $\lim_{n\to\infty} x_n$ is equal to their common value. (Note that when working just in $\mathbb{R}$, convergence to −∞ or ∞ would not be considered as convergence.) Since the limit inferior is at most the limit superior, the following conditions hold:
$$\liminf_{n\to\infty} x_n = \infty \implies \lim_{n\to\infty} x_n = \infty,$$
$$\limsup_{n\to\infty} x_n = -\infty \implies \lim_{n\to\infty} x_n = -\infty.$$
If $I = \liminf_{n\to\infty} x_n$ and $S = \limsup_{n\to\infty} x_n$, then the interval [I, S] need not contain any of the numbers xn, but every slight enlargement [I − ε, S + ε] (for arbitrarily small ε > 0) will contain xn for all but finitely many indices n. In fact, the interval [I, S] is the smallest closed interval with this property. We can formalize this property like this: there exist subsequences $(x_{k_n})$ and $(x_{h_n})$ of $(x_n)$ (where $(k_n)$ and $(h_n)$ are strictly increasing) for which we have
$$\lim_{n\to\infty} x_{h_n} = I, \qquad \lim_{n\to\infty} x_{k_n} = S.$$
On the other hand, there exists an $n_0 \in \mathbb{N}$ so that for all $n \ge n_0$
$$I - \varepsilon < x_n < S + \varepsilon.$$
To recapitulate: if Λ is greater than the limit superior, there are at most finitely many $x_n$ greater than Λ; if it is less, there are infinitely many. Likewise, if Λ is less than the limit inferior, there are at most finitely many $x_n$ less than Λ; if it is greater, there are infinitely many.
In general we have that
$$\inf_n x_n \le \liminf_{n\to\infty} x_n \le \limsup_{n\to\infty} x_n \le \sup_n x_n.$$
The liminf and limsup of a sequence are respectively the smallest and greatest cluster points.
The limit superior satisfies subadditivity:
$$\limsup_{n\to\infty}(a_n + b_n) \le \limsup_{n\to\infty} a_n + \limsup_{n\to\infty} b_n$$
whenever the right-hand side is not of the form $\infty - \infty$. Analogously, the limit inferior satisfies superadditivity:
$$\liminf_{n\to\infty}(a_n + b_n) \ge \liminf_{n\to\infty} a_n + \liminf_{n\to\infty} b_n.$$
In the particular case that one of the sequences actually converges, say $a_n \to a$, then the inequalities above become equalities (with $\limsup_{n\to\infty} a_n$ or $\liminf_{n\to\infty} a_n$ being replaced by $a$).
If $(a_n)$ and $(b_n)$ are sequences of non-negative real numbers, the inequalities
$$\limsup_{n\to\infty}(a_n b_n) \le \Bigl(\limsup_{n\to\infty} a_n\Bigr)\Bigl(\limsup_{n\to\infty} b_n\Bigr)$$
and
$$\liminf_{n\to\infty}(a_n b_n) \ge \Bigl(\liminf_{n\to\infty} a_n\Bigr)\Bigl(\liminf_{n\to\infty} b_n\Bigr)$$
hold whenever the right-hand side is not of the form $0 \cdot \infty$.
If $A = \lim_{n\to\infty} a_n$ exists (including the case $A = +\infty$) and $B = \limsup_{n\to\infty} b_n$, then $\limsup_{n\to\infty}(a_n b_n) = AB$ provided that $AB$ is not of the form $0 \cdot \infty$.
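The additive inequalities above can be checked on eventually periodic sequences, for which the limit superior and inferior are just the maximum and minimum over one period (a toy sketch, not from the article; the helper names are made up):

```python
# For a periodic sequence the tail sup/inf are constant, so limsup and
# liminf are simply the max and min over one period.
def limsup_periodic(cycle):
    return max(cycle)

def liminf_periodic(cycle):
    return min(cycle)

a_cycle = [1, -1]   # a_n = (-1)^n
b_cycle = [-1, 1]   # b_n = -a_n
sum_cycle = [u + v for u, v in zip(a_cycle, b_cycle)]  # a_n + b_n = 0

# Subadditivity can be strict: limsup(a+b) = 0 while limsup a + limsup b = 2.
print(limsup_periodic(sum_cycle), limsup_periodic(a_cycle) + limsup_periodic(b_cycle))
# Superadditivity likewise: liminf(a+b) = 0 while liminf a + liminf b = -2.
print(liminf_periodic(sum_cycle), liminf_periodic(a_cycle) + liminf_periodic(b_cycle))
```

The example also shows why equality needs one sequence to converge: neither $(-1)^n$ nor its negative converges, and both inequalities are strict.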
As an example, consider the sequence given by the sine function:
$$\liminf_{n\to\infty} \sin n = -1$$
and
$$\limsup_{n\to\infty} \sin n = 1.$$
(This is because the sequence {1,2,3,...} is equidistributed mod 2π, a consequence of the equidistribution theorem.)
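This behavior can be observed numerically (an illustrative sketch, not from the article): over a long initial segment, the running maximum and minimum of sin n creep toward 1 and −1 without ever reaching them.

```python
import math

# Because n mod 2π is equidistributed, some n land arbitrarily close to
# π/2 + 2πk and to 3π/2 + 2πk, so sin n gets arbitrarily close to ±1
# (though never equal, since n is never exactly at those points).
values = [math.sin(n) for n in range(1, 100001)]
print(max(values), min(values))  # very close to 1 and -1, but not equal
```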
An example from number theory is
$$\liminf_{n\to\infty}(p_{n+1} - p_n),$$
where pn is the n-th prime number. The value of this limit inferior is conjectured to be 2 – this is the twin prime conjecture – but as of April 2014 has only been proven to be less than or equal to 246. [2] The corresponding limit superior is $+\infty$, because there are arbitrarily large gaps between consecutive primes.
Assume that a function is defined from a subset of the real numbers to the real numbers. As in the case for sequences, the limit inferior and limit superior are always well-defined if we allow the values +∞ and −∞; in fact, if both agree then the limit exists and is equal to their common value (again possibly including the infinities). For example, given f(x) = sin(1/x), we have lim sup_{x→0} f(x) = 1 and lim inf_{x→0} f(x) = −1. The difference between the two is a rough measure of how "wildly" the function oscillates, and in observation of this fact, it is called the oscillation of f at 0. This idea of oscillation is sufficient to, for example, characterize Riemann-integrable functions as continuous except on a set of measure zero. [3] Note that the points of nonzero oscillation (i.e., points at which f is "badly behaved") are exactly the discontinuities of f, so the characterization says that f is Riemann-integrable precisely when its discontinuities are confined to a set of measure zero.
There is a notion of lim sup and lim inf for functions defined on a metric space whose relationship to limits of real-valued functions mirrors that of the relation between the lim sup, lim inf, and the limit of a real sequence. Take a metric space X, a subspace E contained in X, and a function f : E → Y (with Y an ordered set in which the relevant suprema and infima exist, e.g. the extended real line). Define, for any limit point a of E,
$$\limsup_{x\to a} f(x) = \lim_{\varepsilon\to 0}\Bigl(\sup\{\, f(x) : x \in E \cap B(a;\varepsilon)\setminus\{a\} \,\}\Bigr)$$
and
$$\liminf_{x\to a} f(x) = \lim_{\varepsilon\to 0}\Bigl(\inf\{\, f(x) : x \in E \cap B(a;\varepsilon)\setminus\{a\} \,\}\Bigr),$$
where B(a;ε) denotes the metric ball of radius ε about a.
Note that as ε shrinks, the supremum of the function over the ball is monotone decreasing, so we have
$$\limsup_{x\to a} f(x) = \inf_{\varepsilon > 0}\Bigl(\sup\{\, f(x) : x \in E \cap B(a;\varepsilon)\setminus\{a\} \,\}\Bigr)$$
and similarly
$$\liminf_{x\to a} f(x) = \sup_{\varepsilon > 0}\Bigl(\inf\{\, f(x) : x \in E \cap B(a;\varepsilon)\setminus\{a\} \,\}\Bigr).$$
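These shrinking-ball suprema and infima can be probed numerically for the earlier example f(x) = sin(1/x) at a = 0 (an illustrative sketch; `sup_inf_on_ball` is a made-up helper that samples the punctured interval):

```python
import math

def sup_inf_on_ball(f, a, eps, samples=200001):
    """Crude numeric sup/inf of f over the punctured ball (a-eps, a+eps) minus {a}."""
    pts = [a - eps + 2 * eps * k / (samples - 1) for k in range(samples)]
    vals = [f(x) for x in pts if x != a]
    return max(vals), min(vals)

f = lambda x: math.sin(1 / x)

# As the radius shrinks, the sup over the ball stays near 1 and the inf
# near -1, so lim sup_{x->0} f(x) = 1 and lim inf_{x->0} f(x) = -1.
results = {eps: sup_inf_on_ball(f, 0.0, eps) for eps in (1.0, 0.1, 0.01)}
for eps, (s, i) in results.items():
    print(eps, round(s, 3), round(i, 3))
```

Because sin(1/x) completes infinitely many oscillations in every punctured neighborhood of 0, the sampled supremum does not decrease toward f(0) as ε shrinks; this is exactly the nonzero oscillation described above.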
This finally motivates the definitions for general topological spaces. Take X, Y, E and a as before, but now let X and Y both be topological spaces. In this case, we replace metric balls with neighborhoods:
$$\limsup_{x\to a} f(x) = \inf\bigl\{\, \sup\{\, f(x) : x \in E \cap U\setminus\{a\} \,\} : U \text{ open},\ a \in U,\ E \cap U\setminus\{a\} \ne \emptyset \,\bigr\},$$
$$\liminf_{x\to a} f(x) = \sup\bigl\{\, \inf\{\, f(x) : x \in E \cap U\setminus\{a\} \,\} : U \text{ open},\ a \in U,\ E \cap U\setminus\{a\} \ne \emptyset \,\bigr\}.$$
(The formula can also be written with "lim" using nets and the neighborhood filter.) This version is often useful in discussions of semi-continuity, which crop up in analysis quite often. An interesting note is that this version subsumes the sequential version by considering sequences as functions from the natural numbers, viewed as a topological subspace of the extended real line, into the space (the closure of N in [−∞,∞], the extended real number line, is N ∪ {∞}).
The power set ℘(X) of a set X is a complete lattice that is ordered by set inclusion, and so the supremum and infimum of any set of subsets (in terms of set inclusion) always exist. In particular, every subset Y of X is bounded above by X and below by the empty set ∅ because ∅ ⊆ Y ⊆ X. Hence, it is possible (and sometimes useful) to consider superior and inferior limits of sequences in ℘(X) (i.e., sequences of subsets of X).
There are two common ways to define the limit of sequences of sets. In both cases, the sequence accumulates around sets of points rather than single points: the superior (outer) limit and the inferior (inner) limit always exist, the inner limit is contained in the outer limit, and the limit of the sequence exists precisely when the two agree.
The difference between the two definitions involves how the topology (i.e., how to quantify separation) is defined. In fact, the second definition is identical to the first when the discrete metric is used to induce the topology on X.
In this case, a sequence of sets approaches a limiting set when the elements of each member of the sequence approach the elements of the limiting set. In particular, if {Xn} is a sequence of subsets of X, then: lim sup Xn, also called the outer limit, consists of those elements which are limits of points in Xn taken from (countably) infinitely many n; and lim inf Xn, also called the inner limit, consists of those elements which are limits of points xn ∈ Xn for all but finitely many n.
The limit lim Xn exists if and only if lim inf Xn and lim sup Xn agree, in which case lim Xn = lim sup Xn = lim inf Xn. [4]
This is the definition used in measure theory and probability. Further discussion and examples from the set-theoretic point of view, as opposed to the topological point of view discussed below, are at set-theoretic limit.
By this definition, a sequence of sets approaches a limiting set when the limiting set includes elements which are in all except finitely many sets of the sequence and does not include elements which are in all except finitely many complements of sets of the sequence. That is, this case specializes the general definition when the topology on set X is induced from the discrete metric.
Specifically, for points x ∈ X and y ∈ X, the discrete metric is defined by
$$d(x,y) = \begin{cases} 0 & \text{if } x = y, \\ 1 & \text{if } x \ne y, \end{cases}$$
under which a sequence of points {xk} converges to point x ∈ X if and only if xk = x for all except finitely many k. Therefore, if the limit set exists it contains the points and only the points which are in all except finitely many of the sets of the sequence. Since convergence in the discrete metric is the strictest form of convergence (i.e., requires the most), this definition of a limit set is the strictest possible.
If {Xn} is a sequence of subsets of X, then the following always exist:
$$\limsup_{n\to\infty} X_n = \bigcap_{n\ge 1} \bigcup_{j\ge n} X_j,$$
$$\liminf_{n\to\infty} X_n = \bigcup_{n\ge 1} \bigcap_{j\ge n} X_j.$$
Observe that $x \in \limsup X_n$ if and only if $x \notin \liminf X_n^c$.
In this sense, the sequence has a limit so long as every point in X either appears in all except finitely many $X_n$ or appears in all except finitely many $X_n^c$. [5]
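The outer and inner limits (elements lying in infinitely many sets, and elements lying in all but finitely many sets, respectively) can be computed exactly for an eventually periodic sequence of sets, since the tails stabilize (a sketch, not from the article; the helper names and the explicit period argument are made up):

```python
# limsup X_n: intersection over n of the tail unions U_{j>=n} X_j
# liminf X_n: union over n of the tail intersections I_{j>=n} X_j
# For an eventually periodic sequence, stopping while a full period still
# remains in the tail gives the exact answer, since the infinite sequence
# just keeps repeating that cycle.
def tail_union(sets, n):
    return set().union(*sets[n:])

def tail_inter(sets, n):
    out = set(sets[n])
    for s in sets[n + 1:]:
        out &= s
    return out

def set_limsup(sets, period):
    return set.intersection(*(tail_union(sets, n)
                              for n in range(len(sets) - period + 1)))

def set_liminf(sets, period):
    return set.union(*(tail_inter(sets, n)
                       for n in range(len(sets) - period + 1)))

# Transient {0}, then alternate {1,2}, {2,3} forever (period 2):
seq = [{0}] + [{1, 2} if n % 2 == 0 else {2, 3} for n in range(20)]
print(set_limsup(seq, 2))  # {1, 2, 3}: each occurs in infinitely many X_n
print(set_liminf(seq, 2))  # {2}: only 2 lies in all but finitely many X_n
```

Note that the transient element 0 appears in neither limit, and that the inner limit is contained in the outer one, as the general theory requires.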
Using the standard parlance of set theory, set inclusion provides a partial ordering on the collection of all subsets of X that allows set intersection to generate a greatest lower bound and set union to generate a least upper bound. Thus, the infimum or meet of a collection of subsets is the greatest lower bound while the supremum or join is the least upper bound. In this context, the inner limit, lim inf Xn, is the largest meeting of tails of the sequence, and the outer limit, lim sup Xn, is the smallest joining of tails of the sequence. The following makes this precise.
The following are several set convergence examples. They have been broken into sections with respect to the metric used to induce the topology on set X.
The above definitions are inadequate for many technical applications. In fact, the definitions above are specializations of the following definitions.
The limit inferior of a set X ⊆ Y is the infimum of all of the limit points of the set. That is,
$$\liminf X := \inf\{\, x \in Y : x \text{ is a limit point of } X \,\}.$$
Similarly, the limit superior of a set X is the supremum of all of the limit points of the set. That is,
$$\limsup X := \sup\{\, x \in Y : x \text{ is a limit point of } X \,\}.$$
Note that the set X needs to be defined as a subset of a partially ordered set Y that is also a topological space in order for these definitions to make sense. Moreover, it has to be a complete lattice so that the suprema and infima always exist. In that case every set has a limit superior and a limit inferior. Also note that the limit inferior and the limit superior of a set do not have to be elements of the set.
Take a topological space X and a filter base B in that space. The set of all cluster points for that filter base is given by
$$\bigcap\{\,\overline{B}_0 : B_0 \in B\,\},$$
where $\overline{B}_0$ is the closure of $B_0$. This is clearly a closed set and is similar to the set of limit points of a set. Assume that X is also a partially ordered set. The limit superior of the filter base B is defined as
$$\limsup B := \sup \bigcap\{\,\overline{B}_0 : B_0 \in B\,\}$$
when that supremum exists. When X has a total order, is a complete lattice, and has the order topology,
$$\limsup B = \inf\{\,\sup B_0 : B_0 \in B\,\}.$$
Similarly, the limit inferior of the filter base B is defined as
$$\liminf B := \inf \bigcap\{\,\overline{B}_0 : B_0 \in B\,\}$$
when that infimum exists; if X is totally ordered, is a complete lattice, and has the order topology, then
$$\liminf B = \sup\{\,\inf B_0 : B_0 \in B\,\}.$$
If the limit inferior and limit superior agree, then there must be exactly one cluster point and the limit of the filter base is equal to this unique cluster point.
Note that filter bases are generalizations of nets, which are generalizations of sequences. Therefore, these definitions give the limit inferior and limit superior of any net (and thus any sequence) as well. For example, take a topological space X and the net $(x_\alpha)_{\alpha \in A}$, where A is a directed set and $x_\alpha \in X$ for all $\alpha \in A$. The filter base ("of tails") generated by this net is defined by
$$B := \bigl\{\,\{x_\alpha : \alpha \ge \alpha_0\} : \alpha_0 \in A\,\bigr\}.$$
Therefore, the limit inferior and limit superior of the net are equal to the limit inferior and limit superior of B, respectively. Similarly, for a topological space X, take the sequence $(x_k)$ where $x_k \in X$ for any $k \in \mathbb{N}$, with $\mathbb{N}$ being the set of natural numbers. The filter base ("of tails") generated by this sequence is defined by
$$C := \bigl\{\,\{x_k : k \ge n\} : n \in \mathbb{N}\,\bigr\}.$$
Therefore, the limit inferior and limit superior of the sequence are equal to the limit inferior and limit superior of C, respectively.
In mathematics, more specifically calculus, L'Hôpital's rule or L'Hospital's rule provides a technique to evaluate limits of indeterminate forms. Application of the rule often converts an indeterminate form to an expression that can be easily evaluated by substitution. The rule is named after the 17th-century French mathematician Guillaume de l'Hôpital. Although the rule is often attributed to L'Hôpital, the theorem was first introduced to him in 1694 by the Swiss mathematician Johann Bernoulli.
In mathematics, the infimum of a subset S of a partially ordered set P is the greatest element in P that is less than or equal to all elements of S, if such an element exists. Consequently, the term greatest lower bound is also commonly used.
In probability theory, the Borel–Cantelli lemma is a theorem about sequences of events. In general, it is a result in measure theory. It is named after Émile Borel and Francesco Paolo Cantelli, who stated the lemma in the first decades of the 20th century. A related result, sometimes called the second Borel–Cantelli lemma, is a partial converse of the first Borel–Cantelli lemma. The lemma states that, under certain conditions, an event will have probability of either zero or one. Accordingly, it is the best-known of a class of similar theorems, known as zero-one laws. Other examples include Kolmogorov's zero–one law and the Hewitt–Savage zero–one law.
In probability theory, there exist several different notions of convergence of random variables. The convergence of sequences of random variables to some limit random variable is an important concept in probability theory, and its applications to statistics and stochastic processes. The same concepts are known in more general mathematics as stochastic convergence and they formalize the idea that a sequence of essentially random or unpredictable events can sometimes be expected to settle down into a behavior that is essentially unchanging when items far enough into the sequence are studied. The different possible notions of convergence relate to how such a behavior can be characterized: two readily understood behaviors are that the sequence eventually takes a constant value, and that values in the sequence continue to change but can be described by an unchanging probability distribution.
In mathematical analysis, semi-continuity is a property of extended real-valued functions that is weaker than continuity. An extended real-valued function f is upper semi-continuous at a point x0 if, roughly speaking, the function values for arguments near x0 are not much higher than f(x0).
In the mathematical field of real analysis, the monotone convergence theorem is any of a number of related theorems proving the convergence of monotonic sequences that are also bounded. Informally, the theorems state that if a sequence is increasing and bounded above by a supremum, then the sequence will converge to the supremum; in the same way, if a sequence is decreasing and is bounded below by an infimum, it will converge to the infimum.
In mathematics, Fatou's lemma establishes an inequality relating the Lebesgue integral of the limit inferior of a sequence of functions to the limit inferior of integrals of these functions. The lemma is named after Pierre Fatou.
In mathematics, the limit of a sequence of sets A1, A2, ... is a set whose elements are determined by the sequence in either of two equivalent ways: (1) by upper and lower bounds on the sequence that converge monotonically to the same set and (2) by convergence of a sequence of indicator functions which are themselves real-valued. As is the case with sequences of other objects, convergence is not necessary or even usual.
In mathematics, the oscillation of a function or a sequence is a number that quantifies how much that sequence or function varies between its extreme values as it approaches infinity or a point. As is the case with limits, there are several definitions that put the intuitive concept into a form suitable for a mathematical treatment: oscillation of a sequence of real numbers, oscillation of a real-valued function at a point, and oscillation of a function on an interval.
In mathematics and, specifically, real analysis, the Dini derivatives are a class of generalizations of the derivative. They were introduced by Ulisse Dini who studied continuous but nondifferentiable functions, for which he defined the so-called Dini derivatives.
In mathematics, the Stolz–Cesàro theorem is a criterion for proving the convergence of a sequence. The theorem is named after mathematicians Otto Stolz and Ernesto Cesàro, who stated and proved it for the first time.
In the calculus of variations, Γ-convergence (Gamma-convergence) is a notion of convergence for functionals. It was introduced by Ennio de Giorgi.
In mathematics, the Fatou–Lebesgue theorem establishes a chain of inequalities relating the integrals of the limit inferior and the limit superior of a sequence of functions to the limit inferior and the limit superior of integrals of these functions. The theorem is named after Pierre Fatou and Henri Léon Lebesgue.
In mathematics, more specifically measure theory, there are various notions of the convergence of measures. For an intuitive general sense of what is meant by convergence in measure, consider a sequence of measures μn on a space, sharing a common collection of measurable sets. Such a sequence might represent an attempt to construct 'better and better' approximations to a desired measure μ that is difficult to obtain directly. The meaning of 'better and better' is subject to all the usual caveats for taking limits; for any error tolerance ε > 0 we require there be N sufficiently large for n ≥ N to ensure the 'difference' between μn and μ is smaller than ε. Various notions of convergence specify precisely what the word 'difference' should mean in that description; these notions are not equivalent to one another, and vary in strength.
In mathematical analysis, Mosco convergence is a notion of convergence for functionals that is used in nonlinear analysis and set-valued analysis. It is a particular case of Γ-convergence. Mosco convergence is sometimes phrased as “weak Γ-liminf and strong Γ-limsup” convergence since it uses both the weak and strong topologies on a topological vector space X. In finite dimensional spaces, Mosco convergence coincides with epi-convergence.
In mathematics, the limit comparison test (LCT) is a method of testing for the convergence of an infinite series.
In mathematics, Schilder's theorem is a result in the large deviations theory of stochastic processes. Roughly speaking, Schilder's theorem gives an estimate for the probability that a (scaled-down) sample path of Brownian motion will stray far from the mean path. This statement is made precise using rate functions. Schilder's theorem is generalized by the Freidlin–Wentzell theorem for Itō diffusions.
In mathematics, specifically in number theory, the extremal orders of an arithmetic function are best possible bounds of the given arithmetic function. Specifically, if f(n) is an arithmetic function and m(n) is a non-decreasing function that is ultimately positive and
$$\limsup_{n\to\infty} \frac{f(n)}{m(n)} = 1,$$
then m is said to be a maximal order for f.
This article is supplemental for “Convergence of random variables” and provides proofs for selected results.
In probability theory, Kolmogorov's two-series theorem is a result about the convergence of random series. It follows from Kolmogorov's inequality and is used in one proof of the strong law of large numbers.