In multivariable calculus, an iterated limit is a limit of a sequence or a limit of a function in the form
$$\lim_{m\to\infty}\lim_{n\to\infty} a_{n,m} = \lim_{m\to\infty}\left(\lim_{n\to\infty} a_{n,m}\right),$$
$$\lim_{y\to b}\lim_{x\to a} f(x,y) = \lim_{y\to b}\left(\lim_{x\to a} f(x,y)\right),$$
or other similar forms.
An iterated limit is only defined for an expression whose value depends on at least two variables. To evaluate such a limit, one takes the limit as one of the two variables approaches some number, obtaining an expression whose value depends only on the other variable, and then takes the limit as the other variable approaches some number.
This section introduces definitions of iterated limits in two variables. These generalize easily to multiple variables.
For each $n, m \in \mathbb{N}$, let $a_{n,m} \in \mathbb{R}$ be a real double sequence. Then there are two forms of iterated limits, namely
$$\lim_{m\to\infty}\lim_{n\to\infty} a_{n,m} \quad\text{and}\quad \lim_{n\to\infty}\lim_{m\to\infty} a_{n,m}.$$
For example, let
$$a_{n,m} = \frac{n}{n+m}.$$
Then
$$\lim_{m\to\infty}\lim_{n\to\infty} a_{n,m} = \lim_{m\to\infty} 1 = 1,$$
$$\lim_{n\to\infty}\lim_{m\to\infty} a_{n,m} = \lim_{n\to\infty} 0 = 0.$$
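The order sensitivity can be probed numerically. The following Python sketch (illustrative only, not part of the cited material) approximates each iterated limit by sending the inner variable to a large value for several fixed values of the outer variable:

```python
# Illustrative check of the two iterated limits of a(n, m) = n / (n + m).
def a(n, m):
    return n / (n + m)

# lim_{m->oo} lim_{n->oo} a(n, m): for each fixed m, send n to infinity first.
print([a(10**9, m) for m in (1, 10, 100)])   # every inner value ~1, so the iterated limit is 1
# lim_{n->oo} lim_{m->oo} a(n, m): for each fixed n, send m to infinity first.
print([a(n, 10**9) for n in (1, 10, 100)])   # every inner value ~0, so the iterated limit is 0
```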
Let $f : X \times Y \to \mathbb{R}$. Then there are also two forms of iterated limits, namely
$$\lim_{y\to b}\lim_{x\to a} f(x,y) \quad\text{and}\quad \lim_{x\to a}\lim_{y\to b} f(x,y).$$
For example, let $f : \mathbb{R}^2 \setminus \{(0,0)\} \to \mathbb{R}$ such that
$$f(x,y) = \frac{x^2}{x^2 + y^2}.$$
Then
$$\lim_{y\to 0}\lim_{x\to 0} f(x,y) = \lim_{y\to 0} 0 = 0,$$
$$\lim_{x\to 0}\lim_{y\to 0} f(x,y) = \lim_{x\to 0} 1 = 1.$$
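The same kind of numerical tabulation works for functions; this sketch holds one variable near 0 while the other is sent to 0 first:

```python
# Illustrative check for f(x, y) = x^2 / (x^2 + y^2) near (0, 0).
def f(x, y):
    return x**2 / (x**2 + y**2)

# Inner limit in x first: for each fixed y != 0, f(x, y) -> 0 as x -> 0.
print([f(1e-12, y) for y in (0.1, 0.01, 0.001)])   # all ~0
# Inner limit in y first: for each fixed x != 0, f(x, y) -> 1 as y -> 0.
print([f(x, 1e-12) for x in (0.1, 0.01, 0.001)])   # all ~1
```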
The limit(s) for x and/or y can also be taken at infinity, i.e.,
$$\lim_{y\to\infty}\lim_{x\to\infty} f(x,y) \quad\text{and}\quad \lim_{x\to\infty}\lim_{y\to\infty} f(x,y).$$
For each $n \in \mathbb{N}$, let $f_n : X \to \mathbb{R}$ be a sequence of functions. Then there are two forms of iterated limits, namely
$$\lim_{n\to\infty}\lim_{x\to a} f_n(x) \quad\text{and}\quad \lim_{x\to a}\lim_{n\to\infty} f_n(x).$$
For example, let $f_n : [0,1] \to \mathbb{R}$ such that $f_n(x) = x^n$. Then
$$\lim_{n\to\infty}\lim_{x\to 1^-} f_n(x) = \lim_{n\to\infty} 1 = 1,$$
$$\lim_{x\to 1^-}\lim_{n\to\infty} f_n(x) = \lim_{x\to 1^-} 0 = 0.$$
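A short sketch (illustrative, not from the references) makes the asymmetry of $f_n(x) = x^n$ concrete:

```python
# Illustrative check for f_n(x) = x**n on [0, 1].
def f(n, x):
    return x**n

# Inner limit x -> 1-: for each fixed n, x**n -> 1.
print([f(n, 1 - 1e-12) for n in (1, 10, 100)])     # all ~1, so lim_n lim_x = 1
# Inner limit n -> oo: for each fixed x < 1, x**n -> 0.
print([f(10**7, x) for x in (0.9, 0.99, 0.999)])   # all ~0, so lim_x lim_n = 0
```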
The limit in x can also be taken at infinity, i.e.,
$$\lim_{n\to\infty}\lim_{x\to\infty} f_n(x) \quad\text{and}\quad \lim_{x\to\infty}\lim_{n\to\infty} f_n(x).$$
For example, let $f_n : (0,\infty) \to \mathbb{R}$ such that
$$f_n(x) = \frac{x}{x+n}.$$
Then
$$\lim_{n\to\infty}\lim_{x\to\infty} f_n(x) = \lim_{n\to\infty} 1 = 1,$$
$$\lim_{x\to\infty}\lim_{n\to\infty} f_n(x) = \lim_{x\to\infty} 0 = 0.$$
Note that the limit in n is taken discretely, while the limit in x is taken continuously.
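The same numerical probing applies here (an illustrative sketch using $f_n(x) = x/(x+n)$ from the example above):

```python
# Illustrative check for f_n(x) = x / (x + n) on (0, oo).
def f(n, x):
    return x / (x + n)

print([f(n, 1e15) for n in (1, 10, 100)])    # x -> oo first: each value ~1
print([f(10**15, x) for x in (1, 10, 100)])  # n -> oo first: each value ~0
```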
This section introduces various definitions of limits in two variables. These generalize easily to multiple variables.
For a double sequence $a_{n,m}$, there is another definition of limit, commonly referred to as the double limit, denoted by
$$L = \lim_{n,m\to\infty} a_{n,m},$$
which means that for all $\varepsilon > 0$, there exists $N \in \mathbb{N}$ such that $n, m > N$ implies $|a_{n,m} - L| < \varepsilon$.[3]
The following theorem states the relationship between the double limit and the iterated limits.

Theorem 1. If $\lim_{n,m\to\infty} a_{n,m} = L$ exists, $\lim_{n\to\infty} a_{n,m}$ exists for each large $m$, and $\lim_{m\to\infty} a_{n,m}$ exists for each large $n$, then $\lim_{m\to\infty}\lim_{n\to\infty} a_{n,m}$ and $\lim_{n\to\infty}\lim_{m\to\infty} a_{n,m}$ also exist, and they equal $L$.
Proof. By the existence of $\lim_{n,m\to\infty} a_{n,m} = L$, for any $\varepsilon > 0$ there exists $N_1 \in \mathbb{N}$ such that $n, m > N_1$ implies $|a_{n,m} - L| < \tfrac{\varepsilon}{2}$.
For each $m > N_1$, since $b_m = \lim_{n\to\infty} a_{n,m}$ exists, there exists $N_2 \in \mathbb{N}$ such that $n > N_2$ implies $|a_{n,m} - b_m| < \tfrac{\varepsilon}{2}$.
Both of the above statements hold for $n > \max(N_1, N_2)$ and $m > N_1$. Combining the two, for any $\varepsilon > 0$ and for all $m > N_1$,
$$|b_m - L| \le |b_m - a_{n,m}| + |a_{n,m} - L| < \varepsilon,$$
which proves that $\lim_{m\to\infty}\lim_{n\to\infty} a_{n,m} = L$. Similarly, by taking the limit in $m$ first, one proves that $\lim_{n\to\infty}\lim_{m\to\infty} a_{n,m} = L$.
For example, let
$$a_{n,m} = \frac{1}{n} + \frac{1}{m}.$$
Since $\lim_{n,m\to\infty} a_{n,m} = 0$, $\lim_{n\to\infty} a_{n,m} = \frac{1}{m}$, and $\lim_{m\to\infty} a_{n,m} = \frac{1}{n}$, we have
$$\lim_{m\to\infty}\lim_{n\to\infty} a_{n,m} = \lim_{n\to\infty}\lim_{m\to\infty} a_{n,m} = 0.$$
This theorem requires the single limits $\lim_{n\to\infty} a_{n,m}$ and $\lim_{m\to\infty} a_{n,m}$ to converge. This condition cannot be dropped. For example, consider
$$a_{n,m} = (-1)^m \left(\frac{1}{n} + \frac{1}{m}\right).$$
Then we may see that
$$\lim_{n,m\to\infty} a_{n,m} = \lim_{m\to\infty}\lim_{n\to\infty} a_{n,m} = 0,$$
but $\lim_{n\to\infty}\lim_{m\to\infty} a_{n,m}$ does not exist. This is because $\lim_{m\to\infty} a_{n,m}$ does not exist in the first place.
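The failure is visible numerically; the sketch below (illustrative only) shows that for fixed $n$ the sequence in $m$ keeps oscillating, while the inner limit in $n$ exists:

```python
# Illustrative check for a(n, m) = (-1)**m * (1/n + 1/m).
def a(n, m):
    return (-1)**m * (1 / n + 1 / m)

# For fixed n, the sequence in m oscillates near +/- 1/n and never converges:
print([round(a(5, m), 4) for m in range(1000, 1006)])   # alternating ~ +0.2, -0.2, ...
# The other inner limit exists: lim_{n->oo} a(n, m) = (-1)**m / m, which -> 0 as m -> oo:
print([a(10**9, m) for m in (10**3, 10**4, 10**5)])     # values shrinking toward 0
```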
For a two-variable function $f : X \times Y \to \mathbb{R}$, there are two other types of limits. One is the ordinary limit, denoted by
$$L = \lim_{(x,y)\to(a,b)} f(x,y),$$
which means that for all $\varepsilon > 0$, there exists $\delta > 0$ such that $0 < \sqrt{(x-a)^2 + (y-b)^2} < \delta$ implies $|f(x,y) - L| < \varepsilon$.[6]
For this limit to exist, f(x, y) must come arbitrarily close to L along every possible path approaching the point (a, b). In this definition, the point (a, b) is excluded from the paths. Therefore, the value of f at the point (a, b), even if it is defined, does not affect the limit.
The other type is the double limit, denoted by
$$L = \lim_{\substack{x\to a\\ y\to b}} f(x,y),$$
which means that for all $\varepsilon > 0$, there exists $\delta > 0$ such that $0 < |x - a| < \delta$ and $0 < |y - b| < \delta$ imply $|f(x,y) - L| < \varepsilon$.[7]
For this limit to exist, f(x, y) must come arbitrarily close to L along every possible path approaching the point (a, b), except the lines x = a and y = b. In other words, the value of f along the lines x = a and y = b does not affect the limit. This is different from the ordinary limit, where only the point (a, b) is excluded. In this sense, the ordinary limit is a stronger notion than the double limit:

Theorem 2. If $\lim_{(x,y)\to(a,b)} f(x,y) = L$ exists, then $\lim_{\substack{x\to a\\ y\to b}} f(x,y)$ exists and equals $L$.
Neither of these limits involves first taking one limit and then the other. This contrasts with iterated limits, where the limiting process is taken in the x-direction first, and then in the y-direction (or in the reverse order).
The following theorem states the relationship between the double limit and the iterated limits:

Theorem 3. If $\lim_{\substack{x\to a\\ y\to b}} f(x,y) = L$ exists, $\lim_{x\to a} f(x,y)$ exists for each $y$ near $b$, and $\lim_{y\to b} f(x,y)$ exists for each $x$ near $a$, then $\lim_{y\to b}\lim_{x\to a} f(x,y)$ and $\lim_{x\to a}\lim_{y\to b} f(x,y)$ also exist, and they equal $L$.
For example, let
$$f(x,y) = \begin{cases} 1 & \text{for } xy \ne 0 \\ 0 & \text{for } xy = 0. \end{cases}$$
Since $\lim_{\substack{x\to 0\\ y\to 0}} f(x,y) = 1$, $\lim_{x\to 0} f(x,y) = 1$ for each $y \ne 0$, and $\lim_{y\to 0} f(x,y) = 1$ for each $x \ne 0$, we have
$$\lim_{y\to 0}\lim_{x\to 0} f(x,y) = \lim_{x\to 0}\lim_{y\to 0} f(x,y) = 1.$$
(Note that in this example, the ordinary limit $\lim_{(x,y)\to(0,0)} f(x,y)$ does not exist.)
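A sketch (illustrative only) confirming these claims: both iterated limits avoid the axes, where f is identically 1, yet the path along the x-axis, which the ordinary limit must also accommodate, gives 0:

```python
# Illustrative check for f = 1 off the axes, 0 on the axes.
def f(x, y):
    return 1.0 if x * y != 0 else 0.0

# Both iterated limits stay off the axes, where f is identically 1:
print(f(1e-9, 0.1), f(0.1, 1e-9))                 # 1.0 1.0
# Along the x-axis (a path allowed in the ordinary limit), f is identically 0:
print([f(t, 0.0) for t in (0.1, 0.01, 0.001)])    # all 0.0
```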
This theorem requires the single limits $\lim_{x\to a} f(x,y)$ and $\lim_{y\to b} f(x,y)$ to exist. This condition cannot be dropped. For example, consider
$$f(x,y) = x \sin\left(\frac{1}{y}\right).$$
Then we may see that
$$\lim_{\substack{x\to 0\\ y\to 0}} f(x,y) = \lim_{y\to 0}\lim_{x\to 0} f(x,y) = 0,$$
but $\lim_{x\to 0}\lim_{y\to 0} f(x,y)$ does not exist. This is because $\lim_{y\to 0} f(x,y)$ does not exist for $x$ near 0 in the first place.
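Numerically, the oscillation of $x\sin(1/y)$ for fixed $x$ is easy to observe (an illustrative sketch):

```python
import math

# Illustrative check for f(x, y) = x * sin(1/y).
def f(x, y):
    return x * math.sin(1 / y)

# For fixed x = 1, f(1, y) keeps oscillating as y -> 0 (no inner limit in y):
print([round(f(1.0, 1 / k), 3) for k in (10**6, 10**6 + 1, 10**6 + 2)])
# Since |f(x, y)| <= |x|, sending x -> 0 forces f toward 0 regardless of y:
print([f(x, 1e-7) for x in (0.1, 0.01, 0.001)])
```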
Combining Theorems 2 and 3, we have the following corollary:

Corollary. If $\lim_{(x,y)\to(a,b)} f(x,y) = L$ exists, $\lim_{x\to a} f(x,y)$ exists for each $y$ near $b$, and $\lim_{y\to b} f(x,y)$ exists for each $x$ near $a$, then $\lim_{y\to b}\lim_{x\to a} f(x,y)$ and $\lim_{x\to a}\lim_{y\to b} f(x,y)$ also exist, and they equal $L$.
For a two-variable function $f(x, y)$, we may also define the double limit at infinity,
$$L = \lim_{\substack{x\to\infty\\ y\to\infty}} f(x,y),$$
which means that for all $\varepsilon > 0$, there exists $M > 0$ such that $x > M$ and $y > M$ imply $|f(x,y) - L| < \varepsilon$.
Similar definitions may be given for limits at negative infinity.
The following theorem states the relationship between the double limit at infinity and the iterated limits at infinity:

Theorem 4. If $\lim_{\substack{x\to\infty\\ y\to\infty}} f(x,y) = L$ exists, $\lim_{x\to\infty} f(x,y)$ exists for each large $y$, and $\lim_{y\to\infty} f(x,y)$ exists for each large $x$, then $\lim_{y\to\infty}\lim_{x\to\infty} f(x,y)$ and $\lim_{x\to\infty}\lim_{y\to\infty} f(x,y)$ also exist, and they equal $L$.
For example, let
$$f(x,y) = \frac{x \sin y}{xy + y}.$$
Since $\lim_{\substack{x\to\infty\\ y\to\infty}} f(x,y) = 0$, $\lim_{x\to\infty} f(x,y) = \frac{\sin y}{y}$, and $\lim_{y\to\infty} f(x,y) = 0$, we have
$$\lim_{y\to\infty}\lim_{x\to\infty} f(x,y) = \lim_{x\to\infty}\lim_{y\to\infty} f(x,y) = 0.$$
Again, this theorem requires the single limits $\lim_{x\to\infty} f(x,y)$ and $\lim_{y\to\infty} f(x,y)$ to exist. This condition cannot be dropped. For example, consider
$$f(x,y) = \frac{\cos x}{y}.$$
Then we may see that
$$\lim_{\substack{x\to\infty\\ y\to\infty}} f(x,y) = \lim_{x\to\infty}\lim_{y\to\infty} f(x,y) = 0,$$
but $\lim_{y\to\infty}\lim_{x\to\infty} f(x,y)$ does not exist. This is because $\lim_{x\to\infty} f(x,y)$ does not exist for fixed $y$ in the first place.
The converses of Theorems 1, 3 and 4 do not hold, i.e., the existence of iterated limits, even if they are equal, does not imply the existence of the double limit. A counter-example is
$$f(x,y) = \frac{xy}{x^2 + y^2}$$
near the point (0, 0). On one hand,
$$\lim_{y\to 0}\lim_{x\to 0} f(x,y) = \lim_{x\to 0}\lim_{y\to 0} f(x,y) = 0.$$
On the other hand, the double limit does not exist. This can be seen by taking the limit along the path $(x, y) = (t, t) \to (0,0)$, which gives
$$\lim_{t\to 0} f(t,t) = \lim_{t\to 0} \frac{t^2}{t^2 + t^2} = \frac{1}{2},$$
and along the path $(x, y) = (t, t^2) \to (0,0)$, which gives
$$\lim_{t\to 0} f(t,t^2) = \lim_{t\to 0} \frac{t^3}{t^2 + t^4} = 0.$$
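The path dependence is easy to reproduce (an illustrative sketch):

```python
# Illustrative path test for f(x, y) = x*y / (x**2 + y**2).
def f(x, y):
    return x * y / (x**2 + y**2)

ts = (0.1, 0.01, 0.001)
print([f(t, t) for t in ts])      # along y = x:   every value is exactly 0.5
print([f(t, t**2) for t in ts])   # along y = x^2: values shrink toward 0
```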
In the examples above, we may see that interchanging limits may or may not give the same result. A sufficient condition for interchanging limits is given by the Moore–Osgood theorem.[8] The essence of the interchangeability is uniform convergence.
The following theorem allows us to interchange two limits of sequences.

Theorem 5 (Moore–Osgood). If $\lim_{n\to\infty} a_{n,m} = b_m$ uniformly in $m$, and $\lim_{m\to\infty} a_{n,m} = c_n$ for each large $n$, then both $\lim_{m\to\infty} b_m$ and $\lim_{n\to\infty} c_n$ exist and are equal to the double limit, i.e.,
$$\lim_{n,m\to\infty} a_{n,m} = \lim_{m\to\infty}\lim_{n\to\infty} a_{n,m} = \lim_{n\to\infty}\lim_{m\to\infty} a_{n,m}.$$
A corollary is about the interchangeability of infinite sums.

Corollary. If $\sum_{n=1}^{\infty} a_{n,m}$ converges uniformly in $m$, and $\lim_{m\to\infty} a_{n,m}$ exists for each $n$, then
$$\lim_{m\to\infty} \sum_{n=1}^{\infty} a_{n,m} = \sum_{n=1}^{\infty} \lim_{m\to\infty} a_{n,m}.$$
Similar results hold for multivariable functions.

Theorem 6. If $\lim_{x\to a} f(x,y) = g(y)$ uniformly (in $y$) on $Y \setminus \{b\}$, and $\lim_{y\to b} f(x,y) = h(x)$ for each $x$ near $a$, then both $\lim_{y\to b} g(y)$ and $\lim_{x\to a} h(x)$ exist and are equal to the double limit, i.e.,
$$\lim_{\substack{x\to a\\ y\to b}} f(x,y) = \lim_{y\to b}\lim_{x\to a} f(x,y) = \lim_{x\to a}\lim_{y\to b} f(x,y).$$

Note that this theorem does not imply the existence of the ordinary limit $\lim_{(x,y)\to(a,b)} f(x,y)$. A counter-example is the function above with $f(x,y) = 1$ for $xy \ne 0$ and $f(x,y) = 0$ for $xy = 0$, near $(0,0)$.[10]
An important variation of the Moore–Osgood theorem is specifically for sequences of functions.

Theorem 7. If $\lim_{n\to\infty} f_n(x) = f(x)$ uniformly (in $x$) on $X \setminus \{a\}$, and $\lim_{x\to a} f_n(x) = L_n$ for each large $n$, then both $\lim_{x\to a} f(x)$ and $\lim_{n\to\infty} L_n$ exist and are equal, i.e.,
$$\lim_{x\to a}\lim_{n\to\infty} f_n(x) = \lim_{n\to\infty}\lim_{x\to a} f_n(x).$$
A corollary is the continuity theorem for uniform convergence as follows:

Corollary. If $f_n$ converges to $f$ uniformly on $X$, and each $f_n$ is continuous at $a \in X$, then $f$ is also continuous at $a$.
Another corollary is about the interchangeability of a limit and an infinite sum.

Corollary. If $\sum_{n=1}^{\infty} f_n(x)$ converges uniformly (in $x$) on $X \setminus \{a\}$, and $\lim_{x\to a} f_n(x)$ exists for each $n$, then
$$\lim_{x\to a} \sum_{n=1}^{\infty} f_n(x) = \sum_{n=1}^{\infty} \lim_{x\to a} f_n(x).$$
Consider a matrix of infinite entries
$$\begin{bmatrix} 1 & -1 & 0 & 0 & \cdots \\ 0 & 1 & -1 & 0 & \cdots \\ 0 & 0 & 1 & -1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}.$$
Suppose we would like to find the sum of all entries. If we sum it column by column first, the first column gives 1, while every other column gives 0 (the 1 and the −1 in each later column cancel); hence the sum of all columns is 1. However, if we sum it row by row first, every row gives 0 (the 1 and the −1 in each row cancel); hence the sum of all rows is 0.
The explanation for this paradox is that the vertical sum to infinity and the horizontal sum to infinity are two limiting processes that cannot be interchanged. Let $S_{n,m}$ be the sum of all entries $(i, j)$ with $i \le n$ and $j \le m$. Then we have $\lim_{m\to\infty}\lim_{n\to\infty} S_{n,m} = 1$ but $\lim_{n\to\infty}\lim_{m\to\infty} S_{n,m} = 0$. In this case, the double limit $\lim_{n,m\to\infty} S_{n,m}$ does not exist, and thus this problem is not well-defined.
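The two summation orders can be replayed on finite truncations. This sketch (illustrative only) builds $S_{n,m}$ for the matrix above and shows the two iterated limits disagreeing:

```python
# Illustrative truncations S(n, m) for the matrix above.
def entry(i, j):
    if j == i:
        return 1
    if j == i + 1:
        return -1
    return 0

def S(n, m):
    # Sum of all entries (i, j) with i <= n and j <= m.
    return sum(entry(i, j) for i in range(1, n + 1) for j in range(1, m + 1))

print(S(100, 10), S(1000, 10))   # n >> m (columns completed first): stays at 1
print(S(10, 100), S(10, 1000))   # m >> n (rows completed first):    stays at 0
```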
By the integration theorem for uniform convergence, once we have that $f_n$ converges uniformly on an interval $X$, the limit in $n$ and an integration over the bounded interval $X$ can be interchanged:
$$\lim_{n\to\infty} \int_X f_n(x)\,dx = \int_X \lim_{n\to\infty} f_n(x)\,dx.$$
However, such a property may fail for an improper integral over an unbounded interval $X$. In this case, one may rely on the Moore–Osgood theorem.
Consider
$$\int_0^\infty \frac{x}{e^x - 1}\,dx$$
as an example.
We first expand the integrand as
$$\frac{x}{e^x - 1} = \frac{x e^{-x}}{1 - e^{-x}} = \sum_{k=1}^{\infty} x e^{-kx}$$
for $x > 0$. (Here $x = 0$ is a limiting case.)
One can prove by calculus that for $x \ge \delta > 0$ and $k \ge 1$, we have
$$x e^{-kx} = \left(x e^{-x}\right) e^{-(k-1)x} \le \frac{1}{e}\, e^{-(k-1)\delta}.$$
By the Weierstrass M-test, $\sum_{k=1}^{\infty} x e^{-kx}$ converges uniformly on $[\delta, \infty)$ for every $\delta > 0$.
Then by the integration theorem for uniform convergence, for any $0 < \delta < b$,
$$\int_\delta^b \frac{x}{e^x - 1}\,dx = \sum_{k=1}^{\infty} \int_\delta^b x e^{-kx}\,dx.$$
To further interchange the improper integral (that is, the limits $\delta \to 0^+$ and $b \to \infty$) with the infinite summation $\sum_k$, the Moore–Osgood theorem requires the infinite series to be uniformly convergent.
Note that
$$0 \le \int_\delta^b x e^{-kx}\,dx \le \int_0^\infty x e^{-kx}\,dx = \frac{1}{k^2}.$$
Again, by the Weierstrass M-test, $\sum_{k=1}^{\infty} \int_\delta^b x e^{-kx}\,dx$ converges uniformly in $\delta$ and $b$.
Then by the Moore–Osgood theorem,
$$\int_0^\infty \frac{x}{e^x - 1}\,dx = \lim_{\substack{\delta \to 0^+\\ b\to\infty}} \sum_{k=1}^{\infty} \int_\delta^b x e^{-kx}\,dx = \sum_{k=1}^{\infty} \int_0^\infty x e^{-kx}\,dx = \sum_{k=1}^{\infty} \frac{1}{k^2} = \zeta(2) = \frac{\pi^2}{6}.$$
(Here $\zeta$ is the Riemann zeta function.)
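A numeric cross-check of the result (an illustrative sketch using a simple midpoint rule; the integrand is extended by its limiting value 1 at $x = 0$):

```python
import math

# Illustrative numeric cross-check of the result above.
def integrand(x):
    return x / math.expm1(x) if x > 0 else 1.0   # x / (e^x - 1), with value 1 at x = 0

# Midpoint rule on [0, 40]; the tail beyond 40 is negligible (integrand(40) ~ 1.7e-16).
N, b = 200_000, 40.0
h = b / N
integral = h * sum(integrand((i + 0.5) * h) for i in range(N))

partial_zeta2 = sum(1 / k**2 for k in range(1, 10**6))
print(integral, partial_zeta2, math.pi**2 / 6)   # all ~1.6449...
```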