Iterated limit

In multivariable calculus, an iterated limit is a limit of a sequence or a limit of a function in the form

$\lim_{m\to\infty}\lim_{n\to\infty} a_{n,m}$, $\lim_{y\to b}\lim_{x\to a} f(x,y)$,

or other similar forms.

An iterated limit is defined only for an expression whose value depends on at least two variables. To evaluate such a limit, one takes the limit as one of the two variables approaches some number, obtaining an expression whose value depends only on the other variable, and then one takes the limit as the other variable approaches some number.

Types of iterated limits

This section introduces definitions of iterated limits in two variables. These may generalize easily to multiple variables.

Iterated limit of sequence

For each $n, m \in \mathbb{N}$, let $a_{n,m} \in \mathbb{R}$ be a real double sequence. Then there are two forms of iterated limits, namely

$\lim_{m\to\infty}\lim_{n\to\infty} a_{n,m}$ and $\lim_{n\to\infty}\lim_{m\to\infty} a_{n,m}$.

For example, let

$a_{n,m} = \dfrac{n}{n+m}$.

Then

$\lim_{m\to\infty}\lim_{n\to\infty} a_{n,m} = \lim_{m\to\infty} 1 = 1$, and
$\lim_{n\to\infty}\lim_{m\to\infty} a_{n,m} = \lim_{n\to\infty} 0 = 0$.
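The order dependence can be checked numerically. The following sketch (our own illustration, assuming the double sequence $a_{n,m} = n/(n+m)$ as above) fixes one index very large and varies the other:

```python
# Numerical illustration (not from the source): the double sequence
# a(n, m) = n / (n + m) has different iterated limits depending on order.

def a(n, m):
    return n / (n + m)

# Approximate lim_{m->inf} lim_{n->inf} a(n, m):
# for each fixed m, a(n, m) -> 1 as n grows, so the outer limit is 1.
inner_in_n = [a(10**9, m) for m in range(1, 6)]   # each value close to 1

# Approximate lim_{n->inf} lim_{m->inf} a(n, m):
# for each fixed n, a(n, m) -> 0 as m grows, so the outer limit is 0.
inner_in_m = [a(n, 10**9) for n in range(1, 6)]   # each value close to 0

print(inner_in_n)
print(inner_in_m)
```

Sending one index to a large value first is only a finite approximation of the inner limit, but it makes the 1-versus-0 asymmetry visible immediately.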

Iterated limit of function

Let $f : X \times Y \to \mathbb{R}$. Then there are also two forms of iterated limits, namely

$\lim_{y\to b}\lim_{x\to a} f(x,y)$ and $\lim_{x\to a}\lim_{y\to b} f(x,y)$.

For example, let $f : \mathbb{R}^2 \setminus \{(0,0)\} \to \mathbb{R}$ such that

$f(x,y) = \dfrac{x^2}{x^2+y^2}$.

Then

$\lim_{y\to 0}\lim_{x\to 0} f(x,y) = \lim_{y\to 0} 0 = 0$, and
$\lim_{x\to 0}\lim_{y\to 0} f(x,y) = \lim_{x\to 0} 1 = 1$. [1]

The limit(s) for x and/or y can also be taken at infinity, i.e.,

$\lim_{y\to\infty}\lim_{x\to\infty} f(x,y)$ and $\lim_{x\to\infty}\lim_{y\to\infty} f(x,y)$.
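This, too, can be probed numerically. A minimal sketch (our own illustration, assuming the function $f(x,y) = x^2/(x^2+y^2)$ as above), taking the inner variable much smaller than the outer one:

```python
# Numerical illustration (not from the source): f(x, y) = x^2 / (x^2 + y^2)
# has iterated limits 0 and 1 at the origin, depending on the order.

def f(x, y):
    return x**2 / (x**2 + y**2)

# lim_{x->0} f(x, y) = 0 for fixed y != 0, and letting y -> 0 keeps the value 0.
lim_x_first = f(1e-12, 1e-3)   # inner limit in x approximated numerically

# lim_{y->0} f(x, y) = 1 for fixed x != 0, and letting x -> 0 keeps the value 1.
lim_y_first = f(1e-3, 1e-12)

print(lim_x_first, lim_y_first)
```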

Iterated limit of sequence of functions

For each $n \in \mathbb{N}$, let $f_n : X \to \mathbb{R}$ be a sequence of functions. Then there are two forms of iterated limits, namely

$\lim_{x\to a}\lim_{n\to\infty} f_n(x)$ and $\lim_{n\to\infty}\lim_{x\to a} f_n(x)$.

For example, let $f_n : [0,1] \to \mathbb{R}$ such that

$f_n(x) = x^n$.

Then

$\lim_{x\to 1^-}\lim_{n\to\infty} f_n(x) = \lim_{x\to 1^-} 0 = 0$, and
$\lim_{n\to\infty}\lim_{x\to 1^-} f_n(x) = \lim_{n\to\infty} 1 = 1$. [2]

The limit in x can also be taken at infinity, i.e.,

$\lim_{x\to\infty}\lim_{n\to\infty} f_n(x)$ and $\lim_{n\to\infty}\lim_{x\to\infty} f_n(x)$.

Note that the limit in n is taken discretely, while the limit in x is taken continuously.
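A quick numerical sketch (our own illustration, assuming the sequence $f_n(x) = x^n$ on $[0,1]$ as above) shows the two orders disagreeing:

```python
# Numerical illustration (not from the source): for f_n(x) = x**n on [0, 1],
# the order of the limits n -> infinity and x -> 1 matters.

def f(n, x):
    return x ** n

# Limit in n first: for fixed x < 1, x**n -> 0, and then x -> 1 keeps the value 0.
n_first = f(10**6, 0.999)      # already essentially 0

# Limit in x first: for fixed n, x**n -> 1 as x -> 1^- (evaluated at the
# limit point x = 1 as a stand-in), and then n -> infinity keeps the value 1.
x_first = f(10**6, 1.0)

print(n_first, x_first)
```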

Comparison with other limits in multiple variables

This section introduces various definitions of limits in two variables. These may generalize easily to multiple variables.

Limit of sequence

For a double sequence $a_{n,m}$, there is another definition of limit, which is commonly referred to as the double limit, denoted by

$\lim_{n,m\to\infty} a_{n,m} = L$,

which means that for all $\varepsilon > 0$, there exists $N \in \mathbb{N}$ such that $n, m > N$ implies $|a_{n,m} - L| < \varepsilon$. [3]

The following theorem states the relationship between double limit and iterated limits.

Theorem 1. If $\lim_{n,m\to\infty} a_{n,m}$ exists and equals L, $\lim_{n\to\infty} a_{n,m}$ exists for each large m, and $\lim_{m\to\infty} a_{n,m}$ exists for each large n, then $\lim_{m\to\infty}\lim_{n\to\infty} a_{n,m}$ and $\lim_{n\to\infty}\lim_{m\to\infty} a_{n,m}$ also exist, and they equal L, i.e.,
$\lim_{m\to\infty}\lim_{n\to\infty} a_{n,m} = \lim_{n\to\infty}\lim_{m\to\infty} a_{n,m} = L$. [4] [5]

Proof. By the existence of $\lim_{n,m\to\infty} a_{n,m} = L$, for any $\varepsilon > 0$ there exists $N_1$ such that $n, m > N_1$ implies $|a_{n,m} - L| < \tfrac{\varepsilon}{2}$.

For each $m$ such that $\lim_{n\to\infty} a_{n,m} = b_m$ exists, there exists $N_2(m)$ such that $n > N_2(m)$ implies $|a_{n,m} - b_m| < \tfrac{\varepsilon}{2}$.

Both of the above statements hold for $n > \max(N_1, N_2(m))$ and $m > N_1$. Combining the two inequalities, for any $\varepsilon > 0$ and every $m > N_1$,

$|b_m - L| \le |b_m - a_{n,m}| + |a_{n,m} - L| < \varepsilon$,

which proves that $\lim_{m\to\infty}\lim_{n\to\infty} a_{n,m} = L$. Similarly, by taking the limit in $m$ first, we prove: $\lim_{n\to\infty}\lim_{m\to\infty} a_{n,m} = L$.


For example, let

$a_{n,m} = \dfrac{1}{n} + \dfrac{1}{m}$.

Since $\lim_{n,m\to\infty} a_{n,m} = 0$, $\lim_{n\to\infty} a_{n,m} = \dfrac{1}{m}$, and $\lim_{m\to\infty} a_{n,m} = \dfrac{1}{n}$, we have

$\lim_{m\to\infty}\lim_{n\to\infty} a_{n,m} = \lim_{n\to\infty}\lim_{m\to\infty} a_{n,m} = 0$.

This theorem requires the single limits $\lim_{n\to\infty} a_{n,m}$ and $\lim_{m\to\infty} a_{n,m}$ to converge. This condition cannot be dropped. For example, consider

$a_{n,m} = \dfrac{(-1)^n}{m}$.

Then we may see that

$\lim_{n,m\to\infty} a_{n,m} = 0$,
but $\lim_{m\to\infty}\lim_{n\to\infty} a_{n,m}$ does not exist.

This is because $\lim_{n\to\infty} a_{n,m}$ does not exist in the first place.

Limit of function

For a two-variable function $f : X \times Y \to \mathbb{R}$, there are two other types of limits. One is the ordinary limit, denoted by

$\lim_{(x,y)\to(a,b)} f(x,y) = L$,

which means that for all $\varepsilon > 0$, there exists $\delta > 0$ such that $0 < \|(x,y) - (a,b)\| < \delta$ implies $|f(x,y) - L| < \varepsilon$. [6]

For this limit to exist, f(x, y) can be made as close to L as desired along every possible path approaching the point (a, b). In this definition, the point (a, b) is excluded from the paths. Therefore, the value of f at the point (a, b), even if it is defined, does not affect the limit.

The other type is the double limit, denoted by

$\lim_{\substack{x\to a \\ y\to b}} f(x,y) = L$,

which means that for all $\varepsilon > 0$, there exists $\delta > 0$ such that $0 < |x - a| < \delta$ and $0 < |y - b| < \delta$ implies $|f(x,y) - L| < \varepsilon$. [7]

For this limit to exist, f(x, y) can be made as close to L as desired along every possible path approaching the point (a, b), except the lines x = a and y = b. In other words, the value of f along the lines x = a and y = b does not affect the limit. This is different from the ordinary limit, where only the point (a, b) is excluded. In this sense, the ordinary limit is a stronger notion than the double limit:

Theorem 2. If $\lim_{(x,y)\to(a,b)} f(x,y)$ exists and equals L, then $\lim_{\substack{x\to a \\ y\to b}} f(x,y)$ exists and equals L, i.e.,
$\lim_{\substack{x\to a \\ y\to b}} f(x,y) = L$.

Neither of these limits involves first taking one limit and then the other. This contrasts with iterated limits, where the limiting process is taken in the x-direction first, and then in the y-direction (or in the reverse order).

The following theorem states the relationship between double limit and iterated limits:

Theorem 3. If $\lim_{\substack{x\to a \\ y\to b}} f(x,y)$ exists and equals L, $\lim_{x\to a} f(x,y)$ exists for each y near b, and $\lim_{y\to b} f(x,y)$ exists for each x near a, then $\lim_{y\to b}\lim_{x\to a} f(x,y)$ and $\lim_{x\to a}\lim_{y\to b} f(x,y)$ also exist, and they equal L, i.e.,
$\lim_{y\to b}\lim_{x\to a} f(x,y) = \lim_{x\to a}\lim_{y\to b} f(x,y) = L$.

For example, let

$f(x,y) = \begin{cases} 0 & \text{if } xy = 0 \\ 1 & \text{if } xy \neq 0 \end{cases}$.

Since $\lim_{\substack{x\to 0 \\ y\to 0}} f(x,y) = 1$, $\lim_{x\to 0} f(x,y) = 1$ for each $y \neq 0$, and $\lim_{y\to 0} f(x,y) = 1$ for each $x \neq 0$, we have

$\lim_{y\to 0}\lim_{x\to 0} f(x,y) = \lim_{x\to 0}\lim_{y\to 0} f(x,y) = 1$.

(Note that in this example, $\lim_{(x,y)\to(0,0)} f(x,y)$ does not exist.)

This theorem requires the single limits $\lim_{x\to a} f(x,y)$ and $\lim_{y\to b} f(x,y)$ to exist. This condition cannot be dropped. For example, consider

$f(x,y) = x \sin\left(\dfrac{1}{y}\right)$.

Then we may see that

$\lim_{\substack{x\to 0 \\ y\to 0}} f(x,y) = 0$,
but $\lim_{x\to 0}\lim_{y\to 0} f(x,y)$ does not exist.

This is because $\lim_{y\to 0} x \sin\left(\dfrac{1}{y}\right)$ does not exist for x near 0 in the first place.

Combining Theorem 2 and 3, we have the following corollary:

Corollary 3.1. If $\lim_{(x,y)\to(a,b)} f(x,y)$ exists and equals L, $\lim_{x\to a} f(x,y)$ exists for each y near b, and $\lim_{y\to b} f(x,y)$ exists for each x near a, then $\lim_{y\to b}\lim_{x\to a} f(x,y)$ and $\lim_{x\to a}\lim_{y\to b} f(x,y)$ also exist, and they equal L, i.e.,
$\lim_{y\to b}\lim_{x\to a} f(x,y) = \lim_{x\to a}\lim_{y\to b} f(x,y) = L$.

Limit at infinity of function

For a two-variable function $f : X \times Y \to \mathbb{R}$, we may also define the double limit at infinity,

$\lim_{\substack{x\to\infty \\ y\to\infty}} f(x,y) = L$,

which means that for all $\varepsilon > 0$, there exists $M > 0$ such that $x > M$ and $y > M$ implies $|f(x,y) - L| < \varepsilon$.

Similar definitions may be given for limits at negative infinity.

The following theorem states the relationship between the double limit at infinity and iterated limits at infinity:

Theorem 4. If $\lim_{\substack{x\to\infty \\ y\to\infty}} f(x,y)$ exists and equals L, $\lim_{x\to\infty} f(x,y)$ exists for each large y, and $\lim_{y\to\infty} f(x,y)$ exists for each large x, then $\lim_{y\to\infty}\lim_{x\to\infty} f(x,y)$ and $\lim_{x\to\infty}\lim_{y\to\infty} f(x,y)$ also exist, and they equal L, i.e.,
$\lim_{y\to\infty}\lim_{x\to\infty} f(x,y) = \lim_{x\to\infty}\lim_{y\to\infty} f(x,y) = L$.

For example, let

$f(x,y) = \dfrac{1}{x} + \dfrac{1}{y}$.

Since $\lim_{x\to\infty} f(x,y) = \dfrac{1}{y}$, and $\lim_{y\to\infty} f(x,y) = \dfrac{1}{x}$, we have

$\lim_{y\to\infty}\lim_{x\to\infty} f(x,y) = \lim_{x\to\infty}\lim_{y\to\infty} f(x,y) = 0$.

Again, this theorem requires the single limits $\lim_{x\to\infty} f(x,y)$ and $\lim_{y\to\infty} f(x,y)$ to exist. This condition cannot be dropped. For example, consider

$f(x,y) = \dfrac{\sin x}{y}$.

Then we may see that

$\lim_{\substack{x\to\infty \\ y\to\infty}} f(x,y) = 0$,
but $\lim_{y\to\infty}\lim_{x\to\infty} f(x,y)$ does not exist.

This is because $\lim_{x\to\infty} \dfrac{\sin x}{y}$ does not exist for fixed y in the first place.

Invalid converses of the theorems

The converses of Theorems 1, 3 and 4 do not hold, i.e., the existence of iterated limits, even if they are equal, does not imply the existence of the double limit. A counter-example is

$f(x,y) = \dfrac{xy}{x^2+y^2}$

near the point (0, 0). On one hand,

$\lim_{y\to 0}\lim_{x\to 0} f(x,y) = \lim_{x\to 0}\lim_{y\to 0} f(x,y) = 0$.

On the other hand, the double limit does not exist. This can be seen by taking the limit along the path (x, y) = (t, t) → (0,0), which gives

$\lim_{t\to 0} f(t,t) = \lim_{t\to 0} \dfrac{t^2}{2t^2} = \dfrac{1}{2}$,

and along the path (x, y) = (t, t2) → (0,0), which gives

$\lim_{t\to 0} f(t,t^2) = \lim_{t\to 0} \dfrac{t^3}{t^2+t^4} = 0$.
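The path dependence is easy to observe numerically. A minimal sketch (our own illustration, assuming the standard counter-example $f(x,y) = xy/(x^2+y^2)$), evaluating the function along the two paths for a small parameter:

```python
# Numerical check (illustrative): f(x, y) = x*y / (x^2 + y^2) approaches
# different values along different paths into the origin, so the double
# limit cannot exist even though both iterated limits are 0.

def f(x, y):
    return x * y / (x**2 + y**2)

t = 1e-8
along_diagonal = f(t, t)        # path (t, t): value is exactly 1/2
along_parabola = f(t, t**2)     # path (t, t^2): value tends to 0

print(along_diagonal, along_parabola)
```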

Moore-Osgood theorem for interchanging limits

In the examples above, we may see that interchanging limits may or may not give the same result. A sufficient condition for interchanging limits is given by the Moore-Osgood theorem. [8] The essence of the interchangeability depends on uniform convergence.

Interchanging limits of sequences

The following theorem allows us to interchange two limits of sequences.

Theorem 5. If $\lim_{n\to\infty} a_{n,m} = b_m$ uniformly (in m), and $\lim_{m\to\infty} a_{n,m} = c_n$ for each large n, then both $\lim_{m\to\infty} b_m$ and $\lim_{n\to\infty} c_n$ exist and are equal to the double limit, i.e.,
$\lim_{m\to\infty}\lim_{n\to\infty} a_{n,m} = \lim_{n\to\infty}\lim_{m\to\infty} a_{n,m} = \lim_{n,m\to\infty} a_{n,m}$. [3]
Proof. By the uniform convergence, for any $\varepsilon > 0$ there exists $N(\varepsilon)$ such that for all $m$, $n_1, n_2 > N(\varepsilon)$ implies $|a_{n_1,m} - a_{n_2,m}| < \varepsilon$.
As $m \to \infty$, we have $|c_{n_1} - c_{n_2}| \le \varepsilon$, which means that $(c_n)$ is a Cauchy sequence which converges to a limit $L$. In addition, as $n_2 \to \infty$, we have $|c_{n_1} - L| \le \varepsilon$ for all $n_1 > N(\varepsilon)$.
On the other hand, if we take $n_2 \to \infty$ first, we have $|a_{n_1,m} - b_m| \le \varepsilon$ for all $n_1 > N(\varepsilon)$ and all $m$.
By the pointwise convergence, for any $\varepsilon > 0$ and $n > N(\varepsilon)$, there exists $M(\varepsilon, n)$ such that $m > M(\varepsilon, n)$ implies $|a_{n,m} - c_n| < \varepsilon$.
Then for that fixed $n$, $m > M(\varepsilon, n)$ implies $|b_m - L| \le |b_m - a_{n,m}| + |a_{n,m} - c_n| + |c_n - L| \le 3\varepsilon$.
This proves that $\lim_{m\to\infty} b_m = \lim_{m\to\infty}\lim_{n\to\infty} a_{n,m} = L$.
Also, since $|a_{n,m} - L| \le |a_{n,m} - b_m| + |b_m - L| \le 2\varepsilon$ for all $n > N(\varepsilon)$ and all large $m$, we see that this limit also equals $\lim_{n,m\to\infty} a_{n,m}$.

A corollary concerns the interchangeability of a limit and an infinite sum.

Corollary 5.1. If $\sum_{n=1}^\infty a_{n,m}$ converges uniformly (in m), and $\lim_{m\to\infty} a_{n,m}$ converges for each n, then $\lim_{m\to\infty}\sum_{n=1}^\infty a_{n,m} = \sum_{n=1}^\infty \lim_{m\to\infty} a_{n,m}$.
Proof. Direct application of Theorem 5 on the partial sums $S_{k,m} = \sum_{n=1}^k a_{n,m}$.
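The interchange in Corollary 5.1 can be sketched numerically. The example below is our own (not from the source): the terms $a_{n,m} = 2^{-n}\, m/(m+1)$ are dominated by $2^{-n}$ uniformly in $m$, so the limit in $m$ may be moved inside the sum:

```python
# Illustrative check of the sum/limit interchange (our own example):
# a(n, m) = (1/2**n) * m / (m + 1). The series over n converges uniformly
# in m (Weierstrass-style bound 1/2**n), so lim_m and sum_n commute.

def a(n, m):
    return (0.5 ** n) * m / (m + 1)

big_m = 10**9
limit_of_sums = sum(a(n, big_m) for n in range(1, 60))   # lim_m of the sums, ~1
sum_of_limits = sum(0.5 ** n for n in range(1, 60))      # lim_m a(n,m) = 1/2**n

print(limit_of_sums, sum_of_limits)
```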

Interchanging limits of functions

Similar results hold for multivariable functions.

Theorem 6. If $\lim_{x\to a} f(x,y) = g(y)$ uniformly (in y) on $Y \setminus \{b\}$, and $\lim_{y\to b} f(x,y) = h(x)$ for each x near a, then both $\lim_{y\to b} g(y)$ and $\lim_{x\to a} h(x)$ exist and are equal to the double limit, i.e.,
$\lim_{y\to b}\lim_{x\to a} f(x,y) = \lim_{x\to a}\lim_{y\to b} f(x,y) = \lim_{\substack{x\to a \\ y\to b}} f(x,y)$. [9]
The a and b here can possibly be infinity.
Proof. By the existence of the uniform limit, for any $\varepsilon > 0$ there exists $\delta(\varepsilon) > 0$ such that for all $y$ near $b$, $0 < |x_1 - a| < \delta$ and $0 < |x_2 - a| < \delta$ implies $|f(x_1,y) - f(x_2,y)| < \varepsilon$.
As $y \to b$, we have $|h(x_1) - h(x_2)| \le \varepsilon$. By the Cauchy criterion, $\lim_{x\to a} h(x)$ exists and equals a number $L$. In addition, as $x_2 \to a$, we have $|h(x_1) - L| \le \varepsilon$.
On the other hand, if we take $x_2 \to a$ first, we have $|f(x_1,y) - g(y)| \le \varepsilon$ for all $0 < |x_1 - a| < \delta$ and $y$ near $b$.
By the existence of the pointwise limit, for any $\varepsilon > 0$ and $x$ near $a$, there exists $\eta(\varepsilon, x) > 0$ such that $0 < |y - b| < \eta$ implies $|f(x,y) - h(x)| < \varepsilon$.
Then for that fixed $x$ (chosen with $0 < |x - a| < \delta$), $0 < |y - b| < \eta$ implies $|g(y) - L| \le |g(y) - f(x,y)| + |f(x,y) - h(x)| + |h(x) - L| \le 3\varepsilon$.
This proves that $\lim_{y\to b} g(y) = L$.
Also, since $|f(x,y) - L| \le |f(x,y) - g(y)| + |g(y) - L| \le 2\varepsilon$ for all $x$ within $\delta$ of $a$ and all $y$ close enough to $b$, we see that this limit also equals $\lim_{\substack{x\to a \\ y\to b}} f(x,y)$.

Note that this theorem does not imply the existence of $\lim_{(x,y)\to(a,b)} f(x,y)$. A counter-example is $f(x,y) = \begin{cases} 0 & \text{if } xy = 0 \\ 1 & \text{if } xy \neq 0 \end{cases}$ near (0,0). [10]

Interchanging limits of sequences of functions

An important variation of Moore-Osgood theorem is specifically for sequences of functions.

Theorem 7. If $\lim_{n\to\infty} f_n(x) = f(x)$ uniformly (in x) on $X$, and $\lim_{x\to a} f_n(x) = L_n$ for each large n, then both $\lim_{x\to a} f(x)$ and $\lim_{n\to\infty} L_n$ exist and are equal, i.e.,
$\lim_{x\to a}\lim_{n\to\infty} f_n(x) = \lim_{n\to\infty}\lim_{x\to a} f_n(x)$. [11]
The a here can possibly be infinity.
Proof. By the uniform convergence, for any $\varepsilon > 0$ there exists $N(\varepsilon)$ such that for all $x \in X$, $n_1, n_2 > N(\varepsilon)$ implies $|f_{n_1}(x) - f_{n_2}(x)| < \varepsilon$.
As $x \to a$, we have $|L_{n_1} - L_{n_2}| \le \varepsilon$, which means that $(L_n)$ is a Cauchy sequence which converges to a limit $L$. In addition, as $n_2 \to \infty$, we have $|L_{n_1} - L| \le \varepsilon$.
On the other hand, if we take $n_2 \to \infty$ first, we have $|f_{n_1}(x) - f(x)| \le \varepsilon$ for all $n_1 > N(\varepsilon)$ and all $x \in X$.
By the existence of the pointwise limit, for any $\varepsilon > 0$ and $n > N(\varepsilon)$, there exists $\delta(\varepsilon, n) > 0$ such that $0 < |x - a| < \delta$ implies $|f_n(x) - L_n| < \varepsilon$.
Then for that fixed $n$, $0 < |x - a| < \delta$ implies $|f(x) - L| \le |f(x) - f_n(x)| + |f_n(x) - L_n| + |L_n - L| \le 3\varepsilon$.
This proves that $\lim_{x\to a} f(x) = L$.

A corollary is the continuity theorem for uniform convergence as follows:

Corollary 7.1. If $f_n \to f$ uniformly (in x) on $X$, and each $f_n$ is continuous at $a \in X$, then $f$ is also continuous at $a$.
In other words, the uniform limit of continuous functions is continuous.
Proof. By Theorem 7, $\lim_{x\to a} f(x) = \lim_{x\to a}\lim_{n\to\infty} f_n(x) = \lim_{n\to\infty}\lim_{x\to a} f_n(x) = \lim_{n\to\infty} f_n(a) = f(a)$.
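Corollary 7.1 can be illustrated numerically. The example below is our own (not from the source): $f_n(x) = \sqrt{x^2 + 1/n}$ is smooth, converges uniformly to $|x|$ with error at most $1/\sqrt{n}$, and the limit $|x|$ is indeed continuous at 0:

```python
# Illustrative check (our own example): f_n(x) = sqrt(x**2 + 1/n) converges
# uniformly to |x| on R, so the limit function is continuous, as the
# corollary predicts. The worst-case error is 1/sqrt(n), attained at x = 0.
import math

def f_n(n, x):
    return math.sqrt(x * x + 1.0 / n)

grid = [k / 100 - 5 for k in range(1001)]           # sample points in [-5, 5]
worst = max(abs(f_n(10**6, x) - abs(x)) for x in grid)

print(worst)  # bounded by 1/sqrt(10**6) = 1e-3
```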

Another corollary is about the interchangeability of limit and infinite sum.

Corollary 7.2. If $\sum_{k=1}^\infty g_k(x)$ converges uniformly (in x) on $X$, and $\lim_{x\to a} g_k(x)$ exists for each k, then $\lim_{x\to a}\sum_{k=1}^\infty g_k(x) = \sum_{k=1}^\infty \lim_{x\to a} g_k(x)$.
Proof. Direct application of Theorem 7 on the partial sums $f_n(x) = \sum_{k=1}^n g_k(x)$ near $x = a$.

Applications

Sum of infinite entries in a matrix

Consider a matrix of infinite entries with 1 on the diagonal and -1 just above it,

$\begin{bmatrix} 1 & -1 & 0 & \cdots \\ 0 & 1 & -1 & \cdots \\ 0 & 0 & 1 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}$.

Suppose we would like to find the sum of all entries. If we sum it column by column first, we will find that the first column gives 1, while all the others give 0. Hence the sum over all columns is 1. However, if we sum it row by row first, we will find that all rows give 0. Hence the sum over all rows is 0.

The explanation for this paradox is that the vertical sum to infinity and the horizontal sum to infinity are two limiting processes that cannot be interchanged. Let $S_{n,m}$ be the sum of entries up to the entry (n, m). Then we have $\lim_{m\to\infty}\lim_{n\to\infty} S_{n,m} = 1$, but $\lim_{n\to\infty}\lim_{m\to\infty} S_{n,m} = 0$. In this case, the double limit $\lim_{n,m\to\infty} S_{n,m}$ does not exist, and thus this problem is not well-defined.
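A finite truncation makes the discrepancy concrete. The sketch below (our own illustration, assuming the standard matrix with 1 on the diagonal and -1 on the superdiagonal) sums each row and each column over a much longer range than the number of rows and columns kept:

```python
# Numerical illustration (not from the source): summing columns first
# gives 1, while summing rows first gives 0, for the infinite matrix
# with 1 on the diagonal and -1 on the superdiagonal.

def entry(n, m):
    # row n, column m (1-indexed)
    if m == n:
        return 1
    if m == n + 1:
        return -1
    return 0

N = 200  # truncation: each line is summed far past the kept index range
col_sums = [sum(entry(n, m) for n in range(1, 2 * N)) for m in range(1, N)]
row_sums = [sum(entry(n, m) for m in range(1, 2 * N)) for n in range(1, N)]

print(sum(col_sums))  # column-by-column total: 1
print(sum(row_sums))  # row-by-row total: 0
```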

Integration over unbounded interval

By the integration theorem for uniform convergence, once we have $f_n \to f$ converging uniformly on a bounded interval $[a, b]$, the limit in n and the integration over the interval can be interchanged:

$\lim_{n\to\infty}\int_a^b f_n(x)\,dx = \int_a^b \lim_{n\to\infty} f_n(x)\,dx$.

However, such a property may fail for an improper integral over an unbounded interval $[a, \infty)$. In this case, one may rely on the Moore-Osgood theorem.

Consider $\int_0^\infty \dfrac{x}{e^x - 1}\,dx$ as an example.

We first expand the integrand as

$\dfrac{x}{e^x - 1} = \sum_{k=1}^\infty x e^{-kx}$ for $x > 0$. (Here x = 0 is a limiting case.)

One can prove by calculus that for $0 < a \le x \le b$ and $k \ge 1$, we have $x e^{-kx} \le b e^{-ka}$. By the Weierstrass M-test, $\sum_{k=1}^\infty x e^{-kx}$ converges uniformly on $[a, b]$.

Then by the integration theorem for uniform convergence, $\displaystyle\int_a^b \frac{x}{e^x - 1}\,dx = \sum_{k=1}^\infty \int_a^b x e^{-kx}\,dx$.

To further interchange the limits $a \to 0^+$ and $b \to \infty$ with the infinite summation $\sum_{k=1}^\infty$, the Moore-Osgood theorem requires the infinite series to be uniformly convergent.

Note that $0 \le \int_a^b x e^{-kx}\,dx \le \int_0^\infty x e^{-kx}\,dx = \dfrac{1}{k^2}$. Again, by the Weierstrass M-test, $\sum_{k=1}^\infty \int_a^b x e^{-kx}\,dx$ converges uniformly (in a and b).

Then by the Moore-Osgood theorem, $\displaystyle\int_0^\infty \frac{x}{e^x - 1}\,dx = \lim_{\substack{a\to 0^+ \\ b\to\infty}} \sum_{k=1}^\infty \int_a^b x e^{-kx}\,dx = \sum_{k=1}^\infty \frac{1}{k^2} = \zeta(2) = \frac{\pi^2}{6}$. (Here $\zeta$ is the Riemann zeta function.)
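A numerical sanity check (our own computation, not from the source) compares a partial sum of $\sum 1/k^2$ with a midpoint-rule value of the integral; both should approach $\pi^2/6 \approx 1.6449$:

```python
# Illustrative numerical check: a partial sum of 1/k^2 and a midpoint-rule
# value of the integral of x/(e^x - 1) over [0, 40] both approach pi^2/6.
# The integrand decays like x * e^{-x}, so the truncated tail is negligible.
import math

target = math.pi ** 2 / 6

# Partial sum of the series zeta(2) = sum of 1/k^2
zeta2_partial = sum(1.0 / (k * k) for k in range(1, 100_000))

# Midpoint rule on [0, 40]; math.expm1 keeps the integrand well-behaved
# near x = 0, where x / (e^x - 1) -> 1.
n_steps = 200_000
h = 40.0 / n_steps
integral = 0.0
for i in range(n_steps):
    x = h * (i + 0.5)
    integral += x / math.expm1(x) * h

print(zeta2_partial, integral, target)
```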

Notes

  1. One should pay attention to the fact that $\lim_{x\to 0} f(x,y) = 0$ holds only for $y \neq 0$. But this is a minor problem since we will soon take the limit $y \to 0$.
  2. One should pay attention to the fact that $\lim_{n\to\infty} x^n = 0$ holds only for $x < 1$. But this is a minor problem since we will soon take the limit $x \to 1^-$.
  3. Zakon, Elias (2011). "Chapter 4. Function Limits and Continuity". Mathematical Analysis, Volume I. p. 223. ISBN 9781617386473.
  4. Habil, Eissa (2005). "Double Sequences and Double Series" . Retrieved 2022-10-28.
  5. Apostol, Tom M. (2002). "Infinite Series and Infinite Products". Mathematical Analysis (2nd ed.). Narosa. pp. 199–200. ISBN   978-8185015668.
  6. Stewart, James (2020). "Chapter 14.2 Limits and Continuity". Multivariable Calculus (9th ed.). pp. 952–953. ISBN   9780357042922.
  7. Zakon, Elias (2011). "Chapter 4. Function Limits and Continuity". Mathematical Analysis, Volume I. pp. 219–220. ISBN 9781617386473.
  8. Taylor, Angus E. (2012). General Theory of Functions and Integration. Dover Books on Mathematics Series. pp. 139–140. ISBN   9780486152141.
  9. Kadelburg, Zoran (2005). "Interchanging Two Limits" . Retrieved 2022-10-29.
  10. Gelbaum, Bernard; Olmsted, John (2003). "Chapter 9. Functions of Two Variables". Counterexamples in Analysis. pp. 118–119. ISBN 0486428753.
  11. Loring, Terry. "The Moore-Osgood Theorem on Exchanging Limits" (PDF). Retrieved 2022-10-28.
