Limit comparison test

In mathematics, the limit comparison test (LCT) (in contrast with the related direct comparison test) is a method of testing for the convergence of an infinite series.

Statement

Suppose that we have two series $\sum_n a_n$ and $\sum_n b_n$ with $a_n \geq 0$ and $b_n > 0$ for all $n$. If $\lim_{n \to \infty} \frac{a_n}{b_n} = c$ with $0 < c < \infty$, then either both series converge or both series diverge. [1]
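
The statement can be illustrated numerically (this is only a sketch, not a proof, and the terms $a_n = (3n+1)/n^3$ and $b_n = 1/n^2$ are chosen here purely for illustration): the ratio $a_n / b_n$ settles near a finite positive constant, so the two series stand or fall together.

    # Numerical sketch of the limit comparison test; a_n and b_n are
    # illustrative terms, not taken from the article.
    def a(n):
        return (3 * n + 1) / n**3

    def b(n):
        return 1 / n**2

    # The ratio a_n / b_n approaches c = 3, a finite positive constant,
    # so sum a_n converges because sum 1/n^2 does.
    for n in (10, 100, 1_000, 10_000, 100_000):
        print(n, a(n) / b(n))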

Proof

Because $\lim_{n \to \infty} \frac{a_n}{b_n} = c$, we know that for every $\varepsilon > 0$ there is a positive integer $n_0$ such that for all $n \geq n_0$ we have $\left| \frac{a_n}{b_n} - c \right| < \varepsilon$, or equivalently

$$-\varepsilon < \frac{a_n}{b_n} - c < \varepsilon$$

$$c - \varepsilon < \frac{a_n}{b_n} < c + \varepsilon$$

$$(c - \varepsilon) b_n < a_n < (c + \varepsilon) b_n$$

As $c > 0$, we can choose $\varepsilon$ sufficiently small such that $c - \varepsilon$ is positive. So $b_n < \frac{1}{c - \varepsilon} a_n$, and by the direct comparison test, if $\sum_n a_n$ converges then so does $\sum_n b_n$.

Similarly $a_n < (c + \varepsilon) b_n$, so if $\sum_n a_n$ diverges, again by the direct comparison test, so does $\sum_n b_n$.

That is, both series converge or both series diverge.
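
The sandwich inequality $(c - \varepsilon) b_n < a_n < (c + \varepsilon) b_n$ used above can be spot-checked numerically; the sketch below reuses the illustrative terms from the previous sketch (an assumption of this example, with $c = 3$).

    # Spot-check of (c - eps) * b_n < a_n < (c + eps) * b_n for the
    # illustrative terms a_n = (3n + 1)/n^3, b_n = 1/n^2, where c = 3.
    def a(n):
        return (3 * n + 1) / n**3

    def b(n):
        return 1 / n**2

    c, eps = 3.0, 0.1

    # Smallest index n0 with |a_n/b_n - c| < eps, then verify the bounds
    # for the next thousand indices.
    n0 = next(n for n in range(1, 10_000) if abs(a(n) / b(n) - c) < eps)
    assert all((c - eps) * b(n) < a(n) < (c + eps) * b(n)
               for n in range(n0, n0 + 1_000))
    print("inequality holds from n =", n0, "onwards")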

Example

We want to determine whether the series $\sum_{n=1}^{\infty} \frac{1}{n^2 + 2n}$ converges. For this we compare it with the convergent series $\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$.

As $\lim_{n \to \infty} \frac{1}{n^2 + 2n} \cdot \frac{n^2}{1} = 1 > 0$, we have that the original series also converges.
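
A quick numerical check of this example (using the terms as reconstructed above): the ratio of the terms tends to 1, and the partial sums of $\sum 1/(n^2 + 2n)$ level off, consistent with convergence.

    # Check for the example: a_n = 1/(n^2 + 2n) compared with b_n = 1/n^2.
    def a(n):
        return 1 / (n**2 + 2 * n)

    def b(n):
        return 1 / n**2

    for n in (10, 100, 1_000, 10_000):
        print("ratio at n =", n, ":", a(n) / b(n))  # tends to 1

    # Partial sums stay bounded; in fact the series telescopes to 3/4.
    print("partial sum up to 10^5:", sum(a(n) for n in range(1, 100_001)))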

One-sided version

One can state a one-sided comparison test by using the limit superior. Let $a_n, b_n \geq 0$ for all $n$. Then if $\limsup_{n \to \infty} \frac{a_n}{b_n} = c$ with $0 \leq c < \infty$ and $\sum_n b_n$ converges, then necessarily $\sum_n a_n$ converges.

Example

Let $a_n = \frac{1 - (-1)^n}{n^2}$ and $b_n = \frac{1}{n^2}$ for all natural numbers $n$. Now $\lim_{n \to \infty} \frac{a_n}{b_n} = \lim_{n \to \infty} \left( 1 - (-1)^n \right)$ does not exist, so we cannot apply the standard comparison test. However, $\limsup_{n \to \infty} \frac{a_n}{b_n} = \limsup_{n \to \infty} \left( 1 - (-1)^n \right) = 2 \in [0, \infty)$, and since $\sum_{n=1}^{\infty} \frac{1}{n^2}$ converges, the one-sided comparison test implies that $\sum_{n=1}^{\infty} \frac{1 - (-1)^n}{n^2}$ converges.
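
Numerically (with the terms as reconstructed above), the ratio $a_n / b_n = 1 - (-1)^n$ oscillates between 0 and 2, so it has no limit but its limit superior is 2, while the partial sums of $\sum a_n$ still settle down.

    # One-sided test illustration: a_n = (1 - (-1)^n)/n^2, b_n = 1/n^2.
    def a(n):
        return (1 - (-1)**n) / n**2

    def b(n):
        return 1 / n**2

    print([a(n) / b(n) for n in range(1, 11)])  # 2, 0, 2, 0, ... so limsup = 2

    # Partial sums converge (to pi^2/4, the sum of 2/n^2 over odd n).
    print(sum(a(n) for n in range(1, 100_001)))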

Converse of the one-sided comparison test

Let $a_n, b_n \geq 0$ for all $n$. If $\sum_n a_n$ diverges and $\sum_n b_n$ converges, then necessarily $\limsup_{n \to \infty} \frac{a_n}{b_n} = \infty$, that is, $\liminf_{n \to \infty} \frac{b_n}{a_n} = 0$. The essential content here is that in some sense the numbers $a_n$ are larger than the numbers $b_n$.
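
A concrete numerical instance of the converse (with sequences chosen here for illustration): take $a_n = 1/n$, whose series diverges, and $b_n = 1/n^2$, whose series converges; then $b_n / a_n = 1/n$ visibly drifts to 0, so $\liminf_{n \to \infty} b_n / a_n = 0$ as the statement requires.

    # Converse illustration: a_n = 1/n (divergent series), b_n = 1/n^2 (convergent).
    def a(n):
        return 1 / n

    def b(n):
        return 1 / n**2

    for n in (10, 100, 1_000, 10_000):
        # b_n / a_n = 1/n -> 0, equivalently a_n / b_n = n -> infinity.
        print(n, b(n) / a(n))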

Example

Let $f(z) = \sum_{n=0}^{\infty} a_n z^n$ be analytic in the unit disc $D = \{ z \in \mathbb{C} : |z| < 1 \}$ and have an image of finite area. By Parseval's formula the area of the image of $f$ is proportional to $\sum_{n=1}^{\infty} n |a_n|^2$. Moreover, $\sum_{n=1}^{\infty} \frac{1}{n}$ diverges. Therefore, by the converse of the comparison test, we have $\liminf_{n \to \infty} \frac{n |a_n|^2}{1/n} = \liminf_{n \to \infty} (n |a_n|)^2 = 0$, that is, $\liminf_{n \to \infty} n |a_n| = 0$.
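
The conclusion can be sketched numerically for a hypothetical coefficient sequence (not taken from the article): with $a_n = n^{-1.1}$, the sum $\sum n |a_n|^2 = \sum n^{-1.2}$ stays bounded, and the quantity $n |a_n| = n^{-0.1}$ drifts down towards 0, consistent with $\liminf_{n \to \infty} n |a_n| = 0$.

    # Hypothetical Taylor coefficients a_n = n**(-1.1), for which
    # sum n*|a_n|^2 = sum n**(-1.2) is finite (a finite-area-like condition).
    def coeff(n):
        return n ** (-1.1)

    print("sum n*|a_n|^2 up to 10^5:",
          sum(n * coeff(n) ** 2 for n in range(1, 100_001)))  # stays bounded

    for n in (10, 1_000, 100_000, 10_000_000):
        print("n*|a_n| at n =", n, ":", n * coeff(n))  # n**(-0.1), slowly -> 0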

See also

  Direct comparison test

References

  1. Swokowski, Earl (1983), Calculus with Analytic Geometry (Alternate ed.), Prindle, Weber & Schmidt, p. 516, ISBN 0-87150-341-7.
