Root test

In mathematics, the root test is a criterion for the convergence (a convergence test) of an infinite series. It depends on the quantity

$\limsup_{n \to \infty} \sqrt[n]{|a_n|},$

where $a_n$ are the terms of the series, and states that the series converges absolutely if this quantity is less than one, but diverges if it is greater than one. It is particularly useful in connection with power series.

Root test explanation

(Figure: decision diagram for the root test.)

The root test was developed first by Augustin-Louis Cauchy, who published it in his textbook Cours d'analyse (1821).[1] Thus, it is sometimes known as the Cauchy root test or Cauchy's radical test. For a series

$\sum_{n=1}^{\infty} a_n,$

the root test uses the number

$C = \limsup_{n \to \infty} \sqrt[n]{|a_n|},$

where "lim sup" denotes the limit superior, possibly +∞. Note that if

$\lim_{n \to \infty} \sqrt[n]{|a_n|}$

converges, then it equals C and may be used in the root test instead.

The root test states that:

  - if C < 1 then the series converges absolutely,
  - if C > 1 then the series diverges,
  - if C = 1 and the limit approaches strictly from above then the series diverges,
  - otherwise the test is inconclusive (the series may diverge, converge absolutely or converge conditionally).

There are some series for which C = 1 and the series converges, e.g. $\sum 1/n^2$, and there are others for which C = 1 and the series diverges, e.g. $\sum 1/n$.
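
The behaviour of this quantity can be sketched numerically. The snippet below (the helper root_test_estimate and the sample term sequences are illustrative choices, not part of the test) evaluates $\sqrt[n]{|a_n|}$ at a few indices for a geometric series and for the two borderline series mentioned above:

```python
def root_test_estimate(a, n):
    """Return |a(n)|**(1/n), the quantity whose lim sup the root test examines."""
    return abs(a(n)) ** (1.0 / n)

# Illustrative term sequences (assumed examples):
geometric = lambda n: (1 / 3) ** n   # C = 1/3 < 1: converges absolutely
p_series = lambda n: 1 / n**2        # C = 1: test inconclusive (series converges)
harmonic = lambda n: 1 / n           # C = 1: test inconclusive (series diverges)

for name, a in [("(1/3)^n", geometric), ("1/n^2", p_series), ("1/n", harmonic)]:
    print(name, [round(root_test_estimate(a, n), 4) for n in (10, 100, 500)])
```

For the geometric terms the value stays at 1/3, well below one, while both borderline series drift toward 1, which is exactly the inconclusive case.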

Application to power series

This test can be used with a power series

$f(z) = \sum_{n=0}^{\infty} c_n (z - p)^n,$

where the coefficients $c_n$ and the center $p$ are complex numbers and the argument $z$ is a complex variable.

The terms of this series would then be given by $a_n = c_n (z - p)^n$. One then applies the root test to the $a_n$ as above. Note that sometimes a series like this is called a power series "around p", because the radius of convergence is the radius R of the largest interval or disc centred at p such that the series will converge for all points z strictly in the interior (convergence on the boundary of the interval or disc generally has to be checked separately).

A corollary of the root test applied to a power series is the Cauchy–Hadamard theorem: the radius of convergence is exactly

$R = \frac{1}{\limsup_{n \to \infty} \sqrt[n]{|c_n|}},$

taking care that we really mean ∞ if the denominator is 0.
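
As a rough numerical illustration of this formula (a sketch only: a single large index n stands in for the lim sup, and the coefficient sequences are assumed examples), one can estimate R directly from the coefficients:

```python
import math

def radius_estimate(c, n):
    """Heuristic estimate of R = 1 / limsup |c_n|^(1/n), using one large index n
    as a stand-in for the lim sup."""
    root = abs(c(n)) ** (1.0 / n)
    return math.inf if root == 0 else 1.0 / root

# Assumed example coefficient sequences:
geometric_coeffs = lambda n: 1.0                 # sum z^n has R = 1
exp_coeffs = lambda n: 1.0 / math.factorial(n)   # sum z^n / n! has R = infinity

print(radius_estimate(geometric_coeffs, 50))   # exactly 1.0
print(radius_estimate(exp_coeffs, 50))         # ~19.5 here, and it keeps growing with n
```

With constant coefficients the estimate is exactly 1, while for $c_n = 1/n!$ it grows without bound as n increases, consistent with an infinite radius of convergence.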

Proof

The proof of the convergence of a series $\sum a_n$ is an application of the comparison test.

If for all $n \ge N$ ($N$ some fixed natural number) we have $\sqrt[n]{|a_n|} \le k < 1$, then $|a_n| \le k^n < 1$. Since the geometric series $\sum_{n=N}^{\infty} k^n$ converges, so does $\sum_{n=N}^{\infty} |a_n|$ by the comparison test. Hence $\sum_{n=1}^{\infty} a_n$ converges absolutely.

If $\sqrt[n]{|a_n|} > 1$ for infinitely many $n$, then $a_n$ fails to converge to 0, hence the series is divergent.
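
As a concrete check of the comparison step (using the assumed example $a_n = n/2^n$, which is not taken from the article): here $C = 1/2$, so any $k$ with $1/2 < k < 1$, say $k = 0.6$, eventually dominates the n-th roots, and the terms are bounded by the convergent geometric series $\sum k^n$.

```python
# Hypothetical illustration of the comparison step (a_n = n / 2**n is an assumed
# example): the root test gives C = 1/2, so for k = 0.6 < 1 the bound
# |a_n|**(1/n) <= k, and hence |a_n| <= k**n, holds for all sufficiently large n.
k = 0.6
N = next(n for n in range(1, 100) if (n / 2**n) ** (1 / n) <= k)
print("the bound holds from N =", N)       # first index with |a_N|^(1/N) <= k
for n in (N, N + 10, N + 20):
    print(n, n / 2**n <= k**n)             # True: terms dominated by the geometric k^n
```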

Proof of corollary: For a power series $\sum a_n = \sum c_n(z - p)^n$, we see by the above that the series converges if there exists an $N$ such that for all $n \ge N$ we have

$\sqrt[n]{|a_n|} = \sqrt[n]{|c_n (z - p)^n|} < 1,$

equivalent to

$\sqrt[n]{|c_n|} \cdot |z - p| < 1$

for all $n \ge N$, which implies that in order for the series to converge we must have $|z - p| < 1 / \sqrt[n]{|c_n|}$ for all sufficiently large $n$. This is equivalent to saying

$|z - p| < \frac{1}{\limsup_{n \to \infty} \sqrt[n]{|c_n|}},$

so $R \le 1 / \limsup_{n \to \infty} \sqrt[n]{|c_n|}$. Now the only other place where convergence is possible is when

$\sqrt[n]{|a_n|} = \sqrt[n]{|c_n (z - p)^n|} = 1$

(since points > 1 will diverge), and this will not change the radius of convergence, since these are just the points lying on the boundary of the interval or disc, so

$R = \frac{1}{\limsup_{n \to \infty} \sqrt[n]{|c_n|}}.$

Examples

Example 1: $\sum_{n=1}^{\infty} \frac{2^n}{n^9}$

Applying the root test and using the fact that $\lim_{n \to \infty} n^{1/n} = 1$,

$C = \lim_{n \to \infty} \sqrt[n]{\frac{2^n}{n^9}} = \lim_{n \to \infty} \frac{2}{\left(n^{1/n}\right)^9} = 2.$

Since $C = 2 > 1$, the series diverges.[2]
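
A brief numerical check of this limit (illustrative only; logarithms are used to avoid floating-point overflow for large n):

```python
import math
# Numerical check that (2^n / n^9)^(1/n) = 2 * n^(-9/n) tends to 2 (illustrative
# only; working with logarithms avoids computing the huge numerator directly).
for n in (10, 100, 1000, 10000):
    print(n, math.exp((n * math.log(2) - 9 * math.log(n)) / n))
```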

Example 2: $1 + 1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \cdots = \sum_{n=1}^{\infty} a_n$, where $a_n = 2^{-\lfloor (n-1)/2 \rfloor}$

The root test shows convergence because

$C = \limsup_{n \to \infty} \sqrt[n]{|a_n|} = \lim_{n \to \infty} 2^{-\lfloor (n-1)/2 \rfloor / n} = 2^{-1/2} = \frac{1}{\sqrt{2}} < 1.$

This example shows how the root test is stronger than the ratio test. The ratio test is inconclusive for this series, because $\frac{a_{n+1}}{a_n} = 1$ when $n$ is odd (though not when $n$ is even, where $\frac{a_{n+1}}{a_n} = \frac{1}{2}$), so the limit of the ratios does not exist and $\limsup_{n \to \infty} \frac{a_{n+1}}{a_n} = 1$.
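
The contrast can be seen numerically (an illustrative sketch for the series above): the n-th roots settle toward $1/\sqrt{2} \approx 0.707$, while consecutive ratios keep alternating between 1 and 1/2.

```python
# Illustrative comparison for the series 1 + 1 + 1/2 + 1/2 + 1/4 + 1/4 + ...,
# with a_n = 2**(-((n - 1) // 2)) and n starting at 1.
a = lambda n: 2.0 ** (-((n - 1) // 2))
print([round(a(n) ** (1 / n), 3) for n in (10, 50, 200)])   # n-th roots: toward 1/sqrt(2) ~ 0.707
print([a(n + 1) / a(n) for n in (10, 11, 12, 13)])          # ratios: 0.5, 1.0, 0.5, 1.0
```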

Root tests hierarchy

The root tests hierarchy[3][4] is built similarly to the ratio tests hierarchy (see Section 4.1 of the ratio test article, and more specifically Subsection 4.1.4 there).

For a series $\sum_{n=1}^{\infty} a_n$ with positive terms we have the following tests for convergence/divergence.

Let $K \ge 1$ be an integer, and let $\ln_{(K)}(x)$ denote the $K$th iterate of the natural logarithm, i.e. $\ln_{(1)}(x) = \ln(x)$ and, for any $2 \le k \le K$, $\ln_{(k)}(x) = \ln_{(k-1)}(\ln(x))$.

Suppose that $\sqrt[n]{a_n}$, when $n$ is large, can be presented in the form

$\sqrt[n]{a_n} = 1 - \frac{\ln n}{n} - \frac{\ln_{(2)} n}{n} - \cdots - \frac{\ln_{(K-1)} n}{n} - \frac{\rho_n \ln_{(K)} n}{n}.$

(The empty sum is assumed to be 0; with $K = 1$ the expression reduces to $\sqrt[n]{a_n} = 1 - \frac{\rho_n \ln n}{n}$.) Then the series converges if $\liminf_{n \to \infty} \rho_n > 1$, diverges if $\limsup_{n \to \infty} \rho_n < 1$, and otherwise the test is inconclusive.
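
A minimal numerical sketch of how such a test can be applied, assuming the form above and simply solving it for $\rho_n$ at a single large index (the helper names iter_ln and rho, and the sample series, are illustrative):

```python
import math

def iter_ln(x, k):
    """k-th iterate of the natural logarithm: ln_(1)(x) = ln x, ln_(k)(x) = ln(ln_(k-1)(x))."""
    for _ in range(k):
        x = math.log(x)
    return x

def rho(a_n, n, K):
    """Solve the assumed form of the n-th root for rho_n at level K, at one large index n."""
    s = sum(iter_ln(n, k) for k in range(1, K))   # ln n + ... + ln_(K-1) n (empty sum = 0)
    return (n * (1.0 - a_n ** (1.0 / n)) - s) / iter_ln(n, K)

n = 10**6
print(rho(1.0 / n**2, n, K=1))                    # ~2 > 1: sum 1/n^2 converges
print(rho(1.0 / (n * math.log(n) ** 2), n, K=2))  # ~2 > 1: sum 1/(n ln^2 n) converges
print(rho(1.0 / (n * math.log(n)), n, K=2))       # ~1: borderline, inconclusive
```

For $\sum 1/n^2$ the level $K = 1$ test already gives $\rho_n \approx 2 > 1$; for $\sum 1/(n \ln^2 n)$ one has to move to level $K = 2$, where again $\rho_n \approx 2$, while $\sum 1/(n \ln n)$ sits at the borderline value 1.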

Proof

Since $\sqrt[n]{a_n} = \mathrm{e}^{\frac{1}{n} \ln a_n}$, we have

$\mathrm{e}^{\frac{1}{n} \ln a_n} = 1 - \frac{\ln n}{n} - \frac{\ln_{(2)} n}{n} - \cdots - \frac{\ln_{(K-1)} n}{n} - \frac{\rho_n \ln_{(K)} n}{n}.$

From this,

$\ln a_n = n \ln\left(1 - \frac{\ln n}{n} - \frac{\ln_{(2)} n}{n} - \cdots - \frac{\ln_{(K-1)} n}{n} - \frac{\rho_n \ln_{(K)} n}{n}\right).$

From Taylor's expansion applied to the right-hand side, we obtain:

$\ln a_n = -\ln n - \ln_{(2)} n - \cdots - \ln_{(K-1)} n - \rho_n \ln_{(K)} n + O\!\left(\frac{\ln^2 n}{n}\right).$

Hence,

$a_n = \mathrm{e}^{O(\ln^2 n / n)} \, \frac{1}{n \ln n \cdots \ln_{(K-2)} n \left(\ln_{(K-1)} n\right)^{\rho_n}}.$

(The empty product is set to 1.)

The final result follows from the integral test for convergence.
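
As an illustrative check of that last step (the two series are assumed examples on the logarithmic scale the test is built around), the partial sums of $1/(n \ln^2 n)$ level off while those of $1/(n \ln n)$ keep growing:

```python
import math
# Illustrative check of the final step (integral test): partial sums of 1/(n ln^2 n)
# level off, while partial sums of 1/(n ln n) keep growing like ln(ln n).
for label, f in [("1/(n ln^2 n)", lambda n: 1 / (n * math.log(n) ** 2)),
                 ("1/(n ln n)", lambda n: 1 / (n * math.log(n)))]:
    s, checkpoints = 0.0, []
    for n in range(2, 10**6 + 1):
        s += f(n)
        if n in (10**3, 10**4, 10**5, 10**6):
            checkpoints.append(round(s, 3))
    print(label, checkpoints)
```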

See also

  - Ratio test

References

  1. Bottazzini, Umberto (1986). The Higher Calculus: A History of Real and Complex Analysis from Euler to Weierstrass. Springer-Verlag. pp. 116–117. ISBN 978-0-387-96302-0. Translated from the Italian by Warren Van Egmond.
  2. Briggs, William; Cochrane, Lyle (2011). Calculus: Early Transcendentals. Addison Wesley. p. 571.
  3. Abramov, Vyacheslav M. (2022). "Necessary and sufficient conditions for the convergence of positive series". Journal of Classical Analysis. 19 (2): 117–125. arXiv:2104.01702. doi:10.7153/jca-2022-19-09.
  4. Bourchtein, Ludmila; Bourchtein, Andrei; Nornberg, Gabrielle; Venzke, Cristiane (2012). "A hierarchy of convergence tests related to Cauchy's test". International Journal of Mathematical Analysis. 6 (37–40): 1847–1869.

This article incorporates material from Proof of Cauchy's root test on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.