Alternating series

In mathematics, an alternating series is an infinite series of the form

$$\sum_{n=0}^{\infty} (-1)^n a_n$$

or

$$\sum_{n=0}^{\infty} (-1)^{n+1} a_n$$

with $a_n > 0$ for all $n$. The signs of the general terms alternate between positive and negative. Like any series, an alternating series converges if and only if the associated sequence of partial sums converges.

Examples

The geometric series 1/2 − 1/4 + 1/8 − 1/16 + ⋯ sums to 1/3.
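
To verify the value: this is a geometric series with first term $\tfrac{1}{2}$ and common ratio $-\tfrac{1}{2}$, so the closed form for a convergent geometric sum gives

$$\frac{1}{2} - \frac{1}{4} + \frac{1}{8} - \frac{1}{16} + \cdots = \frac{1/2}{1 - (-1/2)} = \frac{1}{3}.$$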

The alternating harmonic series has a finite sum but the harmonic series does not.

The Mercator series provides an analytic expression of the natural logarithm:

$$\ln(1+x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} x^n.$$

The functions sine and cosine used in trigonometry can be defined as alternating series in calculus, even though they are introduced in elementary algebra as ratios of sides of a right triangle. In fact,

$$\sin x = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{(2n+1)!}$$

and

$$\cos x = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n}}{(2n)!}.$$

When the alternating factor $(-1)^n$ is removed from these series, one obtains the hyperbolic functions sinh and cosh used in calculus.
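
As a quick numerical check (a minimal Python sketch; the names sin_series and cos_series and the 20-term cutoff are choices made here for illustration), truncating these alternating series reproduces the library values of sine and cosine:

```python
import math

def sin_series(x: float, terms: int = 20) -> float:
    """Partial sum of the alternating series for sin(x)."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

def cos_series(x: float, terms: int = 20) -> float:
    """Partial sum of the alternating series for cos(x)."""
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

x = 1.2
print(sin_series(x), math.sin(x))  # both ≈ 0.93203908..., agreeing to machine precision
print(cos_series(x), math.cos(x))  # both ≈ 0.36235775...
```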

For an integer or positive index $\alpha$, the Bessel function of the first kind may be defined by the alternating series

$$J_\alpha(x) = \sum_{m=0}^{\infty} \frac{(-1)^m}{m! \, \Gamma(m+\alpha+1)} \left(\frac{x}{2}\right)^{2m+\alpha},$$

where $\Gamma(z)$ is the gamma function.
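
This definition can be summed directly in a few lines (again a sketch; bessel_j and the 30-term cutoff are illustrative choices, not a library API):

```python
import math

def bessel_j(alpha: int, x: float, terms: int = 30) -> float:
    """Partial sum of the alternating series defining J_alpha(x)."""
    return sum((-1) ** m / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (x / 2) ** (2 * m + alpha)
               for m in range(terms))

print(bessel_j(0, 1.0))  # ≈ 0.7651976866, the standard tabulated value of J_0(1)
```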

If $s$ is a complex number, the Dirichlet eta function is formed as the alternating series

$$\eta(s) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^s} = \frac{1}{1^s} - \frac{1}{2^s} + \frac{1}{3^s} - \frac{1}{4^s} + \cdots,$$

which is used in analytic number theory.
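
For real $s > 1$, the partial sums are easy to evaluate numerically (a sketch; the 10,000-term cutoff is arbitrary), for instance against the known value $\eta(2) = \pi^2/12$:

```python
import math

def eta(s: float, terms: int = 10_000) -> float:
    """Partial sum of the alternating series for the Dirichlet eta function."""
    return sum((-1) ** (n - 1) / n ** s for n in range(1, terms + 1))

print(eta(2.0))           # ≈ 0.8224670...
print(math.pi ** 2 / 12)  # eta(2) = pi^2 / 12 ≈ 0.8224670334
```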

Alternating series test

The theorem known as the "Leibniz test" or the alternating series test states that an alternating series converges if the terms $a_n$ converge to 0 monotonically.

Proof: Write $S_n = \sum_{k=0}^{n} (-1)^k a_k$ for the partial sums, and suppose the sequence $a_n$ converges to zero and is monotone decreasing. If $m$ is odd and $m < n$, we obtain the estimate $S_n - S_m \leq a_m$ via the following calculation:

$$\begin{aligned} S_n - S_m &= \sum_{k=0}^{n} (-1)^k a_k - \sum_{k=0}^{m} (-1)^k a_k = \sum_{k=m+1}^{n} (-1)^k a_k \\ &= a_{m+1} - a_{m+2} + a_{m+3} - a_{m+4} + \cdots + a_n \\ &= a_{m+1} - (a_{m+2} - a_{m+3}) - (a_{m+4} - a_{m+5}) - \cdots \leq a_{m+1} \leq a_m. \end{aligned}$$

Since $a_n$ is monotonically decreasing, each bracketed term $a_k - a_{k+1}$ is non-negative, so subtracting the brackets only decreases the sum; this gives the final inequality $S_n - S_m \leq a_m$. Similarly, it can be shown that $-a_m \leq S_n - S_m$. Since $a_m$ converges to 0, the partial sums $S_m$ form a Cauchy sequence (i.e., the series satisfies the Cauchy criterion) and therefore converge. The argument for even $m$ is similar.
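
The bracketing used in the proof is easy to observe numerically. A small sketch, using the alternating harmonic series (whose sum is ln 2) as a test case:

```python
import math

# Partial sums S_m of 1 - 1/2 + 1/3 - ... overshoot and undershoot
# the limit ln 2 in alternation, exactly as the proof's estimates predict.
partial_sums, s = [], 0.0
for n in range(1, 11):
    s += (-1) ** (n - 1) / n
    partial_sums.append(round(s, 4))

print(partial_sums)  # [1.0, 0.5, 0.8333, 0.5833, 0.7833, ...]
print(math.log(2))   # 0.6931..., always between consecutive partial sums
```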

Approximating sums

The estimate above does not depend on $n$. So, if $a_n$ approaches 0 monotonically, the estimate provides an error bound for approximating infinite sums by partial sums:

$$\left| \sum_{k=0}^{\infty} (-1)^k a_k - \sum_{k=0}^{m} (-1)^k a_k \right| \leq |a_{m+1}|.$$

That does not mean that this estimate always finds the very first partial sum whose error is less than the modulus of the next term of the series. Indeed, if one takes the alternating harmonic series $1 - \tfrac{1}{2} + \tfrac{1}{3} - \tfrac{1}{4} + \cdots = \ln 2$ and tries to find a partial sum whose error is at most 0.00005, the inequality above shows that the partial sum through $a_{20000}$ is enough, but in fact this is twice as many terms as needed: the error after summing the first 9999 terms is 0.0000500025, so the partial sum through $a_{10000}$ is already sufficient. This series has the property that constructing a new series with the terms $a_n - a_{n+1}$ also gives an alternating series to which the Leibniz test applies, and this property makes the simple error bound above non-optimal. The Calabrese bound, [1] published in 1962, shows that this property permits an error bound half the size of the Leibniz bound. This in turn is not optimal for series where the property applies two or more times, a case described by the Johnsonbaugh error bound. [2] If the property can be applied an infinite number of times, Euler's transform applies. [3]
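
The figures quoted above are straightforward to reproduce (a sketch; error_after is a name chosen here for illustration):

```python
import math

def error_after(m: int) -> float:
    """Absolute error of the m-term partial sum of 1 - 1/2 + 1/3 - ... = ln 2."""
    partial = sum((-1) ** (n - 1) / n for n in range(1, m + 1))
    return abs(math.log(2) - partial)

print(error_after(9999))   # ≈ 0.0000500025, just above the 0.00005 target
print(error_after(10000))  # ≈ 0.0000499975, so 10000 terms suffice
print(1 / 10001)           # ≈ 0.0000999900, the Leibniz bound: about twice the true error
```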

Absolute convergence

A series $\sum a_n$ converges absolutely if the series $\sum |a_n|$ converges.

Theorem: Absolutely convergent series are convergent.

Proof: Suppose $\sum a_n$ is absolutely convergent. Then, $\sum |a_n|$ is convergent, and it follows that $\sum 2|a_n|$ converges as well. Since $0 \leq a_n + |a_n| \leq 2|a_n|$, the series $\sum (a_n + |a_n|)$ converges by the comparison test. Therefore, the series $\sum a_n$ converges, as the difference of two convergent series: $\sum a_n = \sum (a_n + |a_n|) - \sum |a_n|$.
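
For instance, the alternating series $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^2}$ converges absolutely (and hence converges), since

$$\sum_{n=1}^{\infty} \left| \frac{(-1)^{n+1}}{n^2} \right| = \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$$

is finite.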

Conditional convergence

A series is conditionally convergent if it converges but does not converge absolutely.

For example, the harmonic series

$$\sum_{n=1}^{\infty} \frac{1}{n}$$

diverges, while the alternating version

$$\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}$$

converges by the alternating series test.
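
The contrast is easy to see numerically (a sketch; the 100,000-term cutoff is arbitrary):

```python
import math

N = 100_000
harmonic = sum(1 / n for n in range(1, N + 1))
alternating = sum((-1) ** (n - 1) / n for n in range(1, N + 1))

print(harmonic)     # ≈ 12.09, and still growing like ln N
print(alternating)  # ≈ 0.693147..., close to ln 2
print(math.log(2))  # 0.6931471805...
```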

Rearrangements

For any series, we can create a new series by rearranging the order of summation. A series is unconditionally convergent if every rearrangement converges to the same sum as the original series. Absolutely convergent series are unconditionally convergent. But the Riemann series theorem states that a conditionally convergent series can be rearranged so that the result converges to any chosen real number, or diverges. [4] The general principle is that addition of infinite sums is commutative only for absolutely convergent series.

For example, one false proof that 1 = 0 exploits the failure of associativity for infinite sums: grouping the terms of $1 - 1 + 1 - 1 + \cdots$ as $(1-1) + (1-1) + \cdots$ suggests the sum 0, while the grouping $1 + (-1+1) + (-1+1) + \cdots$ suggests the sum 1.

As another example, by the Mercator series

$$\ln 2 = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}.$$

But, since the series does not converge absolutely, we can rearrange the terms to obtain a series for $\tfrac{1}{2} \ln 2$:

$$\left(1 - \frac{1}{2}\right) - \frac{1}{4} + \left(\frac{1}{3} - \frac{1}{6}\right) - \frac{1}{8} + \left(\frac{1}{5} - \frac{1}{10}\right) - \frac{1}{12} + \cdots = \frac{1}{2} \ln 2.$$
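
Dropping the parentheses, this rearrangement takes one positive (odd-denominator) term followed by two negative (even-denominator) terms, and can be checked numerically (a sketch; 100,000 such blocks are used here):

```python
import math

total, odd, even = 0.0, 1, 2
for _ in range(100_000):
    total += 1 / odd   # one positive term: 1, 1/3, 1/5, ...
    odd += 2
    total -= 1 / even  # two negative terms: 1/2, 1/4, then 1/6, 1/8, ...
    even += 2
    total -= 1 / even
    even += 2

print(total)            # ≈ 0.3465735...
print(math.log(2) / 2)  # 0.3465735902..., half the sum of the original order
```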

Series acceleration

In practice, the numerical summation of an alternating series may be sped up using any one of a variety of series acceleration techniques. One of the oldest techniques is that of Euler summation, and there are many modern techniques that can offer even more rapid convergence.
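
For instance, one simple variant of Euler's transformation can be implemented by repeatedly averaging adjacent partial sums (a sketch; euler_accelerate is an illustrative name, and the Leibniz series for π/4 is just a convenient slowly converging test case):

```python
import math
from itertools import accumulate

def euler_accelerate(terms):
    """Accelerate an alternating series by repeatedly averaging
    adjacent partial sums, a simple realization of Euler's transformation."""
    s = list(accumulate(terms))  # raw partial sums
    while len(s) > 1:
        s = [(a + b) / 2 for a, b in zip(s, s[1:])]
    return s[0]

terms = [(-1) ** k / (2 * k + 1) for k in range(25)]  # 1 - 1/3 + 1/5 - ... = pi/4
print(4 * euler_accelerate(terms))  # ≈ 3.141592..., many digits beyond the raw sum
print(4 * sum(terms))               # ≈ 3.18, the crude unaccelerated 25-term estimate
print(math.pi)
```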

Notes

  1. Calabrese, Philip (March 1962). "A Note on Alternating Series". The American Mathematical Monthly. 69 (3): 215–217. doi:10.2307/2311056. JSTOR 2311056.
  2. Johnsonbaugh, Richard (October 1979). "Summing an Alternating Series". The American Mathematical Monthly. 86 (8): 637–648. doi:10.2307/2321292. JSTOR 2321292.
  3. Villarino, Mark B. (2015-11-27). "The error in an alternating series". arXiv:1511.08568 [math.CA].
  4. Mallik, A. K. (2007). "Curious Consequences of Simple Sequences". Resonance. 12 (1): 23–37. doi:10.1007/s12045-007-0004-7. S2CID 122327461.
