In mathematics, the harmonic series is the divergent infinite series

1 + 1/2 + 1/3 + 1/4 + 1/5 + ⋯

that is, the sum of the reciprocals of the positive integers.
Its name derives from the concept of overtones, or harmonics in music: the wavelengths of the overtones of a vibrating string are 1/2, 1/3, 1/4, etc., of the string's fundamental wavelength. Every term of the series after the first is the harmonic mean of the neighboring terms; the phrase harmonic mean likewise derives from music.
The divergence of the harmonic series was first proven in the 14th century by Nicole Oresme, but this achievement fell into obscurity. Proofs were given in the 17th century by Pietro Mengoli and by Johann Bernoulli, the latter proof published and popularized by his brother Jacob Bernoulli.
Historically, harmonic sequences have had a certain popularity with architects. This was so particularly in the Baroque period, when architects used them to establish the proportions of floor plans, of elevations, and to establish harmonic relationships between both interior and exterior architectural details of churches and palaces.
There are several well-known proofs of the divergence of the harmonic series. A few of them are given below.
One way to prove divergence is to compare the harmonic series with another divergent series, where each denominator is replaced with the next-largest power of two:

1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 + ⋯ ≥ 1 + 1/2 + 1/4 + 1/4 + 1/8 + 1/8 + 1/8 + 1/8 + ⋯

Each term of the harmonic series is greater than or equal to the corresponding term of the second series, and therefore the sum of the harmonic series must be greater than or equal to the sum of the second series. However, grouping the equal terms shows that the sum of the second series is infinite:

1 + 1/2 + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + ⋯ = 1 + 1/2 + 1/2 + 1/2 + ⋯ = ∞

(Here, "= ∞" is merely a notational convention to indicate that the partial sums of the series grow without bound.)
It follows (by the comparison test) that the sum of the harmonic series must be infinite as well. More precisely, the comparison above proves that

1 + 1/2 + 1/3 + ⋯ + 1/2^k ≥ 1 + k/2

for every positive integer k.
This proof, proposed by Nicole Oresme around 1350, is considered by many in the mathematical community to be a high point of medieval mathematics. It is still a standard proof taught in mathematics classes today. Cauchy's condensation test is a generalization of this argument.
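Oresme's grouping bound can be spot-checked numerically. The sketch below (the helper name harmonic_partial is mine, not standard notation) uses exact rational arithmetic to verify H(2^k) ≥ 1 + k/2 for small k:

```python
from fractions import Fraction

def harmonic_partial(n):
    """Exact partial sum H_n = 1 + 1/2 + ... + 1/n as a fraction."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

# Oresme's grouping bound: H_(2^k) >= 1 + k/2 for every positive integer k.
for k in range(1, 8):
    assert harmonic_partial(2 ** k) >= 1 + Fraction(k, 2)
```

Exact fractions avoid any floating-point doubt in the inequality, at the cost of speed for large n.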
It is possible to prove that the harmonic series diverges by comparing its sum with an improper integral. Specifically, consider the arrangement of rectangles shown in the figure to the right. Each rectangle is 1 unit wide and 1/n units high, so the total area of the infinite number of rectangles is the sum of the harmonic series:

1 + 1/2 + 1/3 + 1/4 + ⋯

Additionally, the total area under the curve y = 1/x from 1 to infinity is given by a divergent improper integral:

∫₁^∞ (1/x) dx = ∞

Since this area is entirely contained within the rectangles, the total area of the rectangles must be infinite as well. More precisely, the first k rectangles completely cover the region underneath the curve for 1 ≤ x ≤ k + 1, and so

1 + 1/2 + 1/3 + ⋯ + 1/k ≥ ∫₁^(k+1) (1/x) dx = ln(k + 1).
The generalization of this argument is known as the integral test.
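The rectangle-versus-integral bound Hn ≥ ln(n + 1) is easy to confirm numerically; a minimal sketch (function name is mine):

```python
import math

def harmonic(n):
    """Floating-point partial sum H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

# The first n unit-width rectangles of heights 1, 1/2, ..., 1/n cover the
# region under y = 1/x for 1 <= x <= n + 1, so H_n >= ln(n + 1).
for n in (10, 100, 1000):
    assert harmonic(n) >= math.log(n + 1)
```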
The harmonic series diverges very slowly. For example, the sum of the first 10^43 terms is less than 100. This is because the partial sums of the series have logarithmic growth. In particular,

1 + 1/2 + 1/3 + ⋯ + 1/k = ln k + γ + εk,

where γ is the Euler–Mascheroni constant and εk ~ 1/(2k), which approaches 0 as k goes to infinity. Leonhard Euler proved both this and also the more striking fact that the sum that includes only the reciprocals of primes also diverges, i.e.

1/2 + 1/3 + 1/5 + 1/7 + 1/11 + 1/13 + ⋯ = ∞.
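The logarithmic growth and the size of the error term εk can be checked directly; this sketch assumes only the asymptotic εk ~ 1/(2k) stated above:

```python
import math

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def harmonic(k):
    """Floating-point partial sum H_k."""
    return sum(1.0 / i for i in range(1, k + 1))

# H_k = ln k + gamma + eps_k with eps_k ~ 1/(2k):
# the ratio eps_k / (1/(2k)) should approach 1 as k grows.
for k in (100, 1000, 10000):
    eps = harmonic(k) - math.log(k) - GAMMA
    assert abs(eps * 2 * k - 1) < 0.01
```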
The finite partial sums of the diverging harmonic series,

Hn = 1 + 1/2 + 1/3 + ⋯ + 1/n,

are called harmonic numbers.
The difference between Hn and ln n converges to the Euler–Mascheroni constant. The difference between any two harmonic numbers is never an integer, and no harmonic number is an integer except H1 = 1.
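These integrality claims can be spot-checked for small n with exact rational arithmetic (this verifies only small cases; it is not a proof):

```python
from fractions import Fraction

def H(n):
    """Exact n-th harmonic number as a fraction."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

# No harmonic number beyond H_1 = 1 is an integer, and the difference of
# any two distinct harmonic numbers is never an integer.
assert H(1) == 1
for n in range(2, 30):
    assert H(n).denominator != 1
for m in range(1, 10):
    for n in range(m + 1, 10):
        assert (H(n) - H(m)).denominator != 1
```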
The series

1 − 1/2 + 1/3 − 1/4 + 1/5 − ⋯

is known as the alternating harmonic series. This series converges by the alternating series test. In particular, the sum is equal to the natural logarithm of 2:

1 − 1/2 + 1/3 − 1/4 + 1/5 − ⋯ = ln 2.
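The convergence to ln 2 is easy to observe numerically; by the alternating series test, the error of a partial sum is smaller than the first omitted term (function name is mine):

```python
import math

def alternating_harmonic(n):
    """Partial sum 1 - 1/2 + 1/3 - ... of the first n terms."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

# The partial sums approach ln 2; the alternating series test bounds the
# error by the first omitted term, 1/(n + 1).
n = 100000
assert abs(alternating_harmonic(n) - math.log(2)) < 1 / (n + 1)
```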
The alternating harmonic series, while conditionally convergent, is not absolutely convergent: if the terms in the series are systematically rearranged, in general the sum becomes different and, dependent on the rearrangement, possibly even infinite.
The alternating harmonic series formula is a special case of the Mercator series, the Taylor series for the natural logarithm.
A related series can be derived from the Taylor series for the arctangent:

1 − 1/3 + 1/5 − 1/7 + ⋯ = π/4.

This is known as the Leibniz series.
The general harmonic series is of the form

1/b + 1/(a + b) + 1/(2a + b) + 1/(3a + b) + ⋯,

where a ≠ 0 and b are real numbers, and b/a is not zero or a negative integer.
By the limit comparison test with the harmonic series, all general harmonic series also diverge.
A generalization of the harmonic series is the p-series (or hyperharmonic series), defined as

1 + 1/2^p + 1/3^p + 1/4^p + ⋯
for any real number p. When p = 1, the p-series is the harmonic series, which diverges. Either the integral test or the Cauchy condensation test shows that the p-series converges for all p > 1 (in which case it is called the over-harmonic series) and diverges for all p ≤ 1. If p > 1 then the sum of the p-series is ζ(p), i.e., the Riemann zeta function evaluated at p.
The problem of finding the sum for p = 2 is called the Basel problem; Leonhard Euler showed it is π²/6. The value of the sum for p = 3 is called Apéry's constant, since Roger Apéry proved that it is an irrational number.
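Euler's Basel-problem value can be checked numerically. Since the tail of the p = 2 series after n terms is smaller than 1/n, a million terms pin down about six digits (function name is mine):

```python
import math

def p_series(p, n):
    """Partial sum of the p-series: 1/1^p + 1/2^p + ... + 1/n^p."""
    return sum(1.0 / k ** p for k in range(1, n + 1))

# Basel problem: the sum of 1/n^2 converges to pi^2 / 6.
# The tail after n terms is less than 1/n, so n = 10^6 gives ~6 digits.
approx = p_series(2, 10 ** 6)
assert abs(approx - math.pi ** 2 / 6) < 1e-5
```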
Related to the p-series is the ln-series, defined as

1/(2 (ln 2)^p) + 1/(3 (ln 3)^p) + 1/(4 (ln 4)^p) + ⋯

for any positive real number p. This can be shown by the integral test to diverge for p ≤ 1 but converge for all p > 1.
For any convex, real-valued function φ such that

lim sup (u → 0+) φ(u/2)/φ(u) < 1/2,

the series

φ(1) + φ(1/2) + φ(1/3) + φ(1/4) + ⋯

is convergent.
The random harmonic series

s1/1 + s2/2 + s3/3 + ⋯,

where the sn are independent, identically distributed random variables taking the values +1 and −1 with equal probability 1/2, is a well-known example in probability theory of a series of random variables that converges with probability 1. The fact of this convergence is an easy consequence of either the Kolmogorov three-series theorem or of the closely related Kolmogorov maximal inequality. Byron Schmuland of the University of Alberta further examined the properties of the random harmonic series, and showed that the convergent series is a random variable with some interesting properties. In particular, the probability density function of this random variable evaluated at +2 or at −2 takes on the value 0.124999999999999999999999999999999999999999764..., differing from 1/8 by less than 10^−42. Schmuland's paper explains why this probability is so close to, but not exactly, 1/8. The exact value of this probability is given by the infinite cosine product integral C2 divided by π.
The depleted harmonic series (known as the Kempner series), where all of the terms in which the digit 9 appears anywhere in the denominator are removed, can be shown to converge to the value 22.92067661926415034816.... In fact, when all the terms containing any particular string of digits (in any base) are removed, the series converges.
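The filtering itself is simple to express in code; the sketch below (function name is mine) only demonstrates the construction, since the partial sums approach the limit 22.92067... extremely slowly and a direct sum gets nowhere near it:

```python
def kempner_partial(limit, digit="9"):
    """Partial sum of the depleted harmonic series: terms whose
    denominator contains `digit` are dropped."""
    return sum(1.0 / n for n in range(1, limit + 1) if digit not in str(n))

# Among 1..999, exactly 9^3 - 1 = 728 integers avoid the digit 9
# (three digit positions, each from {0,...,8}, excluding 000).
kept = [n for n in range(1, 1000) if "9" not in str(n)]
assert len(kept) == 728
# Partial sums increase toward the known limit 22.92067..., so they
# always stay below it; convergence is far too slow to see it here.
assert kempner_partial(10 ** 5) < 22.92067661926415
```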
The harmonic series can be counterintuitive to students first encountering it, because it is a divergent series even though the limit of the nth term as n goes to infinity is zero. The divergence of the harmonic series is also the source of some apparent paradoxes. One example is the "worm on the rubber band". Suppose that a worm crawls along an infinitely elastic one-meter rubber band at the same time as the rubber band is uniformly stretched. If the worm travels 1 centimeter per minute and the band stretches 1 meter per minute, will the worm ever reach the end of the rubber band? The answer, counterintuitively, is "yes", for after n minutes, the ratio of the distance travelled by the worm to the total length of the rubber band is

(1/100) (1 + 1/2 + 1/3 + ⋯ + 1/n).

(In fact the actual ratio is a little less than this sum, as the band expands continuously.)
Because the series gets arbitrarily large as n becomes larger, eventually this ratio must exceed 1, which implies that the worm reaches the end of the rubber band. However, the value of n at which this occurs must be extremely large: approximately e^100, a number exceeding 10^43 minutes (10^37 years). Although the harmonic series does diverge, it does so very slowly.
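Since waiting e^100 minutes is infeasible, a scaled-down version of the puzzle can be simulated instead. The sketch below assumes a faster hypothetical worm (10 cm/min rather than 1 cm/min) on the same 1 m band stretching 1 m/min, in the discrete per-minute model used above; it then reaches the end at the smallest n with Hn ≥ 10:

```python
def minutes_to_reach_end(worm_speed_cm=10.0, stretch_cm=100.0):
    """Discrete model: during minute k the band is k * stretch_cm long, so
    the worm gains a fraction worm_speed_cm / (k * stretch_cm) of the band."""
    fraction, k = 0.0, 0
    while fraction < 1.0:
        k += 1
        fraction += worm_speed_cm / (k * stretch_cm)
    return k

# With a 10 cm/min worm, the answer is the smallest n with H_n >= 10,
# which is n = 12367 (roughly e^(10 - gamma) minutes).
assert minutes_to_reach_end() == 12367
```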
Another problem involving the harmonic series is the Jeep problem, which (in one form) asks how much total fuel is required for a jeep with a limited fuel-carrying capacity to cross a desert, possibly leaving fuel drops along the route. The distance that can be traversed with a given amount of fuel is related to the partial sums of the harmonic series, which grow logarithmically. And so the fuel required increases exponentially with the desired distance.
Another example is the block-stacking problem: given a collection of identical dominoes, it is clearly possible to stack them at the edge of a table so that they hang over the edge of the table without falling. The counterintuitive result is that one can stack them in such a way as to make the overhang arbitrarily large, provided there are enough dominoes.
A simpler example, on the other hand, is the swimmer that keeps adding more speed when touching the walls of the pool. The swimmer starts crossing a 10-meter pool at a speed of 2 m/s, and with every cross, another 2 m/s is added to the speed. In theory, the swimmer's speed is unlimited, but the number of pool crosses needed to get to that speed becomes very large; for instance, to get to the speed of light (ignoring special relativity), the swimmer needs to cross the pool 150 million times. Contrary to this large number, the time required to reach a given speed depends on the sum of the series at any given number of pool crosses (iterations): cross k is swum at speed 2k m/s and so takes 10/(2k) seconds, giving a total time of

(10/2) (1 + 1/2 + 1/3 + ⋯ + 1/n) seconds after n crosses.
Calculating the sum shows that to get to the speed of light the time required is only 97 seconds. Continuing beyond this point (exceeding the speed of light, again ignoring special relativity), the time taken for each individual cross approaches zero as the number of iterations grows, but the total time over all crosses still diverges, at a very slow rate.
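The 97-second figure can be reproduced with the logarithmic approximation of the partial sums from above; summing 150 million terms directly is unnecessary (constant names are my choices):

```python
import math

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant
POOL_M = 10.0               # pool length in meters
SPEED_STEP = 2.0            # m/s added per cross
LIGHT = 299_792_458.0       # speed of light in m/s

# Cross k is swum at 2k m/s and takes 10/(2k) = 5/k seconds, so n crosses
# take 5 * H_n seconds; approximate H_n by ln n + gamma.
crosses = math.ceil(LIGHT / SPEED_STEP)
total_seconds = (POOL_M / SPEED_STEP) * (math.log(crosses) + GAMMA)
assert crosses == 149_896_229          # ~150 million crosses
assert round(total_seconds) == 97      # ~97 seconds in total
```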