Sophomore's dream

In mathematics, the sophomore's dream is the pair of identities (especially the first)

\[ \int_0^1 x^{-x} \, dx = \sum_{n=1}^{\infty} n^{-n} \]

\[ \int_0^1 x^{x} \, dx = \sum_{n=1}^{\infty} (-1)^{n+1} n^{-n} = -\sum_{n=1}^{\infty} (-n)^{-n} \]

discovered in 1697 by Johann Bernoulli.

The numerical values of these constants are approximately 1.291285997... and 0.7834305107..., respectively.
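Both series converge extremely fast, since the terms decay like n^(−n); a few partial-sum terms already reproduce these decimals. A minimal Python sketch (the function names are illustrative):

```python
# Partial sums of the two sophomore's dream series. Terms decay like
# n**-n, so ~20 terms already give full double precision.
def first_constant(terms: int = 20) -> float:
    """Sum_{n>=1} n^(-n): the value of the integral of x^(-x) over (0, 1]."""
    return sum(n ** -n for n in range(1, terms + 1))

def second_constant(terms: int = 20) -> float:
    """Sum_{n>=1} (-1)^(n+1) n^(-n): the value of the integral of x^x over (0, 1]."""
    return sum((-1) ** (n + 1) * n ** -n for n in range(1, terms + 1))

print(first_constant())   # ≈ 1.291285997...
print(second_constant())  # ≈ 0.783430510...
```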

The name "sophomore's dream" [1] is in contrast to the name "freshman's dream", which is given to the incorrect [note 1] identity (x + y)^n = x^n + y^n. The sophomore's dream has a similar too-good-to-be-true feel, but is true.

Proof

Graph of the functions y = x^x (red, lower) and y = x^(−x) (grey, upper) on the interval x ∈ (0, 1].

The proofs of the two identities are completely analogous, so only the proof of the second is presented here. The key ingredients of the proof are:

  • to write x^x = exp(x log x) and expand it using the power series of the exponential function;
  • to integrate the resulting series termwise; and
  • to evaluate each of the resulting integrals by substitution, recognizing Euler's integral for the Gamma function.

In detail, x^x can be expanded as

\[ x^x = \exp(x \log x) = \sum_{n=0}^{\infty} \frac{(x \log x)^n}{n!}. \]

Therefore,

\[ \int_0^1 x^x \, dx = \int_0^1 \sum_{n=0}^{\infty} \frac{(x \log x)^n}{n!} \, dx. \]

By uniform convergence of the power series, one may interchange summation and integration to yield

\[ \int_0^1 x^x \, dx = \sum_{n=0}^{\infty} \int_0^1 \frac{(x \log x)^n}{n!} \, dx. \]

To evaluate the above integrals, one may change the variable in the integral via the substitution x = exp(−u/(n + 1)). With this substitution, the bounds of integration are transformed to 0 ≤ u < ∞, giving the identity

\[ \int_0^1 x^n (\log x)^n \, dx = \frac{(-1)^n}{(n+1)^{n+1}} \int_0^\infty u^n e^{-u} \, du. \]

By Euler's integral identity for the Gamma function, one has

\[ \int_0^\infty u^n e^{-u} \, du = \Gamma(n+1) = n!, \]

so that

\[ \int_0^1 \frac{(x \log x)^n}{n!} \, dx = \frac{(-1)^n}{(n+1)^{n+1}}. \]

Summing these (and changing indexing so it starts at n = 1 instead of n = 0) yields the formula

\[ \int_0^1 x^x \, dx = \sum_{n=0}^{\infty} \frac{(-1)^n}{(n+1)^{n+1}} = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^n}. \]
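The resulting identity can be sanity-checked numerically. The sketch below compares a composite midpoint rule for the integral (the midpoint rule never evaluates the integrand at x = 0, where x^x is defined only as a limit) against a partial sum of the alternating series:

```python
# Compare a midpoint-rule quadrature of the integral of x**x over (0, 1]
# with a partial sum of sum_{n>=1} (-1)**(n+1) * n**-n.
def midpoint(f, a: float, b: float, steps: int = 200_000) -> float:
    """Composite midpoint rule on [a, b]; avoids evaluating the endpoints."""
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

integral = midpoint(lambda x: x ** x, 0.0, 1.0)
series = sum((-1) ** (n + 1) * n ** -n for n in range(1, 25))
assert abs(integral - series) < 1e-6
```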

Historical proof

The original proof, given in Bernoulli [2] and presented in modernized form in Dunham, [3] differs from the one above in how the termwise integral is computed, but is otherwise the same; it omits the technical details needed to justify the steps (such as termwise integration). Rather than integrating by substitution, yielding the Gamma function (which was not yet known), Bernoulli used integration by parts to iteratively compute these terms.

The integration by parts proceeds as follows, varying the two exponents independently to obtain a recursion. An indefinite integral is computed initially, omitting the constant of integration both because this was done historically, and because it drops out when computing the definite integral.

Integrating ∫ x^m (log x)^n dx by parts, taking u = (log x)^n and dv = x^m dx, yields:

\[ \int x^m (\log x)^n \, dx = \frac{x^{m+1}}{m+1} (\log x)^n - \frac{n}{m+1} \int x^m (\log x)^{n-1} \, dx \qquad (m \neq -1) \]

(also in the list of integrals of logarithmic functions). This reduces the power of the logarithm in the integrand by 1 (from n to n − 1), and thus one can compute the integral inductively, as

\[ \int x^m (\log x)^n \, dx = x^{m+1} \sum_{i=0}^{n} (-1)^i \frac{(n)_i}{(m+1)^{i+1}} (\log x)^{n-i} \]

where (n)_i = n(n − 1)⋯(n − i + 1) denotes the falling factorial; the sum is finite because the induction stops at 0, since n is a non-negative integer.
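This closed-form antiderivative, x^(m+1) Σ_{i=0}^{n} (−1)^i (n)_i / (m+1)^(i+1) (log x)^(n−i), can be spot-checked by differentiating it numerically and comparing against the integrand x^m (log x)^n; a small Python sketch (the sample point and step size are arbitrary choices):

```python
from math import log

def falling(n: int, i: int) -> int:
    """Falling factorial (n)_i = n * (n-1) * ... * (n-i+1)."""
    out = 1
    for k in range(i):
        out *= n - k
    return out

def antideriv(x: float, m: int, n: int) -> float:
    """Closed-form antiderivative of x^m (log x)^n (integration constant omitted)."""
    return x ** (m + 1) * sum(
        (-1) ** i * falling(n, i) / (m + 1) ** (i + 1) * log(x) ** (n - i)
        for i in range(n + 1)
    )

# Central difference: d/dx antideriv should reproduce the integrand x^m (log x)^n.
m, n, x, h = 3, 2, 0.5, 1e-6
deriv = (antideriv(x + h, m, n) - antideriv(x - h, m, n)) / (2 * h)
assert abs(deriv - x ** m * log(x) ** n) < 1e-6
```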

In this case m = n, and both are integers, so

\[ \int x^n (\log x)^n \, dx = x^{n+1} \sum_{i=0}^{n} (-1)^i \frac{(n)_i}{(n+1)^{i+1}} (\log x)^{n-i}. \]

Integrating from 0 to 1, all the terms vanish except the last term at 1, [note 2] which yields:

\[ \int_0^1 x^n (\log x)^n \, dx = (-1)^n \frac{(n)_n}{(n+1)^{n+1}} = \frac{(-1)^n \, n!}{(n+1)^{n+1}}. \]
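The termwise value just obtained, ∫₀¹ x^n (log x)^n dx = (−1)^n n!/(n+1)^(n+1), can be confirmed by direct quadrature for small n; a minimal sketch:

```python
from math import factorial, log

def midpoint(f, a: float, b: float, steps: int = 200_000) -> float:
    """Composite midpoint rule on [a, b]; never evaluates the endpoint x = 0."""
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

# Bernoulli's termwise result: integral of x^n (log x)^n over (0, 1]
# equals (-1)^n * n! / (n+1)^(n+1).
for n in range(1, 5):
    closed_form = (-1) ** n * factorial(n) / (n + 1) ** (n + 1)
    numeric = midpoint(lambda x, n=n: x ** n * log(x) ** n, 0.0, 1.0)
    assert abs(numeric - closed_form) < 1e-6
```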

This is equivalent to computing Euler's integral identity for the Gamma function on a different domain (corresponding to changing variables by substitution), as Euler's identity itself can also be computed via an analogous integration by parts.

Notes

  1. Incorrect in general, but correct when one is working in a commutative ring of prime characteristic p with n being a power of p. The correct result in a general commutative context is given by the binomial theorem.
  2. All the terms vanish at 0 because lim_{x→0⁺} x^(n+1) (log x)^(n−i) = 0 by l'Hôpital's rule (Bernoulli omitted this technicality), and all but the last term vanish at 1 since log 1 = 0.


References

  • Bernoulli, Johann (1697). Opera omnia. Vol. 3. pp. 376–381.
  • Borwein, Jonathan; Bailey, David H.; Girgensohn, Roland (2004). Experimentation in Mathematics: Computational Paths to Discovery. pp. 4, 44. ISBN 9781568811369.
  • Dunham, William (2005). "Chapter 3: The Bernoullis (Johann and Jakob)". The Calculus Gallery: Masterpieces from Newton to Lebesgue. Princeton University Press. pp. 46–51. ISBN 9780691095653.
  • OEIS: (sequence A083648 in the OEIS) and (sequence A073009 in the OEIS).
  • Pólya, George; Szegő, Gábor (1998). "Part I, problem 160". Problems and Theorems in Analysis. p. 36. ISBN 9783540636403.
  • Weisstein, Eric W. "Sophomore's Dream". MathWorld.
  • Grossmann, Max R. P. (2017). Sophomore's dream. 1,000,000 digits of the first constant.
