Series multisection

In mathematics, a multisection of a power series is a new power series composed of equally spaced terms extracted unaltered from the original series. Formally, if one is given a power series

$$\sum_{n=0}^{\infty} a_n \cdot z^n,$$

then its multisection is a power series of the form

$$\sum_{m=0}^{\infty} a_{qm+p} \cdot z^{qm+p},$$

where p, q are integers, with 0 ≤ p < q. Series multisection represents one of the common transformations of generating functions.

Multisection of analytic functions

A multisection of the series of an analytic function

$$f(z) = \sum_{n=0}^{\infty} a_n \cdot z^n$$

has a closed-form expression in terms of the function f(z):

$$\sum_{m=0}^{\infty} a_{qm+p} \cdot z^{qm+p} = \frac{1}{q} \cdot \sum_{k=0}^{q-1} \omega^{-kp} \cdot f(\omega^k \cdot z),$$

where $\omega = e^{2\pi i / q}$ is a primitive q-th root of unity. This solution was first discovered by Thomas Simpson.[1] This expression is especially useful in that it can convert an infinite sum into a finite sum. It is used, for example, in a key step of a standard proof of Gauss's digamma theorem, which gives a closed-form solution to the digamma function evaluated at rational values p/q.
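
As an illustration, the following sketch checks the closed form numerically against a direct term-by-term sum, taking f = exp so that a_n = 1/n!. It is a minimal check only; the helper names and sample parameters are illustrative, not part of the source.

```python
# Minimal numerical check of the multisection closed form, assuming f is
# available as a callable together with its Taylor coefficients a_n.
import cmath
import math

def multisection_direct(coeffs, z, p, q, terms=25):
    """Sum a_{qm+p} * z^(qm+p) term by term."""
    return sum(coeffs(q * m + p) * z ** (q * m + p) for m in range(terms))

def multisection_closed_form(f, z, p, q):
    """(1/q) * sum_{k=0}^{q-1} w^(-kp) * f(w^k * z), with w a primitive q-th root of unity."""
    w = cmath.exp(2j * cmath.pi / q)
    return sum(w ** (-k * p) * f(w ** k * z) for k in range(q)) / q

coeffs = lambda n: 1 / math.factorial(n)  # Taylor coefficients of exp
z, p, q = 0.7, 2, 5
print(multisection_direct(coeffs, z, p, q))          # truncated direct sum
print(multisection_closed_form(cmath.exp, z, p, q))  # finite sum; agrees up to a negligible imaginary part
```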

Examples

Bisection

In general, the bisections of a series are the even and odd parts of the series.

Geometric series

Consider the geometric series

$$\sum_{n=0}^{\infty} z^{n} = \frac{1}{1-z}, \quad \text{valid for } |z| < 1.$$

By setting $z \rightarrow z^q$ in the above series and multiplying by $z^p$, its multisections are easily seen to be

$$\sum_{m=0}^{\infty} z^{qm+p} = \frac{z^p}{1-z^q}, \quad \text{valid for } |z| < 1.$$

Remembering that the sum of the multisections must equal the original series, we recover the familiar identity

$$\sum_{p=0}^{q-1} z^p = \frac{1-z^q}{1-z}.$$
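
A short numerical check of these closed forms (purely illustrative; the values of z and q are arbitrary):

```python
# Compare the geometric-series multisections z^p / (1 - z^q) with
# term-by-term partial sums, and confirm they add back to 1 / (1 - z).
z, q = 0.6, 3
direct = [sum(z ** (q * m + p) for m in range(200)) for p in range(q)]
closed = [z ** p / (1 - z ** q) for p in range(q)]
print(direct)                    # multisections summed term by term
print(closed)                    # matching closed forms
print(sum(closed), 1 / (1 - z))  # their total recovers the full geometric series
```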

Exponential function

The exponential function

$$e^z = \sum_{n=0}^{\infty} \frac{z^n}{n!}$$

by means of the above formula for analytic functions separates into

$$\sum_{m=0}^{\infty} \frac{z^{qm+p}}{(qm+p)!} = \frac{1}{q} \cdot \sum_{k=0}^{q-1} \omega^{-kp} \cdot e^{\omega^k z}.$$

The bisections are trivially the hyperbolic functions:

$$\sum_{m=0}^{\infty} \frac{z^{2m}}{(2m)!} = \frac{1}{2}\left(e^z + e^{-z}\right) = \cosh z$$

$$\sum_{m=0}^{\infty} \frac{z^{2m+1}}{(2m+1)!} = \frac{1}{2}\left(e^z - e^{-z}\right) = \sinh z.$$

Higher order multisections are found by noting that all such series must be real-valued along the real line. By taking the real part and using standard trigonometric identities, the formulas may be written in explicitly real form as

$$\sum_{m=0}^{\infty} \frac{x^{qm+p}}{(qm+p)!} = \frac{1}{q} \cdot \sum_{k=0}^{q-1} e^{x \cos(2\pi k/q)} \cdot \cos\left( x \sin\left(\frac{2\pi k}{q}\right) - \frac{2\pi k p}{q} \right).$$

These can be seen as solutions to the linear differential equation $y^{(q)}(x) = y(x)$ with boundary conditions $y^{(k)}(0) = \delta_{k,p}$, using Kronecker delta notation. In particular, the trisections are

$$\sum_{m=0}^{\infty} \frac{x^{3m+p}}{(3m+p)!} = \frac{1}{3}\left[ e^x + 2 e^{-x/2} \cos\left( \frac{\sqrt{3}}{2} x - \frac{2\pi p}{3} \right) \right]$$

and the quadrisections are

$$\sum_{m=0}^{\infty} \frac{x^{4m+p}}{(4m+p)!} = \frac{1}{2}\left[ \frac{e^x + (-1)^p e^{-x}}{2} + \cos\left( x - \frac{\pi p}{2} \right) \right].$$
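
The sketch below numerically compares the direct series, the general real form, and the trisection special case; the function names and sample values are illustrative only.

```python
# Check the real-form multisection of exp(x) and its trisection (q = 3).
from math import cos, exp, factorial, pi, sin, sqrt

def direct_series(x, p, q, terms=30):
    """Sum x^(qm+p) / (qm+p)! directly."""
    return sum(x ** (q * m + p) / factorial(q * m + p) for m in range(terms))

def real_form(x, p, q):
    """(1/q) * sum_k exp(x cos(2*pi*k/q)) * cos(x sin(2*pi*k/q) - 2*pi*k*p/q)."""
    return sum(exp(x * cos(2 * pi * k / q)) *
               cos(x * sin(2 * pi * k / q) - 2 * pi * k * p / q)
               for k in range(q)) / q

def trisection(x, p):
    """(1/3) * [exp(x) + 2 exp(-x/2) cos(sqrt(3)/2 * x - 2*pi*p/3)]."""
    return (exp(x) + 2 * exp(-x / 2) * cos(sqrt(3) / 2 * x - 2 * pi * p / 3)) / 3

x, p = 1.3, 2
print(direct_series(x, p, 3), real_form(x, p, 3), trisection(x, p))  # all three agree
```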

Binomial series

Multisection of a binomial expansion

$$(1+x)^n = {n \choose 0} x^0 + {n \choose 1} x^1 + {n \choose 2} x^2 + \cdots$$

at x = 1 gives the following identity for the sum of binomial coefficients with step q:

$${n \choose p} + {n \choose p+q} + {n \choose p+2q} + \cdots = \frac{1}{q} \cdot \sum_{k=0}^{q-1} \left( 2 \cos\frac{\pi k}{q} \right)^n \cdot \cos\frac{\pi (n-2p) k}{q}.$$
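
As a quick numerical confirmation of this identity (the values of n and q are illustrative, not from the source):

```python
# Compare the stepped sum of binomial coefficients with the cosine closed form.
from math import comb, cos, pi

def stepped_sum(n, p, q):
    """C(n, p) + C(n, p + q) + C(n, p + 2q) + ... summed directly."""
    return sum(comb(n, k) for k in range(p, n + 1, q))

def closed_form(n, p, q):
    """(1/q) * sum_k (2 cos(pi*k/q))^n * cos(pi*(n - 2p)*k/q)."""
    return sum((2 * cos(pi * k / q)) ** n * cos(pi * (n - 2 * p) * k / q)
               for k in range(q)) / q

n, q = 20, 5
for p in range(q):
    print(p, stepped_sum(n, p, q), round(closed_form(n, p, q)))  # both columns agree
```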

Related Research Articles

<span class="mw-page-title-main">Bessel function</span> Families of solutions to related differential equations

Bessel functions, first defined by the mathematician Daniel Bernoulli and then generalized by Friedrich Bessel, are canonical solutions y(x) of Bessel's differential equation

In complex analysis, an entire function, also called an integral function, is a complex-valued function that is holomorphic on the whole complex plane. Typical examples of entire functions are polynomials and the exponential function, and any finite sums, products and compositions of these, such as the trigonometric functions sine and cosine and their hyperbolic counterparts sinh and cosh, as well as derivatives and integrals of entire functions such as the error function. If an entire function f(z) has a root at w, then f(z) / , taking the limit value at w, is an entire function. On the other hand, the natural logarithm, the reciprocal function, and the square root are all not entire functions, nor can they be continued analytically to an entire function.

<span class="mw-page-title-main">Hyperbolic functions</span> Collective name of 6 mathematical functions

In mathematics, hyperbolic functions are analogues of the ordinary trigonometric functions, but defined using the hyperbola rather than the circle. Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the unit hyperbola. Also, similarly to how the derivatives of sin(t) and cos(t) are cos(t) and –sin(t) respectively, the derivatives of sinh(t) and cosh(t) are cosh(t) and +sinh(t) respectively.

In mathematics, de Moivre's formula states that for any real number x and integer n it holds that

<span class="mw-page-title-main">Fourier series</span> Decomposition of periodic functions into sums of simpler sinusoidal forms

A Fourier series is a summation of harmonically related sinusoidal functions, also known as components or harmonics. The result of the summation is a periodic function whose functional form is determined by the choices of cycle length, the number of components, and their amplitudes and phase parameters. With appropriate choices, one cycle of the summation can be made to approximate an arbitrary function in that interval. The number of components is theoretically infinite, in which case the other parameters can be chosen to cause the series to converge to almost any well behaved periodic function. The components of a particular function are determined by analysis techniques described in this article. Sometimes the components are known first, and the unknown function is synthesized by a Fourier series. Such is the case of a discrete-time Fourier transform.

Integration is the basic operation in integral calculus. While differentiation has straightforward rules by which the derivative of a complicated function can be found by differentiating its simpler component functions, integration does not, so tables of known integrals are often useful. This page lists some of the most common antiderivatives.

<span class="mw-page-title-main">Digamma function</span> Mathematical function

In mathematics, the digamma function is defined as the logarithmic derivative of the gamma function:

In mathematics, the Jacobi elliptic functions are a set of basic elliptic functions. They are found in the description of the motion of a pendulum, as well as in the design of electronic elliptic filters. While trigonometric functions are defined with reference to a circle, the Jacobi elliptic functions are a generalization which refer to other conic sections, the ellipse in particular. The relation to trigonometric functions is contained in the notation, for example, by the matching notation for . The Jacobi elliptic functions are used more often in practical problems than the Weierstrass elliptic functions as they do not require notions of complex analysis to be defined and/or understood. They were introduced by Carl Gustav Jakob Jacobi (1829). Carl Friedrich Gauss had already studied special Jacobi elliptic functions in 1797, the lemniscate elliptic functions in particular, but his work was published much later.

In probability and statistics, a circular distribution or polar distribution is a probability distribution of a random variable whose values are angles, usually taken to be in the range [0, 2π). A circular distribution is often a continuous probability distribution, and hence has a probability density, but such distributions can also be discrete, in which case they are called circular lattice distributions. Circular distributions can be used even when the variables concerned are not explicitly angles: the main consideration is that there is not usually any real distinction between events occurring at the lower or upper end of the range, and the division of the range could notionally be made at any point.

In mathematics, Mathieu functions, sometimes called angular Mathieu functions, are solutions of Mathieu's differential equation

In mathematics, particularly q-analog theory, the Ramanujan theta function generalizes the form of the Jacobi theta functions, while capturing their general properties. In particular, the Jacobi triple product takes on a particularly elegant form when written in terms of the Ramanujan theta. The function is named after mathematician Srinivasa Ramanujan.

<span class="mw-page-title-main">Lemniscate elliptic functions</span> Mathematical functions

In mathematics, the lemniscate elliptic functions are elliptic functions related to the arc length of the lemniscate of Bernoulli. They were first studied by Giulio Fagnano in 1718 and later by Leonhard Euler and Carl Friedrich Gauss, among others.

<span class="mw-page-title-main">Toroidal coordinates</span>

Toroidal coordinates are a three-dimensional orthogonal coordinate system that results from rotating the two-dimensional bipolar coordinate system about the axis that separates its two foci. Thus, the two foci and in bipolar coordinates become a ring of radius in the plane of the toroidal coordinate system; the -axis is the axis of rotation. The focal ring is also known as the reference circle.

<span class="mw-page-title-main">Oblate spheroidal coordinates</span> Three-dimensional orthogonal coordinate system

Oblate spheroidal coordinates are a three-dimensional orthogonal coordinate system that results from rotating the two-dimensional elliptic coordinate system about the non-focal axis of the ellipse, i.e., the symmetry axis that separates the foci. Thus, the two foci are transformed into a ring of radius in the x-y plane. Oblate spheroidal coordinates can also be considered as a limiting case of ellipsoidal coordinates in which the two largest semi-axes are equal in length.

<span class="mw-page-title-main">Sine and cosine</span> Trigonometric functions of an angle

In mathematics, sine and cosine are trigonometric functions of an angle. The sine and cosine of an acute angle are defined in the context of a right triangle: for the specified angle, its sine is the ratio of the length of the side that is opposite that angle to the length of the longest side of the triangle, and the cosine is the ratio of the length of the adjacent leg to that of the hypotenuse. For an angle , the sine and cosine functions are denoted simply as and .

In complex analysis, a partial fraction expansion is a way of writing a meromorphic function as an infinite sum of rational functions and polynomials. When is a rational function, this reduces to the usual method of partial fractions.

References

  1. Simpson, Thomas (1757). "CIII. The invention of a general method for determining the sum of every 2d, 3d, 4th, or 5th, &c. term of a series, taken in order; the sum of the whole series being known". Philosophical Transactions of the Royal Society of London. 51: 757–759. doi:10.1098/rstl.1757.0104.