Poisson summation formula

In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.

Forms of the equation

Consider an aperiodic function $s(x)$ with Fourier transform $S(f) \triangleq \int_{-\infty}^{\infty} s(x)\, e^{-i 2\pi f x}\, dx$, alternatively designated by $\hat{s}(f)$ and $\mathcal{F}\{s\}(f)$.

The basic Poisson summation formula is:

$$\sum_{n=-\infty}^{\infty} s(n) = \sum_{k=-\infty}^{\infty} S(k). \qquad \textbf{(Eq.1)}$$

Also consider periodic functions, where parameters $P > 0$ and $T > 0$ are in the same units as $x$:

$$s_P(x) \triangleq \sum_{n=-\infty}^{\infty} s(x + nP) \quad \text{and} \quad S_{1/T}(f) \triangleq \sum_{k=-\infty}^{\infty} S\!\left(f - \frac{k}{T}\right).$$

Then Eq.1 is a special case ($P = 1$, $x = 0$) of this generalization: [1] [2]

$$s_P(x) = \sum_{k=-\infty}^{\infty} \frac{1}{P}\, S\!\left(\frac{k}{P}\right) e^{i 2\pi \frac{k}{P} x}, \qquad \textbf{(Eq.2)}$$

which is a Fourier series expansion with coefficients that are samples of the function $S(f)$. Similarly:

$$S_{1/T}(f) = \sum_{n=-\infty}^{\infty} T\, s(nT)\, e^{-i 2\pi n T f}, \qquad \textbf{(Eq.3)}$$

also known as the discrete-time Fourier transform (DTFT) of the sample sequence $s(nT)$.
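These identities are easy to check numerically. The following sketch (not part of the original article) verifies Eq.1 and Eq.2 for the Gaussian $s(x) = e^{-\pi x^2}$, whose Fourier transform $S(f) = e^{-\pi f^2}$ is known in closed form:

```python
import numpy as np

# Numerical check of Eq.1 and Eq.2 (a sketch; s and S are the Gaussian pair
# s(x) = exp(-pi x^2), S(f) = exp(-pi f^2), under the convention
# S(f) = integral of s(x) exp(-i 2 pi f x) dx).
def s(x):
    return np.exp(-np.pi * x**2)

def S(f):
    return np.exp(-np.pi * f**2)

n = np.arange(-50, 51)                  # truncation; both tails are negligible here

# Eq.1: the sum of samples of s equals the sum of samples of S.
print(s(n).sum(), S(n).sum())

# Eq.2 with P = 1: the periodization s_P(x) equals its Fourier series,
# whose coefficients are the samples S(k).
x = 0.3
periodization = s(x + n).sum()
fourier_series = (S(n) * np.exp(1j * 2 * np.pi * n * x)).sum().real
print(periodization, fourier_series)    # agree to machine precision
```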

Derivations

A proof may be found in either Pinsky [1] or Zygmund. [2] Eq.2, for instance, holds in the sense that if $s \in L^1(\mathbb{R})$, then the right-hand side is the (possibly divergent) Fourier series of the left-hand side. It follows from the dominated convergence theorem that $s_P(x)$ exists and is finite for almost every $x$. Furthermore it follows that $s_P$ is integrable on any interval of length $P$. So it is sufficient to show that the Fourier series coefficients of $s_P(x)$ are $\frac{1}{P} S\!\left(\frac{k}{P}\right)$. Proceeding from the definition of the Fourier coefficients we have:

$$S_k \triangleq \frac{1}{P} \int_0^{P} s_P(x)\, e^{-i 2\pi \frac{k}{P} x}\, dx = \frac{1}{P} \int_0^{P} \left( \sum_{n=-\infty}^{\infty} s(x + nP) \right) e^{-i 2\pi \frac{k}{P} x}\, dx = \frac{1}{P} \sum_{n=-\infty}^{\infty} \int_0^{P} s(x + nP)\, e^{-i 2\pi \frac{k}{P} x}\, dx,$$

where the interchange of summation with integration is once again justified by dominated convergence. With a change of variables ($\tau = x + nP$) this becomes:

$$S_k = \frac{1}{P} \sum_{n=-\infty}^{\infty} \int_{nP}^{nP + P} s(\tau)\, e^{-i 2\pi \frac{k}{P} \tau}\, \underbrace{e^{i 2\pi k n}}_{=\,1}\, d\tau = \frac{1}{P} \int_{-\infty}^{\infty} s(\tau)\, e^{-i 2\pi \frac{k}{P} \tau}\, d\tau = \frac{1}{P}\, S\!\left(\frac{k}{P}\right).$$

Distributional formulation

These equations can be interpreted in the language of distributions [3] [4] :§7.2 for a function $s$ whose derivatives are all rapidly decreasing (see Schwartz function). The Poisson summation formula arises as a particular case of the convolution theorem on tempered distributions, using the Dirac comb distribution and its Fourier series:

$$\operatorname{III}_P(x) \triangleq \sum_{n=-\infty}^{\infty} \delta(x - nP) \equiv \frac{1}{P} \sum_{k=-\infty}^{\infty} e^{i 2\pi \frac{k}{P} x}, \qquad \mathcal{F}\{\operatorname{III}_P\}(f) = \frac{1}{P} \sum_{k=-\infty}^{\infty} \delta\!\left(f - \frac{k}{P}\right).$$

In other words, the periodization of a Dirac delta $\delta$, resulting in a Dirac comb, corresponds to the discretization of its spectrum, which is constantly one. Hence, this is again a Dirac comb, but with reciprocal increments.

For the case $P = 1$, Eq.1 readily follows:

$$\sum_{k=-\infty}^{\infty} S(k) = \sum_{k=-\infty}^{\infty} \int_{-\infty}^{\infty} s(x)\, e^{-i 2\pi k x}\, dx = \int_{-\infty}^{\infty} s(x) \left( \sum_{k=-\infty}^{\infty} e^{-i 2\pi k x} \right) dx = \int_{-\infty}^{\infty} s(x) \left( \sum_{n=-\infty}^{\infty} \delta(x - n) \right) dx = \sum_{n=-\infty}^{\infty} s(n).$$

Similarly:

Or: [5] :143

The Poisson summation formula can also be proved quite conceptually using the compatibility of Pontryagin duality with short exact sequences such as $0 \to \mathbb{Z} \to \mathbb{R} \to \mathbb{R}/\mathbb{Z} \to 0$. [6]

Applicability

Eq.2 holds provided $s(x)$ is a continuous integrable function which satisfies

$$|s(x)| + |S(x)| \le C\,(1 + |x|)^{-1-\delta}$$

for some $C, \delta > 0$ and every $x$. [7] [8] Note that such $s(x)$ is uniformly continuous; this, together with the decay assumption on $s$, shows that the series defining $s_P$ converges uniformly to a continuous function. Eq.2 holds in the strong sense that both sides converge uniformly and absolutely to the same limit. [8]

Eq.2 holds in a pointwise sense under the strictly weaker assumption that $s$ has bounded variation and is normalized so that $2\,s(x) = \lim_{\varepsilon \to 0}\bigl(s(x + \varepsilon) + s(x - \varepsilon)\bigr)$. [2]

The Fourier series on the right-hand side of Eq.2 is then understood as a (conditionally convergent) limit of symmetric partial sums.

As shown above, Eq.2 holds under the much less restrictive assumption that $s$ is in $L^1(\mathbb{R})$, but then it is necessary to interpret it in the sense that the right-hand side is the (possibly divergent) Fourier series of $s_P(x)$. [2] In this case, one may extend the region where equality holds by considering summability methods such as Cesàro summability. When interpreting convergence in this way, the case $x = 0$ of Eq.2 holds under the less restrictive conditions that $s(x)$ is integrable and $0$ is a point of continuity of $s_P(x)$. However Eq.2 may fail to hold even when both $s$ and $S$ are integrable and continuous, and the sums converge absolutely. [9]

Applications

Method of images

In partial differential equations, the Poisson summation formula provides a rigorous justification for the fundamental solution of the heat equation with absorbing rectangular boundary by the method of images. Here the heat kernel on $\mathbb{R}^2$ is known, and that of a rectangle is determined by taking the periodization. The Poisson summation formula similarly provides a connection between Fourier analysis on Euclidean spaces and on the tori of the corresponding dimensions. [7] In one dimension, the resulting solution is called a theta function.
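As a concrete illustration (a minimal sketch, not from the article, written in one dimension for simplicity), the heat kernel on an interval $[0, L]$ with absorbing (Dirichlet) boundary can be computed either as an image sum built from the free kernel or, via Poisson summation, as an eigenfunction (sine) series; the values of $L$, $t$, $x$, $y$ below are arbitrary:

```python
import numpy as np

# Heat kernel on [0, L] with absorbing (Dirichlet) boundary, computed two ways:
# (1) method of images, summing translated/reflected copies of the free kernel
#     k(z, t) = exp(-z^2 / (4 t)) / sqrt(4 pi t);
# (2) the eigenfunction (sine) series related to it by Poisson summation.
L, t, x, y = 1.0, 0.05, 0.3, 0.7        # illustrative values

def k(z):
    return np.exp(-z**2 / (4 * t)) / np.sqrt(4 * np.pi * t)

n = np.arange(-50, 51)
image_sum = (k(x - y - 2 * n * L) - k(x + y - 2 * n * L)).sum()

m = np.arange(1, 200)
sine_series = (2 / L) * np.sum(np.exp(-(m * np.pi / L)**2 * t)
                               * np.sin(m * np.pi * x / L)
                               * np.sin(m * np.pi * y / L))

print(image_sum, sine_series)           # the two expansions agree numerically
```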

In electrodynamics, the method is also used to accelerate the computation of periodic Green's functions. [10]

Sampling

In the statistical study of time-series, if $s$ is a function of time, then looking only at its values at equally spaced points of time is called "sampling." In applications, typically the function $s$ is band-limited, meaning that there is some cutoff frequency $f_o$ such that $S(f)$ is zero for frequencies exceeding the cutoff: $S(f) = 0$ for $|f| > f_o$. For band-limited functions, choosing the sampling rate $\tfrac{1}{T} > 2 f_o$ guarantees that no information is lost: since $S$ can be reconstructed from these sampled values, then, by Fourier inversion, so can $s$. This leads to the Nyquist–Shannon sampling theorem. [1]
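A minimal sketch (not from the article) of the resulting Whittaker–Shannon reconstruction; the band-limited test signal, cutoff $f_o$, and sampling rate are illustrative choices:

```python
import numpy as np

# Sketch of the sampling theorem: a band-limited signal is recovered from its
# samples s(nT), taken at a rate 1/T > 2*f_o, by sinc interpolation.
f_o = 3.0                        # band limit: S(f) = 0 for |f| > f_o
T = 1.0 / (2.5 * f_o)            # sampling interval, rate 2.5*f_o > 2*f_o

def s(t):
    # band-limited test signal: its spectrum is a triangle supported on [-f_o, f_o]
    return np.sinc(f_o * t) ** 2

n = np.arange(-500, 501)         # sample indices (truncated; s decays like 1/t^2)
t = 0.1234                       # arbitrary reconstruction point
reconstruction = np.sum(s(n * T) * np.sinc((t - n * T) / T))
print(s(t), reconstruction)      # the two values agree closely
```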

Ewald summation

Computationally, the Poisson summation formula is useful since a slowly converging summation in real space can be converted into a quickly converging equivalent summation in Fourier space. [11] (A broad function in real space becomes a narrow function in Fourier space and vice versa.) This is the essential idea behind Ewald summation.
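A small illustration of this trade-off (a sketch, not Ewald's method itself) is the Gaussian theta identity obtained from Eq.1, $\sum_n e^{-\pi t n^2} = t^{-1/2} \sum_k e^{-\pi k^2 / t}$: for small $t$ the left-hand sum converges slowly and the right-hand sum almost immediately:

```python
import numpy as np

# Real-space vs. Fourier-space convergence, via the theta identity from Eq.1:
#   sum_n exp(-pi t n^2) = t**(-1/2) * sum_k exp(-pi k^2 / t).
t = 0.001                                # a "broad" Gaussian in real space

n = np.arange(-2000, 2001)               # many terms needed in real space
real_space = np.exp(-np.pi * t * n**2).sum()

k = np.arange(-3, 4)                     # a handful of terms suffice in Fourier space
fourier_space = t**-0.5 * np.exp(-np.pi * k**2 / t).sum()

print(real_space, fourier_space)         # both equal t**-0.5 up to roundoff here
```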

Approximations of integrals

The Poisson summation formula is also useful to bound the errors obtained when an integral is approximated by a (Riemann) sum. Consider an approximation of $S(0) = \int_{-\infty}^{\infty} s(x)\,dx$ as $\delta \sum_{n=-\infty}^{\infty} s(n\delta)$, where $\delta$ is the size of the bin. Then, according to Eq.2, this approximation coincides with $\sum_{k=-\infty}^{\infty} S(k/\delta)$. The error in the approximation can then be bounded as $\left|\sum_{k \ne 0} S(k/\delta)\right| \le \sum_{k \ne 0} \left|S(k/\delta)\right|$. This is particularly useful when the Fourier transform of $s(x)$ is rapidly decaying if $1/\delta \gg 1$.
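For example (a sketch with an arbitrary choice of $s$ and bin size), taking $s(x) = e^{-x^2}$, so that $S(f) = \sqrt{\pi}\, e^{-\pi^2 f^2}$, the observed error of the Riemann sum matches the sum of the non-zero-frequency samples of $S$ predicted by Eq.2:

```python
import numpy as np

# Riemann-sum error for s(x) = exp(-x^2), S(f) = sqrt(pi) * exp(-pi^2 f^2).
delta = 1.5                                       # deliberately coarse bin size
n = np.arange(-30, 31)
riemann_sum = delta * np.exp(-(n * delta)**2).sum()
exact = np.sqrt(np.pi)                            # S(0), the true integral

# By Eq.2 (at x = 0), the error equals the sum of S(k/delta) over k != 0.
k = np.arange(1, 10)
predicted_error = 2 * np.sum(np.sqrt(np.pi) * np.exp(-(np.pi * k / delta)**2))

print(riemann_sum - exact, predicted_error)       # both are approximately 0.0441
```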

Lattice points inside a sphere

The Poisson summation formula may be used to derive Landau's asymptotic formula for the number of lattice points inside a large Euclidean sphere. It can also be used to show that if an integrable function $s$ and its Fourier transform $S$ both have compact support, then $s = 0$. [1]

Number theory

In number theory, Poisson summation can also be used to derive a variety of functional equations including the functional equation for the Riemann zeta function. [12]

One important such use of Poisson summation concerns theta functions: periodic summations of Gaussians. Put $q = e^{i\pi\tau}$, for a complex number $\tau$ in the upper half plane, and define the theta function:

$$\theta(\tau) = \sum_{n=-\infty}^{\infty} q^{n^2}.$$

The relation between $\theta(-1/\tau)$ and $\theta(\tau)$ turns out to be important for number theory, since this kind of relation is one of the defining properties of a modular form. By choosing $s(x) = e^{-\pi t x^2}$, with $t > 0$, and using the fact that its Fourier transform is $S(f) = t^{-1/2} e^{-\pi f^2 / t}$, one can conclude:

$$\theta(-1/\tau) = \sqrt{-i\tau}\;\theta(\tau),$$

by putting $t = -i\tau$.

It follows from this that $\theta^8$ has a simple transformation property under $\tau \mapsto -1/\tau$, and this can be used to prove Jacobi's formula for the number of different ways to express an integer as the sum of eight perfect squares.
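The transformation law is easy to check numerically; the sketch below (with an arbitrarily chosen $\tau$ in the upper half plane) compares the two sides directly:

```python
import numpy as np

# Check of theta(-1/tau) = sqrt(-i tau) * theta(tau), with
# theta(tau) = sum_n exp(i pi n^2 tau) for tau in the upper half plane.
def theta(tau, N=200):
    n = np.arange(-N, N + 1)
    return np.exp(1j * np.pi * n**2 * tau).sum()

tau = 0.3 + 1.2j                          # arbitrary point in the upper half plane
lhs = theta(-1 / tau)
rhs = np.sqrt(-1j * tau) * theta(tau)     # principal branch of the square root
print(lhs, rhs, abs(lhs - rhs))           # the difference is at roundoff level
```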

Sphere packings

Cohn & Elkies [13] proved an upper bound on the density of sphere packings using the Poisson summation formula, which subsequently led to a proof of optimal sphere packings in dimension 8 and 24.


Generalizations

The Poisson summation formula holds in Euclidean space of arbitrary dimension. Let $\Lambda = \mathbb{Z}^d$ be the lattice in $\mathbb{R}^d$ consisting of points with integer coordinates. For a function $s$ in $L^1(\mathbb{R}^d)$, consider the series given by summing the translates of $s$ by elements of $\Lambda$:

$$\mathbb{P}s(x) = \sum_{\nu \in \Lambda} s(x + \nu).$$

Theorem. For $s$ in $L^1(\mathbb{R}^d)$, the above series converges pointwise almost everywhere, and thus defines a periodic function $\mathbb{P}s$ on $\mathbb{R}^d$, i.e. a function on the torus $\mathbb{T}^d = \mathbb{R}^d / \Lambda$. $\mathbb{P}s$ lies in $L^1(\mathbb{T}^d)$ with $\|\mathbb{P}s\|_{L^1(\mathbb{T}^d)} \le \|s\|_{L^1(\mathbb{R}^d)}$.
Moreover, for all $\nu$ in $\Lambda$, $\widehat{\mathbb{P}s}(\nu)$ (the Fourier transform on $\mathbb{T}^d$) equals $\widehat{s}(\nu)$ (the Fourier transform on $\mathbb{R}^d$).

When $s$ is in addition continuous, and both $s$ and $\widehat{s}$ decay sufficiently fast at infinity, then one can "invert" the domain back to $\mathbb{R}^d$ and make a stronger statement. More precisely, if

$$|s(x)| + |\widehat{s}(x)| \le C\,(1 + |x|)^{-d-\delta}$$

for some $C, \delta > 0$, then [8] :VII §2

$$\sum_{\nu \in \Lambda} s(x + \nu) = \sum_{\nu \in \Lambda} \widehat{s}(\nu)\, e^{2\pi i\, x \cdot \nu},$$

where both series converge absolutely and uniformly on $\Lambda$. When $d = 1$ and $x = 0$, this gives Eq.1 above.

More generally, a version of the statement holds if $\Lambda$ is replaced by a more general lattice in $\mathbb{R}^d$. The dual lattice $\Lambda'$ can be defined as a subset of the dual vector space or alternatively by Pontryagin duality. Then the statement is that the sums of delta-functions at each point of $\Lambda$ and at each point of $\Lambda'$ are again Fourier transforms of one another as distributions, subject to correct normalization.
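Concretely, under decay assumptions analogous to those above and with the convention $\widehat{s}(\xi) = \int_{\mathbb{R}^d} s(x)\, e^{-2\pi i\, x \cdot \xi}\, dx$, the lattice version can be written (a standard formulation, stated here for reference):

$$\sum_{\lambda \in \Lambda} s(x + \lambda) = \frac{1}{\operatorname{vol}(\mathbb{R}^d / \Lambda)} \sum_{\nu \in \Lambda'} \widehat{s}(\nu)\, e^{2\pi i\, \nu \cdot x},$$

where $\operatorname{vol}(\mathbb{R}^d / \Lambda)$ is the covolume of $\Lambda$ and $\Lambda' = \{\nu : \nu \cdot \lambda \in \mathbb{Z} \text{ for all } \lambda \in \Lambda\}$ is the dual lattice.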

This is applied in the theory of theta functions, and is a possible method in the geometry of numbers. In fact, in more recent work on counting lattice points in regions it is routinely used: summing the indicator function of a region $D$ over lattice points is exactly the question, so that the left-hand side of the summation formula is what is sought and the right-hand side is something that can be attacked by mathematical analysis.

Selberg trace formula

Further generalization to locally compact abelian groups is required in number theory. In non-commutative harmonic analysis, the idea is taken even further in the Selberg trace formula, but takes on a much deeper character.

A series of mathematicians applying harmonic analysis to number theory, most notably Martin Eichler, Atle Selberg, Robert Langlands, and James Arthur, have generalised the Poisson summation formula to the Fourier transform on non-commutative locally compact reductive algebraic groups $G$ with a discrete subgroup $\Gamma$ such that $G / \Gamma$ has finite volume. For example, $G$ can be the real points of $SL_n$ and $\Gamma$ can be the integral points of $SL_n$. In this setting, $G$ plays the role of the real number line in the classical version of Poisson summation, and $\Gamma$ plays the role of the integers $n$ that appear in the sum. The generalised version of Poisson summation is called the Selberg trace formula, and has played a role in proving many cases of Artin's conjecture and in Wiles's proof of Fermat's Last Theorem. The left-hand side of Eq.1 becomes a sum over irreducible unitary representations of $G$, and is called "the spectral side," while the right-hand side becomes a sum over conjugacy classes of $\Gamma$, and is called "the geometric side."

The Poisson summation formula is the archetype for vast developments in harmonic analysis and number theory.

Convolution theorem

The Poisson summation formula is a particular case of the convolution theorem on tempered distributions. If one of the two factors is the Dirac comb, one obtains periodic summation on one side and sampling on the other side of the equation. Applied to the Dirac delta function and its Fourier transform, the function that is constantly 1, this yields the Dirac comb identity.
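A quick numerical illustration of this correspondence (a sketch using the same Gaussian pair as above and arbitrary $T$ and $f$): sampling $s$ at spacing $T$ and forming the discrete-time Fourier transform of the samples reproduces the periodization of $S$ with period $1/T$, as in Eq.3:

```python
import numpy as np

# Sampling s at spacing T on one side <-> periodizing S with period 1/T on the
# other (Eq.3), checked for s(x) = exp(-pi x^2), S(f) = exp(-pi f^2).
T, f = 0.8, 0.35                        # illustrative sampling interval and frequency
n = np.arange(-60, 61)
k = np.arange(-60, 61)

dtft_of_samples = T * np.sum(np.exp(-np.pi * (n * T)**2)
                             * np.exp(-1j * 2 * np.pi * n * T * f))
periodized_spectrum = np.sum(np.exp(-np.pi * (f - k / T)**2))

print(dtft_of_samples.real, periodized_spectrum)   # equal, per Eq.3
```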


      References

1. Pinsky, M. (2002), Introduction to Fourier Analysis and Wavelets, Brooks Cole, ISBN 978-0-534-37660-4.
2. Zygmund, Antoni (1968), Trigonometric Series (2nd ed.), Cambridge University Press (published 1988), ISBN 978-0-521-35885-9.
3. Córdoba, A., "La formule sommatoire de Poisson", Comptes Rendus de l'Académie des Sciences, Série I, 306: 373–376.
4. Hörmander, L. (1983), The Analysis of Linear Partial Differential Operators I, Grundl. Math. Wissenschaft., vol. 256, Springer, doi:10.1007/978-3-642-96750-4, ISBN 3-540-12104-8, MR 0717035.
5. Oppenheim, Alan V.; Schafer, Ronald W.; Buck, John R. (1999). Discrete-Time Signal Processing (2nd ed.). Upper Saddle River, N.J.: Prentice Hall. ISBN 0-13-754920-2. "samples of the Fourier transform of an aperiodic sequence x[n] can be thought of as DFS coefficients of a periodic sequence obtained through summing periodic replicas of x[n]."
6. Deitmar, Anton; Echterhoff, Siegfried (2014), Principles of Harmonic Analysis, Universitext (2nd ed.), doi:10.1007/978-3-319-05792-7, ISBN 978-3-319-05791-0.
7. Grafakos, Loukas (2004), Classical and Modern Fourier Analysis, Pearson Education, pp. 253–257, ISBN 0-13-035399-X.
8. Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton, N.J.: Princeton University Press, ISBN 978-0-691-08078-9.
9. Katznelson, Yitzhak (1976), An Introduction to Harmonic Analysis (2nd corrected ed.), New York: Dover Publications, ISBN 0-486-63331-4.
10. Kinayman, Noyan; Aksun, M. I. (1995). "Comparative study of acceleration techniques for integrals and series in electromagnetic problems". Radio Science. 30 (6): 1713–1722. Bibcode:1995RaSc...30.1713K. doi:10.1029/95RS02060. hdl:11693/48408.
11. Woodward, Philipp M. (1953). Probability and Information Theory, with Applications to Radar. Academic Press, p. 36.
12. Edwards, H. M. (1974). Riemann's Zeta Function. Academic Press, pp. 209–211. ISBN 0-486-41740-9.
13. Cohn, Henry; Elkies, Noam (2003), "New upper bounds on sphere packings I", Annals of Mathematics, 157 (2): 689–714, arXiv:math/0110009, doi:10.4007/annals.2003.157.689, MR 1973059.
