Poisson summation formula

In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.

Forms of the equation

Consider an aperiodic function $s(x)$ with Fourier transform $S(f) \triangleq \int_{-\infty}^{\infty} s(x)\, e^{-i2\pi f x}\, dx$, alternatively designated by $\hat{s}(f)$ and $\mathcal{F}\{s\}(f)$.

The basic Poisson summation formula is: [1]

$$\sum_{n=-\infty}^{\infty} s(n) = \sum_{k=-\infty}^{\infty} S(k)$$   (Eq.1)
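Eq.1 can be sanity-checked numerically. The sketch below uses a scaled Gaussian test function (an illustrative choice, since its Fourier transform is known in closed form):

```python
import math

# Scaled Gaussian s(x) = exp(-pi*t*x^2), t > 0, with Fourier transform
# S(f) = (1/sqrt(t)) * exp(-pi*f^2/t); Eq.1 says sum_n s(n) = sum_k S(k).
def poisson_lhs(t, N=50):
    return sum(math.exp(-math.pi * t * n * n) for n in range(-N, N + 1))

def poisson_rhs(t, N=50):
    return sum(math.exp(-math.pi * k * k / t) for k in range(-N, N + 1)) / math.sqrt(t)

print(poisson_lhs(2.0), poisson_rhs(2.0))  # the two sums agree
```

Both truncated sums converge extremely fast, so 101 terms already reproduce the identity to machine precision.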

Also consider the periodic functions

$$s_P(x) \;\triangleq\; \sum_{n=-\infty}^{\infty} s(x+nP) \qquad\text{and}\qquad S_{1/T}(f) \;\triangleq\; \sum_{k=-\infty}^{\infty} S\!\left(f-\frac{k}{T}\right),$$

where parameters $P > 0$ and $T > 0$ are in the same units as $x$.

Then Eq.1 is a special case ($P = 1$, $x = 0$) of this generalization: [2] [3]

$$s_P(x) = \sum_{k=-\infty}^{\infty} \frac{1}{P}\, S\!\left(\frac{k}{P}\right)\, e^{i2\pi\frac{k}{P}x}$$   (Eq.2)

which is a Fourier series expansion with coefficients that are samples of the function $S$. Similarly:

$$S_{1/T}(f) = \sum_{n=-\infty}^{\infty} T\, s(nT)\, e^{-i2\pi f n T}$$   (Eq.3)

also known as the discrete-time Fourier transform (DTFT).
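For $P = 1$, the Fourier coefficients of the periodization $s_P$ can be compared numerically against samples of $S$. The sketch below assumes a Gaussian test function, chosen for illustration because its transform is known in closed form:

```python
import cmath
import math

# Illustrative test function: s(x) = exp(-pi*t*x^2),
# with Fourier transform S(f) = (1/sqrt(t)) * exp(-pi*f^2/t).
t = 3.0

def s(x):
    return math.exp(-math.pi * t * x * x)

def S(f):
    return math.exp(-math.pi * f * f / t) / math.sqrt(t)

def s_P(x, N=30):
    # periodization with period P = 1
    return sum(s(x + n) for n in range(-N, N + 1))

def fourier_coeff(k, M=4000):
    # Riemann sum for c_k = ∫_0^1 s_P(x) exp(-i 2π k x) dx; for a smooth
    # periodic integrand this converges very quickly in M
    return sum(s_P(m / M) * cmath.exp(-2j * math.pi * k * m / M) for m in range(M)) / M

print(abs(fourier_coeff(2) - S(2)))  # ≈ 0: the coefficient c_k equals the sample S(k)
```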

Derivations

A proof may be found in either Pinsky [2] or Zygmund. [3] Eq.2, for instance, holds in the sense that if $s \in L^1(\mathbb{R})$, then the right-hand side is the (possibly divergent) Fourier series of the left-hand side. It follows from the dominated convergence theorem that $s_P(x)$ exists and is finite for almost every $x$. Furthermore it follows that $s_P$ is integrable on any interval of length $P$. So it is sufficient to show that the Fourier series coefficients of $s_P$ are $\tfrac{1}{P}\, S\!\left(\tfrac{k}{P}\right)$. Proceeding from the definition of the Fourier coefficients we have:

$$c_k \;\triangleq\; \frac{1}{P}\int_0^P s_P(x)\, e^{-i2\pi\frac{k}{P}x}\, dx \;=\; \frac{1}{P}\sum_{n=-\infty}^{\infty}\int_0^P s(x+nP)\, e^{-i2\pi\frac{k}{P}x}\, dx,$$

where the interchange of summation with integration is once again justified by dominated convergence. With a change of variables ($\tau = x + nP$) this becomes:

$$c_k = \frac{1}{P}\sum_{n=-\infty}^{\infty}\int_{nP}^{(n+1)P} s(\tau)\, e^{-i2\pi\frac{k}{P}\tau}\,\underbrace{e^{i2\pi k n}}_{=\,1}\, d\tau \;=\; \frac{1}{P}\int_{-\infty}^{\infty} s(\tau)\, e^{-i2\pi\frac{k}{P}\tau}\, d\tau \;=\; \frac{1}{P}\, S\!\left(\frac{k}{P}\right).$$

The proof of Eq.3 is of course similar, except that the formula for the series coefficients in frequency is:

$$c_n = T\int_0^{1/T} S_{1/T}(f)\, e^{+i2\pi nTf}\, df = T\, s(nT).$$

The sign of $e^{+i2\pi nTf}$ is positive, because it has to be the opposite of the one on the right-hand side of Eq.3.

Distributional formulation

These equations can be interpreted in the language of distributions [4] [5] :§7.2 for a function $s$ whose derivatives are all rapidly decreasing (see Schwartz function). The Poisson summation formula arises as a particular case of the convolution theorem on tempered distributions, using the Dirac comb distribution and its Fourier series:

$$\sum_{n=-\infty}^{\infty} \delta(x-nT) \;\equiv\; \frac{1}{T}\sum_{k=-\infty}^{\infty} e^{i2\pi\frac{k}{T}x} \quad\overset{\mathcal{F}}{\Longleftrightarrow}\quad \frac{1}{T}\sum_{k=-\infty}^{\infty}\delta\!\left(f-\frac{k}{T}\right).$$

In other words, the periodization of a Dirac delta $\delta(x)$, resulting in a Dirac comb, corresponds to the discretization of its spectrum, which is constantly one. Hence, this is again a Dirac comb but with reciprocal increments.

For the case $T = 1$, Eq.1 readily follows:

$$\sum_{k=-\infty}^{\infty} S(k) = \int_{-\infty}^{\infty}\left(\sum_{k=-\infty}^{\infty} e^{-i2\pi kx}\right) s(x)\, dx = \int_{-\infty}^{\infty}\left(\sum_{n=-\infty}^{\infty}\delta(x-n)\right) s(x)\, dx = \sum_{n=-\infty}^{\infty} s(n).$$

Similarly:

$$\sum_{k=-\infty}^{\infty} S\!\left(f-\frac{k}{T}\right) = \mathcal{F}\left\{s(x)\cdot T\sum_{n=-\infty}^{\infty}\delta(x-nT)\right\}(f) = \sum_{n=-\infty}^{\infty} T\, s(nT)\, e^{-i2\pi fnT}.$$

Or: [6] :143

$$\frac{1}{T}\, S_{1/T}(f) = \sum_{n=-\infty}^{\infty} s(nT)\, e^{-i2\pi fnT}.$$


The Poisson summation formula can also be proved quite conceptually using the compatibility of Pontryagin duality with short exact sequences such as [7]

$$0 \to \mathbb{Z} \to \mathbb{R} \to \mathbb{R}/\mathbb{Z} \to 0.$$

Applicability

Eq.2 holds provided $s(x)$ is a continuous integrable function which satisfies $|s(x)| + |S(x)| \le C(1+|x|)^{-1-\delta}$ for some $C, \delta > 0$ and every $x$. [8] [9] Note that such an $s$ is uniformly continuous; this, together with the decay assumption on $s$, shows that the series defining $s_P$ converges uniformly to a continuous function. Eq.2 holds in the strong sense that both sides converge uniformly and absolutely to the same limit. [9]

Eq.2 holds in a pointwise sense under the strictly weaker assumption that $s$ has bounded variation and $2\,s(x) = \lim_{\varepsilon\to 0}\big(s(x+\varepsilon)+s(x-\varepsilon)\big)$. [3] The Fourier series on the right-hand side of Eq.2 is then understood as a (conditionally convergent) limit of symmetric partial sums.

As shown above, Eq.2 holds under the much less restrictive assumption that $s$ is in $L^1(\mathbb{R})$, but then it is necessary to interpret it in the sense that the right-hand side is the (possibly divergent) Fourier series of $s_P$. [3] In this case, one may extend the region where equality holds by considering summability methods such as Cesàro summability. When interpreting convergence in this way, Eq.2 in the case $x = 0$ holds under the less restrictive conditions that $s$ is integrable and 0 is a point of continuity of $s_P$. However Eq.2 may fail to hold even when both $s$ and $S$ are integrable and continuous, and the sums converge absolutely. [10]

Applications

Method of images

In partial differential equations, the Poisson summation formula provides a rigorous justification for the fundamental solution of the heat equation with absorbing rectangular boundary by the method of images. Here the heat kernel on the whole space is known, and that of a rectangle is determined by taking the periodization. The Poisson summation formula similarly provides a connection between Fourier analysis on Euclidean spaces and on the tori of the corresponding dimensions. [8] In one dimension, the resulting solution is called a theta function.

In electrodynamics, the method is also used to accelerate the computation of periodic Green's functions. [11]

Sampling

In the statistical study of time-series, if $s$ is a function of time, then looking only at its values at equally spaced points of time is called "sampling." In applications, typically the function $s$ is band-limited, meaning that there is some cutoff frequency $f_o$ such that $S(f)$ is zero for frequencies exceeding the cutoff: $S(f) = 0$ for $|f| > f_o$. For band-limited functions, choosing the sampling rate $\tfrac{1}{T} > 2f_o$ guarantees that no information is lost: since $S$ can be reconstructed from these sampled values. Then, by Fourier inversion, so can $s$. This leads to the Nyquist–Shannon sampling theorem. [2]
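A short numerical illustration of reconstruction from samples, using a hypothetical band-limited test signal and Whittaker–Shannon interpolation:

```python
import math

def sinc(x):
    # normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# Hypothetical band-limited signal: s(x) = sinc(x)^2, whose transform is a
# triangle supported on [-1, 1], so the cutoff is f_o = 1 and any rate
# 1/T > 2 suffices.
def s(x):
    return sinc(x) ** 2

T = 0.25  # sampling interval; rate 1/T = 4 > 2*f_o

def reconstruct(x, N=2000):
    # Whittaker–Shannon interpolation from the samples s(nT)
    return sum(s(n * T) * sinc((x - n * T) / T) for n in range(-N, N + 1))

print(s(0.3), reconstruct(0.3))  # agree closely
```

Truncating the interpolation series at $|n| \le 2000$ leaves only a tiny tail, since the summands decay like $1/n^3$ for this signal.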

Ewald summation

Computationally, the Poisson summation formula is useful since a slowly converging summation in real space is guaranteed to be converted into a quickly converging equivalent summation in Fourier space. [12] (A broad function in real space becomes a narrow function in Fourier space and vice versa.) This is the essential idea behind Ewald summation.
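A theta-function toy model of this trade-off (an illustrative sketch, not Ewald summation itself):

```python
import math

# For t << 1 the direct sum sum_n exp(-pi t n^2) decays only on the scale
# n ~ 1/sqrt(t), so it needs many terms, while its Poisson transform
# (1/sqrt(t)) * sum_k exp(-pi k^2 / t) converges after a handful of terms.
t = 1e-4
slow = sum(math.exp(-math.pi * t * n * n) for n in range(-1000, 1001))           # ~2000 terms
fast = sum(math.exp(-math.pi * k * k / t) for k in range(-2, 3)) / math.sqrt(t)  # 5 terms
print(slow, fast)  # both ≈ 100
```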

Approximations of integrals

The Poisson summation formula is also useful to bound the errors obtained when an integral is approximated by a (Riemann) sum. Consider an approximation of $S(0) = \int_{-\infty}^{\infty} s(x)\,dx$ as $\delta\sum_{n=-\infty}^{\infty} s(n\delta)$, where $\delta$ is the size of the bin. Then, according to Eq.2, this approximation coincides with $\sum_{k=-\infty}^{\infty} S(k/\delta)$. The error in the approximation can then be bounded as $\left|\sum_{k\neq 0} S(k/\delta)\right| \le \sum_{k\neq 0}\left|S(k/\delta)\right|$. This is particularly useful when the Fourier transform of $s$ is rapidly decaying if $1/\delta \gg 1$.
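The error analysis can be checked concretely for a Gaussian (a hypothetical test function whose transform is known in closed form):

```python
import math

# Riemann-sum approximation of S(0) = ∫ s(x) dx for s(x) = exp(-x^2),
# whose transform is S(f) = sqrt(pi) * exp(-(pi*f)^2).  By Eq.2 at x = 0,
# the approximation error equals sum_{k != 0} S(k/delta).
delta = 1.0
riemann = delta * sum(math.exp(-(n * delta) ** 2) for n in range(-50, 51))
exact = math.sqrt(math.pi)  # S(0)
predicted_error = 2 * sum(math.sqrt(math.pi) * math.exp(-(math.pi * k / delta) ** 2)
                          for k in range(1, 20))
print(riemann - exact, predicted_error)  # the two error values match
```

Even with the coarse bin $\delta = 1$, the error is only about $2\sqrt{\pi}\,e^{-\pi^2} \approx 1.8\times 10^{-4}$, reflecting the rapid decay of the Gaussian's transform.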

Lattice points inside a sphere

The Poisson summation formula may be used to derive Landau's asymptotic formula for the number of lattice points inside a large Euclidean sphere. It can also be used to show that if an integrable function $s$ and its Fourier transform $S$ both have compact support, then $s = 0$. [2]

Number theory

In number theory, Poisson summation can also be used to derive a variety of functional equations including the functional equation for the Riemann zeta function. [13]

One important such use of Poisson summation concerns theta functions: periodic summations of Gaussians. Put $q = e^{i\pi\tau}$, for a complex number $\tau$ in the upper half plane, and define the theta function:

$$\theta(\tau) = \sum_{n=-\infty}^{\infty} q^{n^2}.$$

The relation between $\theta(-1/\tau)$ and $\theta(\tau)$ turns out to be important for number theory, since this kind of relation is one of the defining properties of a modular form. By choosing $s(x) = e^{-\pi tx^2}$ and using the fact that $S(f) = \tfrac{1}{\sqrt{t}}\,e^{-\pi f^2/t}$, one can conclude:

$$\theta\!\left(\frac{-1}{\tau}\right) = \sqrt{\frac{\tau}{i}}\;\theta(\tau),$$

by putting $t = -i\tau$.

It follows from this that $\theta^8$ has a simple transformation property under $\tau \mapsto \tau+1$ and $\tau \mapsto -1/\tau$, and this can be used to prove Jacobi's formula for the number of different ways to express an integer as the sum of eight perfect squares.
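The transformation law can be verified numerically from the $q$-series at a sample point of the upper half plane (an illustrative spot-check):

```python
import cmath

def theta(tau, N=60):
    # q-series: theta(tau) = sum_n exp(i*pi*tau*n^2); converges fast for Im(tau) > 0
    return sum(cmath.exp(1j * cmath.pi * tau * n * n) for n in range(-N, N + 1))

tau = complex(0.3, 1.0)                  # a point in the upper half plane
left = theta(-1 / tau)
right = cmath.sqrt(tau / 1j) * theta(tau)  # sqrt(tau/i) on the principal branch
print(abs(left - right))  # ≈ 0
```

For $\tau$ in the upper half plane, $\tau/i$ has positive real part, so the principal branch of the square root is the right one here.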

Sphere packings

Cohn & Elkies [14] proved an upper bound on the density of sphere packings using the Poisson summation formula, which subsequently led to proofs of the optimal sphere packings in dimensions 8 and 24.

Generalizations

The Poisson summation formula holds in Euclidean space of arbitrary dimension. Let $\Lambda = \mathbb{Z}^d$ be the lattice in $\mathbb{R}^d$ consisting of points with integer coordinates. For a function $s$ in $L^1(\mathbb{R}^d)$, consider the series given by summing the translates of $s$ by elements of $\Lambda$:

$$\mathbb{P}s(x) = \sum_{\nu\in\Lambda} s(x+\nu).$$

Theorem. For $s$ in $L^1(\mathbb{R}^d)$, the above series converges pointwise almost everywhere, and defines a $\Lambda$-periodic function on $\mathbb{R}^d$, hence a function $\mathbb{P}s(\bar{x})$ on the torus $\mathbb{T}^d = \mathbb{R}^d/\Lambda$. $\mathbb{P}s$ lies in $L^1(\mathbb{T}^d)$ with $\|\mathbb{P}s\|_{L^1(\mathbb{T}^d)} \le \|s\|_{L^1(\mathbb{R}^d)}$. Moreover, for all $\nu$ in $\Lambda$,

$$\widehat{\mathbb{P}s}(\nu)$$

(the Fourier transform of $\mathbb{P}s$ on the torus $\mathbb{T}^d$) equals

$$\hat{s}(\nu)$$

(the Fourier transform of $s$ on $\mathbb{R}^d$).

When $s$ is in addition continuous, and both $s$ and $\hat{s}$ decay sufficiently fast at infinity, then one can "invert" the Fourier series back to its domain and make a stronger statement. More precisely, if

$$|s(x)| + |\hat{s}(x)| \le C\,(1+|x|)^{-d-\delta}$$

for some C, δ > 0, then [9] :VII §2

$$\sum_{\nu\in\Lambda} s(x+\nu) = \sum_{\nu\in\Lambda}\hat{s}(\nu)\, e^{i2\pi\nu\cdot x},$$

where both series converge absolutely and uniformly on Λ. When d = 1 and x = 0, this gives Eq.1 above.
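A numerical spot-check of the multidimensional statement at $x = 0$ in dimension $d = 2$, using a two-dimensional Gaussian (an illustrative choice with a closed-form transform):

```python
import math

# d = 2 check at x = 0: s(x) = exp(-pi*t*|x|^2) on R^2 has Fourier transform
# (1/t) * exp(-pi*|f|^2/t), and Poisson summation over the integer lattice
# says the two double sums below agree.
t = 1.5
N = 30
space_sum = sum(math.exp(-math.pi * t * (m * m + n * n))
                for m in range(-N, N + 1) for n in range(-N, N + 1))
freq_sum = sum(math.exp(-math.pi * (j * j + k * k) / t)
               for j in range(-N, N + 1) for k in range(-N, N + 1)) / t
print(space_sum, freq_sum)  # equal
```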

More generally, a version of the statement holds if Λ is replaced by a more general lattice in a finite-dimensional vector space $V$. Choose a translation-invariant measure $m$ on $V$; it is unique up to a positive scalar. Again for a function $s \in L^1(V, m)$ we define the periodisation

$$\mathbb{P}s(x) = \sum_{\nu\in\Lambda} s(x+\nu)$$

as above.

The dual lattice $\Lambda'$ is defined as the subset of the dual vector space $V^*$ that evaluates to integers on the lattice $\Lambda$, or alternatively, by Pontryagin duality, as the characters of $V$ that contain $\Lambda$ in the kernel. Then the statement is that for all $\nu \in \Lambda'$ the Fourier transform $\widehat{\mathbb{P}s}$ of the periodisation as a function on $V/\Lambda$ and the Fourier transform $\hat{s}$ of $s$ on $V$ itself are related by the proper normalisation

$$\widehat{\mathbb{P}s}(\nu) = \frac{1}{m(V/\Lambda)}\,\hat{s}(\nu).$$

Note that the right-hand side is independent of the choice of invariant measure $m$. If $s$ and $\hat{s}$ are continuous and tend to zero sufficiently fast, then

$$\sum_{\nu\in\Lambda} s(x+\nu) = \frac{1}{m(V/\Lambda)}\sum_{\nu\in\Lambda'}\hat{s}(\nu)\, e^{i2\pi\nu(x)}.$$

In particular

$$\sum_{\nu\in\Lambda} s(\nu) = \frac{1}{m(V/\Lambda)}\sum_{\nu\in\Lambda'}\hat{s}(\nu).$$

This is applied in the theory of theta functions, and is a possible method in geometry of numbers. In fact in more recent work on counting lattice points in regions it is routinely used: summing the indicator function of a region $D$ over lattice points is exactly the question, so that the LHS of the summation formula is what is sought and the RHS something that can be attacked by mathematical analysis.

Selberg trace formula

Further generalization to locally compact abelian groups is required in number theory. In non-commutative harmonic analysis, the idea is taken even further in the Selberg trace formula, where it takes on a much deeper character.

A series of mathematicians applying harmonic analysis to number theory, most notably Martin Eichler, Atle Selberg, Robert Langlands, and James Arthur, have generalised the Poisson summation formula to the Fourier transform on non-commutative locally compact reductive algebraic groups $G$ with a discrete subgroup $\Gamma$ such that $G/\Gamma$ has finite volume. For example, $G$ can be the real points of $SL_n$ and $\Gamma$ can be the integral points of $SL_n$. In this setting, $G$ plays the role of the real number line in the classical version of Poisson summation, and $\Gamma$ plays the role of the integers that appear in the sum. The generalised version of Poisson summation is called the Selberg trace formula, and has played a role in proving many cases of Artin's conjecture and in Wiles's proof of Fermat's Last Theorem. The left-hand side of Eq.1 becomes a sum over irreducible unitary representations of $G$, and is called "the spectral side," while the right-hand side becomes a sum over conjugacy classes of $\Gamma$, and is called "the geometric side."

The Poisson summation formula is the archetype for vast developments in harmonic analysis and number theory.

Convolution theorem

The Poisson summation formula is a particular case of the convolution theorem on tempered distributions. If one of the two factors is the Dirac comb, one obtains periodic summation on one side and sampling on the other side of the equation. Applied to the Dirac delta function and its Fourier transform, the function that is constantly 1, this yields the Dirac comb identity.


      References

1. Darmon, Henri (Oct 2011). "Fourier transforms and Poisson summation... Theorem 5" (PDF). math.mcgill.ca. p. 2. Retrieved 2024-10-01.
2. Pinsky, M. (2002), Introduction to Fourier Analysis and Wavelets, Brooks Cole, ISBN 978-0-534-37660-4
3. Zygmund, Antoni (1968), Trigonometric Series (2nd ed.), Cambridge University Press (published 1988), ISBN 978-0-521-35885-9
4. Córdoba, A., "La formule sommatoire de Poisson", Comptes Rendus de l'Académie des Sciences, Série I, 306: 373–376
5. Hörmander, L. (1983), The Analysis of Linear Partial Differential Operators I, Grundl. Math. Wissenschaft., vol. 256, Springer, doi:10.1007/978-3-642-96750-4, ISBN 3-540-12104-8, MR 0717035
6. Oppenheim, Alan V.; Schafer, Ronald W.; Buck, John R. (1999). Discrete-Time Signal Processing (2nd ed.). Upper Saddle River, N.J.: Prentice Hall. ISBN 0-13-754920-2. "Samples of the Fourier transform of an aperiodic sequence x[n] can be thought of as DFS coefficients of a periodic sequence obtained through summing periodic replicas of x[n]."
7. Deitmar, Anton; Echterhoff, Siegfried (2014), Principles of Harmonic Analysis, Universitext (2nd ed.), doi:10.1007/978-3-319-05792-7, ISBN 978-3-319-05791-0
8. Grafakos, Loukas (2004), Classical and Modern Fourier Analysis, Pearson Education, Inc., pp. 253–257, ISBN 0-13-035399-X
9. Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton, N.J.: Princeton University Press, ISBN 978-0-691-08078-9
10. Katznelson, Yitzhak (1976), An Introduction to Harmonic Analysis (2nd corrected ed.), New York: Dover Publications, Inc., ISBN 0-486-63331-4
11. Kinayman, Noyan; Aksun, M. I. (1995). "Comparative study of acceleration techniques for integrals and series in electromagnetic problems". Radio Science. 30 (6): 1713–1722. Bibcode:1995RaSc...30.1713K. doi:10.1029/95RS02060. hdl:11693/48408.
12. Woodward, Philipp M. (1953). Probability and Information Theory, with Applications to Radar. Academic Press, p. 36.
13. Edwards, H. M. (1974). Riemann's Zeta Function. Academic Press, pp. 209–211. ISBN 0-486-41740-9.
14. Cohn, Henry; Elkies, Noam (2003), "New upper bounds on sphere packings I", Ann. of Math., 2, 157 (2): 689–714, arXiv:math/0110009, doi:10.4007/annals.2003.157.689, MR 1973059
