Almost periodic function

In mathematics, an almost periodic function is, loosely speaking, a function of a real number that is periodic to within any desired level of accuracy, given suitably long, well-distributed "almost-periods". The concept was first studied by Harald Bohr and later generalized by Vyacheslav Stepanov, Hermann Weyl and Abram Samoilovitch Besicovitch, amongst others. There is also a notion of almost periodic functions on locally compact abelian groups, first studied by John von Neumann.

Almost periodicity is a property of dynamical systems that appear to retrace their paths through phase space, but not exactly. An example would be a planetary system, with planets in orbits moving with periods that are not commensurable (i.e., with a period vector that is not proportional to a vector of integers). A theorem of Kronecker from diophantine approximation can be used to show that any particular configuration that occurs once will recur to within any specified accuracy: if we wait long enough we can observe the planets all return to within a second of arc of the positions they once were in.
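This recurrence is easy to see numerically. The following sketch (Python with NumPy; the two "planetary" periods 1 and √2 and the tolerance are invented for illustration, not taken from the text) looks for times at which two oscillators with incommensurable periods are simultaneously close to their starting configuration:

```python
import numpy as np

# Two "planets" with incommensurable periods (1 and sqrt(2), chosen arbitrarily).
# Kronecker's theorem guarantees both phases return arbitrarily close to their
# starting value for a relatively dense set of times.
periods = np.array([1.0, np.sqrt(2.0)])
tol = 1e-2                                   # how close to the initial configuration we demand

t = np.arange(0.0, 2000.0, 0.05)
phases = np.mod(t[:, None] / periods, 1.0)           # fractional phase of each planet
dist = np.min(np.stack([phases, 1.0 - phases]), 0)   # circular distance of each phase to 0
recur = t[np.all(dist < tol, axis=1)]

print(recur[:10])    # times at which both planets are simultaneously near their start
```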

Motivation

There are several inequivalent definitions of almost periodic functions. The first was given by Harald Bohr. His interest was initially in finite Dirichlet series. In fact by truncating the series for the Riemann zeta function ζ(s) to make it finite, one gets finite sums of terms of the type

$$e^{-(\sigma + it)\log n}$$
with s written as (σ + it), the sum of its real part σ and imaginary part it. Fixing σ, so restricting attention to a single vertical line in the complex plane, we can see this also as

$$n^{-\sigma} e^{-it\log n}.$$
Taking a finite sum of such terms avoids difficulties of analytic continuation to the region σ < 1. Here the 'frequencies' log n will not all be commensurable (they are as linearly independent over the rational numbers as the integers n are multiplicatively independent, which comes down to their prime factorizations).
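As a concrete illustration (not in the original text), the sketch below evaluates such a truncated sum in Python with NumPy; the truncation point N = 50 and the vertical line σ = 2 are arbitrary choices:

```python
import numpy as np

sigma = 2.0      # fixed real part: a single vertical line in the complex plane
N = 50           # truncation point of the zeta series (arbitrary)

def truncated_zeta(t):
    """Finite sum of terms n^{-sigma} e^{-i t log n}: a trigonometric
    polynomial in t whose frequencies log n are not commensurable."""
    n = np.arange(1, N + 1)
    return np.sum(n ** (-sigma) * np.exp(-1j * np.outer(t, np.log(n))), axis=1)

t = np.linspace(0.0, 100.0, 2001)
f = truncated_zeta(t)
print(f[:3])
```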

With this initial motivation to consider types of trigonometric polynomial with independent frequencies, mathematical analysis was applied to discuss the closure of this set of basic functions, in various norms.

The theory was developed using other norms by Besicovitch, Stepanov, Weyl, von Neumann, Turing, Bochner and others in the 1920s and 1930s.

Uniform or Bohr or Bochner almost periodic functions

Bohr (1925) [1] defined the uniformly almost-periodic functions as the closure of the trigonometric polynomials with respect to the uniform norm

$$\|f\|_\infty = \sup_x |f(x)|$$
(on bounded functions f on R). In other words, a function f is uniformly almost periodic if for every ε > 0 there is a finite linear combination of sine and cosine waves that is of distance less than ε from f with respect to the uniform norm. The sine and cosine frequencies can be arbitrary real numbers. Bohr proved that this definition was equivalent to the existence of a relatively dense set of ε almost-periods, for all ε > 0: that is, translations T(ε) = T of the variable t making

$$|f(t + T) - f(t)| < \varepsilon.$$
An alternative definition due to Bochner (1926) is equivalent to that of Bohr and is relatively simple to state:

A function f is almost periodic if every sequence {f(t + Tn)} of translations of f has a subsequence that converges uniformly for t in (−∞, +∞).

The Bohr almost periodic functions are essentially the same as continuous functions on the Bohr compactification of the reals.
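For example, f(t) = sin t + sin(√2 t) is uniformly almost periodic but not periodic. The following sketch (Python with NumPy; a finite-window check rather than a proof, with the tolerance ε, window, and scan range all illustrative assumptions) searches numerically for ε almost-periods in the sense of Bohr's criterion above; such translations keep recurring as the scan range grows, consistent with a relatively dense set:

```python
import numpy as np

def f(t):
    # A classic almost periodic but non-periodic function: two incommensurable frequencies.
    return np.sin(t) + np.sin(np.sqrt(2.0) * t)

eps = 0.1
t = np.linspace(0.0, 200.0, 4001)          # finite window standing in for the whole real line
candidates = np.arange(0.5, 300.0, 0.05)   # translation amounts T to test

almost_periods = [T for T in candidates
                  if np.max(np.abs(f(t + T) - f(t))) < eps]
print(len(almost_periods), almost_periods[:5])
```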

Stepanov almost periodic functions

The space Sp of Stepanov almost periodic functions (for p ≥ 1) was introduced by V.V. Stepanov (1925). [2] It contains the space of Bohr almost periodic functions. It is the closure of the trigonometric polynomials under the norm

$$\|f\|_{S,r,p} = \left( \sup_x \frac{1}{r} \int_x^{x+r} |f(s)|^p \, ds \right)^{1/p}$$
for any fixed positive value of r; for different values of r these norms give the same topology and so the same space of almost periodic functions (though the norm on this space depends on the choice of r).
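A discretised version of this norm can be sketched as follows (Python with NumPy; the helper stepanov_norm, the default r, p and grid, and the test function are all illustrative assumptions, not a standard API). It takes the supremum over windows of length r of the windowed p-th power average:

```python
import numpy as np

def stepanov_norm(f, p=2, r=1.0, t_min=-50.0, t_max=50.0, dt=0.01):
    """Numerical S^p norm on a finite grid:
    sup_x ( (1/r) * integral_x^{x+r} |f(s)|^p ds )^{1/p},
    with the integral approximated by a Riemann sum."""
    t = np.arange(t_min, t_max, dt)
    vals = np.abs(f(t)) ** p
    w = int(round(r / dt))                       # samples per window of length r
    csum = np.concatenate(([0.0], np.cumsum(vals) * dt))
    window_avgs = (csum[w:] - csum[:-w]) / r     # (1/r) * integral over each window
    return np.max(window_avgs) ** (1.0 / p)

f = lambda t: np.sin(t) + np.sin(np.sqrt(2.0) * t)
print(stepanov_norm(f))
```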

Weyl almost periodic functions

The space Wp of Weyl almost periodic functions (for p ≥ 1) was introduced by Weyl (1927). [3] It contains the space Sp of Stepanov almost periodic functions. It is the closure of the trigonometric polynomials under the seminorm

$$\|f\|_{W,p} = \lim_{r \to \infty} \|f\|_{S,r,p}$$
Warning: there are nonzero functions ƒ with ||ƒ||W,p = 0, such as any bounded function of compact support, so to get a Banach space one has to quotient out by these functions.
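To see why such a function is annihilated by the seminorm, here is a short estimate (a sketch, using the notation above): if |f| ≤ M and f vanishes outside an interval of length 2A, then

$$\|f\|_{S,r,p} = \left( \sup_x \frac{1}{r} \int_x^{x+r} |f(s)|^p \, ds \right)^{1/p} \le \left( \frac{2AM^p}{r} \right)^{1/p} \longrightarrow 0 \quad \text{as } r \to \infty,$$

so ||f||W,p = 0 even though f is not the zero function.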

Besicovitch almost periodic functions

The space Bp of Besicovitch almost periodic functions was introduced by Besicovitch (1926). [4] It is the closure of the trigonometric polynomials under the seminorm

$$\|f\|_{B,p} = \limsup_{x \to \infty} \left( \frac{1}{2x} \int_{-x}^{x} |f(s)|^p \, ds \right)^{1/p}$$
Warning: there are nonzero functions ƒ with ||ƒ||B,p = 0, such as any bounded function of compact support, so to get a Banach space one has to quotient out by these functions.

The Besicovitch almost periodic functions in B2 have an expansion (not necessarily convergent) as

$$f(t) \sim \sum_n a_n e^{i\lambda_n t}$$

with $\textstyle\sum_n a_n^2$ finite and $\lambda_n$ real. Conversely, every such series is the expansion of some Besicovitch almost periodic function (which is not unique).

The space Bp of Besicovitch almost periodic functions (for p ≥ 1) contains the space Wp of Weyl almost periodic functions. If one quotients out a subspace of "null" functions, it can be identified with the space of Lp functions on the Bohr compactification of the reals.
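The coefficients $a_n$ in the expansion above can be obtained as long-time mean values, $a(\lambda) = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} f(t) e^{-i\lambda t}\,dt$. A numerical sketch (Python with NumPy; the helper mean_coefficient, the test function and the averaging length T are illustrative assumptions) recovers them for a simple almost periodic function by averaging over a long interval:

```python
import numpy as np

def mean_coefficient(f, lam, T=2000.0, dt=0.01):
    """Approximate mean-value coefficient
    a(lam) ~ (1/2T) * integral_{-T}^{T} f(t) e^{-i lam t} dt."""
    t = np.arange(-T, T, dt)
    return np.sum(f(t) * np.exp(-1j * lam * t)) * dt / (2.0 * T)

# An almost periodic test function with frequencies 1 and sqrt(2).
f = lambda t: 2.0 * np.exp(1j * t) + 0.5 * np.exp(1j * np.sqrt(2.0) * t)

for lam in (1.0, np.sqrt(2.0), 2.0):
    print(lam, np.round(mean_coefficient(f, lam), 3))
# the averages approach 2, 0.5 and 0 respectively as T grows
```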

Almost periodic functions on a locally compact group

With these theoretical developments and the advent of abstract methods (the Peter–Weyl theorem, Pontryagin duality and Banach algebras) a general theory became possible. The general idea of almost-periodicity in relation to a locally compact abelian group G becomes that of a function F in L∞(G), such that its translates by G form a relatively compact set. Equivalently, the space of almost periodic functions is the norm closure of the finite linear combinations of characters of G. If G is compact the almost periodic functions are the same as the continuous functions.

The Bohr compactification of G is the compact abelian group of all possibly discontinuous characters of the dual group of G, and is a compact group containing G as a dense subgroup. The space of uniform almost periodic functions on G can be identified with the space of all continuous functions on the Bohr compactification of G. More generally the Bohr compactification can be defined for any topological group G, and the spaces of continuous or Lp functions on the Bohr compactification can be considered as almost periodic functions on G. For locally compact connected groups G the map from G to its Bohr compactification is injective if and only if G is a central extension of a compact group, or equivalently the product of a compact group and a finite-dimensional vector space.

A function f on a locally compact group G is called weakly almost periodic if its orbit under translation by G is weakly relatively compact in L∞(G).

Given a topological dynamical system consisting of a compact topological space X with an action of the locally compact group G, a continuous function on X is (weakly) almost periodic if its orbit is (weakly) precompact in the Banach space C(X) of continuous functions on X.

Quasiperiodic signals in audio and music synthesis

In speech processing, audio signal processing, and music synthesis, a quasiperiodic signal, sometimes called a quasiharmonic signal, is a waveform that is virtually periodic microscopically, but not necessarily periodic macroscopically. This does not give a quasiperiodic function in the sense of the Wikipedia article of that name, but something more akin to an almost periodic function: a nearly periodic function where any one period is virtually identical to its adjacent periods but not necessarily similar to periods much farther away in time. This is the case for musical tones (after the initial attack transient) where all partials or overtones are harmonic (that is, all overtones are at frequencies that are an integer multiple of a fundamental frequency of the tone).

When a signal $x(t)$ is fully periodic with period $P$, then the signal exactly satisfies

$$x(t) = x(t + P) \qquad \text{for all } t \in \mathbb{R}$$

or

$$\big| x(t) - x(t + P) \big| = 0 \qquad \text{for all } t \in \mathbb{R}.$$

The Fourier series representation would be

$$x(t) = a_0 + \sum_{n=1}^{\infty} \big[ a_n \cos(2\pi n f_0 t) + b_n \sin(2\pi n f_0 t) \big]$$

or

$$x(t) = a_0 + \sum_{n=1}^{\infty} r_n \cos\!\big(2\pi n f_0 t + \varphi_n\big)$$
where $f_0 = \tfrac{1}{P}$ is the fundamental frequency and the Fourier coefficients are

$$a_0 = \frac{1}{P} \int_{t_0}^{t_0 + P} x(t)\,dt$$
$$a_n = r_n \cos(\varphi_n) = \frac{2}{P} \int_{t_0}^{t_0 + P} x(t) \cos(2\pi n f_0 t)\,dt \qquad n \ge 1$$
$$b_n = -r_n \sin(\varphi_n) = \frac{2}{P} \int_{t_0}^{t_0 + P} x(t) \sin(2\pi n f_0 t)\,dt$$

where $t_0$ can be any time: $t_0 \in \mathbb{R}$.

The fundamental frequency $f_0$, and Fourier coefficients $a_n$, $b_n$, $r_n$, or $\varphi_n$, are constants, i.e. they are not functions of time. The harmonic frequencies are exact integer multiples of the fundamental frequency.
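A direct numerical check of the periodic case (Python with NumPy; the band-limited test signal, the sample grid and the number of harmonics kept are invented for illustration) computes $a_0$, $a_n$, $b_n$ by integration over one period and reconstructs the signal from them:

```python
import numpy as np

P = 2.0                                  # period (illustrative)
f0 = 1.0 / P                             # fundamental frequency
t = np.linspace(0.0, P, 4000, endpoint=False)
dt = t[1] - t[0]

# A band-limited periodic test signal: harmonics 1 and 3 only.
x = np.cos(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 3 * f0 * t)

N = 5                                    # number of harmonics kept
a0 = np.sum(x) * dt / P
a = [2.0 / P * np.sum(x * np.cos(2 * np.pi * n * f0 * t)) * dt for n in range(1, N + 1)]
b = [2.0 / P * np.sum(x * np.sin(2 * np.pi * n * f0 * t)) * dt for n in range(1, N + 1)]

# Reconstruct from the truncated series x(t) = a0 + sum [a_n cos + b_n sin].
x_hat = a0 + sum(a[n - 1] * np.cos(2 * np.pi * n * f0 * t) +
                 b[n - 1] * np.sin(2 * np.pi * n * f0 * t) for n in range(1, N + 1))
print(np.max(np.abs(x - x_hat)))         # essentially zero for this band-limited signal
```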

When $x(t)$ is quasiperiodic then

$$x(t) \approx x\big(t + P(t)\big)$$

or

$$\big| x(t) - x\big(t + P(t)\big) \big| < \varepsilon$$

where

$$\varepsilon \ll \lVert x \rVert,$$

that is, the allowed mismatch is small compared with the overall size of the signal, and $P(t)$ is a possibly slowly varying period.
Now the Fourier series representation would be

$$x(t) = a_0(t) + \sum_{n=1}^{\infty} \left[ a_n(t) \cos\!\left(2\pi n \int_0^t f_0(\tau)\,d\tau\right) + b_n(t) \sin\!\left(2\pi n \int_0^t f_0(\tau)\,d\tau\right) \right]$$

or

$$x(t) = a_0(t) + \sum_{n=1}^{\infty} r_n(t) \cos\!\left(2\pi n \int_0^t f_0(\tau)\,d\tau + \varphi_n(t)\right)$$

or

$$x(t) = a_0(t) + \sum_{n=1}^{\infty} r_n(t) \cos\!\left(2\pi \int_0^t f_n(\tau)\,d\tau + \varphi_n(0)\right)$$

where $f_0(t)$ is the possibly time-varying fundamental frequency and the time-varying Fourier coefficients are $a_0(t)$, $a_n(t)$, $b_n(t)$, $r_n(t)$, and $\varphi_n(t)$,
and the instantaneous frequency for each partial is

$$f_n(t) = n f_0(t) + \frac{1}{2\pi} \varphi_n'(t).$$
In this quasiperiodic case the fundamental frequency $f_0(t)$, the harmonic frequencies $n f_0(t)$, and the Fourier coefficients $a_n(t)$, $b_n(t)$, $r_n(t)$, or $\varphi_n(t)$ are not necessarily constant: they are functions of time, albeit slowly varying functions of time. Stated differently, these functions of time are bandlimited to much less than the fundamental frequency for $x(t)$ to be considered quasiperiodic.

The partial frequencies $f_n(t)$ are very nearly harmonic but not necessarily exactly so. The time derivative of $\varphi_n(t)$, that is $\varphi_n'(t)$, has the effect of detuning the partials from their exact integer harmonic value $n f_0(t)$. A rapidly changing $\varphi_n(t)$ means that the instantaneous frequency for that partial is severely detuned from the integer harmonic value, which would mean that $x(t)$ is not quasiperiodic.
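These relations translate directly into additive synthesis: each partial is an oscillator whose phase is the running integral of its instantaneous frequency. A minimal sketch (Python with NumPy; the sample rate, vibrato, envelope and partial amplitudes are invented, and the phases $\varphi_n$ are held constant so that $f_n(t) = n f_0(t)$ exactly) synthesises such a quasiperiodic tone:

```python
import numpy as np

sr = 16000                                    # sample rate in Hz (illustrative)
t = np.arange(0.0, 2.0, 1.0 / sr)             # two seconds of signal

# Slowly varying fundamental: 220 Hz with a gentle 5 Hz, 1% vibrato.
f0 = 220.0 * (1.0 + 0.01 * np.sin(2 * np.pi * 5.0 * t))

n_partials = 6
# Slowly varying partial amplitudes: a decaying envelope, weaker upper partials.
amps = [np.exp(-t) / n for n in range(1, n_partials + 1)]

# Phase of the fundamental = 2*pi * running integral of f0(t);
# partial n then has instantaneous frequency n * f0(t).
phase0 = 2 * np.pi * np.cumsum(f0) / sr
x = sum(amps[n - 1] * np.cos(n * phase0) for n in range(1, n_partials + 1))

print(x.shape, x[:4])
```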


References

  1. H. Bohr, "Zur Theorie der fastperiodischen Funktionen I" Acta Math., 45 (1925) pp. 29–127
  2. W. Stepanoff (= V.V. Stepanov), "Sur quelques généralisations des fonctions presque périodiques" C. R. Acad. Sci. Paris, 181 (1925) pp. 90–92; W. Stepanoff (= V.V. Stepanov), "Ueber einige Verallgemeinerungen der fastperiodischen Funktionen" Math. Ann., 45 (1925) pp. 473–498
  3. H. Weyl, "Integralgleichungen und fastperiodische Funktionen" Math. Ann., 97 (1927) pp. 338–356
  4. A.S. Besicovitch, "On generalized almost periodic functions" Proc. London Math. Soc. (2), 25 (1926) pp. 495–512
