Generalized Fourier series

In mathematics, a generalized Fourier series expands a square-integrable function defined on an interval of the real line. The constituent functions in the series expansion form an orthonormal basis of an inner product space. While a Fourier series expansion consists only of trigonometric functions, a generalized Fourier series is a decomposition involving any set of functions that satisfy a Sturm–Liouville eigenvalue problem. These expansions find common use in interpolation theory. [1] A classical Fourier series is expressed as a sum of sinusoids that can be stated in various forms; in essence, a pair of functions such as cos(mt) and sin(nt) is considered, where t is a variable (usually time) and m and n are real multipliers of t, reflecting the length of the interval.

Definition

Consider a set $\Phi = \{\varphi_n : [a, b] \to \mathbb{F}\}_{n=0}^{\infty}$ of square-integrable functions with values in $\mathbb{F} = \mathbb{C}$ or $\mathbb{F} = \mathbb{R}$, which are pairwise orthogonal under the inner product

$$\langle f, g \rangle_w = \int_a^b f(x)\,\overline{g(x)}\,w(x)\,dx,$$

where $w(x)$ is a weight function and $\overline{\,\cdot\,}$ represents complex conjugation, i.e., $\langle \varphi_m, \varphi_n \rangle_w = 0$ for $m \neq n$.

The generalized Fourier series of a square-integrable function $f : [a, b] \to \mathbb{F}$, with respect to Φ, is then

$$f(x) \sim \sum_{n=0}^{\infty} c_n \varphi_n(x),$$

where the coefficients are given by

$$c_n = \frac{\langle f, \varphi_n \rangle_w}{\|\varphi_n\|_w^2}.$$

If Φ is a complete set, i.e., an orthogonal basis of the space of all square-integrable functions on [a, b], as opposed to a smaller orthogonal set, the relation $\sim$ becomes equality in the $L^2$ sense, more precisely modulo $\|\cdot\|_w$ (not necessarily pointwise, nor almost everywhere).
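As a rough numerical sketch (assuming NumPy and SciPy are available; the helper name generalized_fourier_coefficients is illustrative, not a standard API), the coefficients above can be approximated by quadrature for any finite orthogonal family and weight function:

    import numpy as np
    from scipy.integrate import quad

    def generalized_fourier_coefficients(f, basis, weight, a, b):
        """Approximate c_n = <f, phi_n>_w / ||phi_n||_w^2 for each phi_n in `basis`."""
        coeffs = []
        for phi in basis:
            inner, _ = quad(lambda x: f(x) * np.conj(phi(x)) * weight(x), a, b)
            norm_sq, _ = quad(lambda x: abs(phi(x)) ** 2 * weight(x), a, b)
            coeffs.append(inner / norm_sq)
        return coeffs

    # Example: expand f(x) = x on [-pi, pi] against the first three sine functions.
    basis = [lambda x, n=n: np.sin(n * x) for n in (1, 2, 3)]
    print(generalized_fourier_coefficients(lambda x: x, basis, lambda x: 1.0, -np.pi, np.pi))
    # approximately [2.0, -1.0, 0.6667], i.e. c_n = 2(-1)^(n+1)/n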

Examples

1. Trigonometric system.

Definition: a function $f$ defined on the entire real line is called periodic if there is a number $T > 0$ such that $f(x + T) = f(x)$ for all $x$. The number $T$ is called the period of the function.

Note that if the number $T$ is a period of the function, then the numbers $2T, 3T, \ldots$ are also periods of this function. Usually, the period of a function is understood to mean its smallest positive period (if it exists).

The sequence of functions

$$1,\ \cos x,\ \sin x,\ \cos 2x,\ \sin 2x,\ \ldots,\ \cos nx,\ \sin nx,\ \ldots$$

is called the trigonometric system. Any linear combination of functions of the trigonometric system, including an infinite combination (that is, a series, if it converges), is a periodic function with period 2π.

In the following we will consider the trigonometric system, as a rule, on the segment [−π, π], and sometimes on the segment [0, 2π]. On any segment of length 2π (including the segments [−π, π] and [0, 2π]) the trigonometric system is an orthogonal system. This means that for any two distinct functions of the trigonometric system, the integral of their product over a segment of length 2π is equal to zero. This integral can be treated as a scalar product in the space of functions that are integrable on the given segment of length 2π.
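Concretely, the orthogonality relations on [−π, π] read: for all integers $m, n \geq 1$ with $m \neq n$,

$$\int_{-\pi}^{\pi} \cos mx \,\cos nx \,dx = 0, \qquad \int_{-\pi}^{\pi} \sin mx \,\sin nx \,dx = 0,$$

while for all integers $m \geq 0$ and $n \geq 1$,

$$\int_{-\pi}^{\pi} \sin nx \,\cos mx \,dx = 0, \qquad \int_{-\pi}^{\pi} \cos nx \,dx = \int_{-\pi}^{\pi} \sin nx \,dx = 0.$$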

2. Fourier coefficients.

Let the function $f$ be defined on the segment [−π, π]. Under suitable sufficient conditions, the function $f$ can be represented on this segment as a linear combination of functions of the trigonometric system; this representation is referred to as the expansion of the function $f$ into a trigonometric Fourier series (converging to $f$ at all points of the segment [−π, π] except, perhaps, a finite number of points).
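Written out, with the standard normalization on [−π, π], the expansion and its coefficients take the form

$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\bigl(a_n \cos nx + b_n \sin nx\bigr),$$

with

$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx \,dx \quad (n \geq 0), \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx \,dx \quad (n \geq 1).$$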

3. Fourier–Legendre series.

The Legendre polynomials $P_n(x)$ are solutions to the Sturm–Liouville problem

$$\frac{d}{dx}\!\left[(1 - x^2)\,\frac{dP_n(x)}{dx}\right] + n(n + 1)\,P_n(x) = 0.$$

As a consequence of Sturm–Liouville theory, these polynomials are orthogonal eigenfunctions with respect to the inner product above with unit weight on [−1, 1]. We can form a generalized Fourier series (known as a Fourier–Legendre series) involving the Legendre polynomials, with

$$f(x) \sim \sum_{n=0}^{\infty} c_n P_n(x), \qquad c_n = \frac{\langle f, P_n \rangle}{\|P_n\|^2}.$$
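For use in the example below, recall that the first three Legendre polynomials and the squared norms on [−1, 1] are

$$P_0(x) = 1, \quad P_1(x) = x, \quad P_2(x) = \tfrac{1}{2}\bigl(3x^2 - 1\bigr), \qquad \|P_n\|^2 = \int_{-1}^{1} P_n(x)^2 \,dx = \frac{2}{2n + 1},$$

so that $c_n = \frac{2n + 1}{2}\int_{-1}^{1} f(x)\,P_n(x)\,dx$.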

As an example, we may calculate the Fourier–Legendre series for $f(x) = \cos x$ over $[-1, 1]$. We have that

$$c_0 = \frac{1}{2}\int_{-1}^{1} \cos x \,dx = \sin 1,$$

$$c_1 = \frac{3}{2}\int_{-1}^{1} x \cos x \,dx = 0,$$

$$c_2 = \frac{5}{2}\int_{-1}^{1} \frac{3x^2 - 1}{2}\,\cos x \,dx = 15\cos 1 - 10\sin 1,$$

and a series involving these terms would be

$$c_0 P_0(x) + c_1 P_1(x) + c_2 P_2(x) = \sin 1 + \bigl(15\cos 1 - 10\sin 1\bigr)\frac{3x^2 - 1}{2} \approx 0.996 - 0.465\,x^2,$$

which differs from $\cos x$ by approximately 0.003 at $x = 0$. It may be advantageous to use such Fourier–Legendre series, since the eigenfunctions are all polynomials and hence the integrals, and thus the coefficients, are easier to calculate.
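To cross-check these numbers, a short NumPy sketch (illustrative only, using Gauss–Legendre quadrature) reproduces the coefficients and the error at $x = 0$:

    import numpy as np
    from numpy.polynomial import legendre as leg

    # Gauss-Legendre quadrature is effectively exact for this smooth integrand.
    nodes, weights = leg.leggauss(20)

    coeffs = []
    for n in range(3):
        Pn = leg.Legendre.basis(n)(nodes)      # values of P_n at the quadrature nodes
        cn = (2 * n + 1) / 2 * np.sum(weights * np.cos(nodes) * Pn)
        coeffs.append(cn)

    print(coeffs)                              # approximately [0.8415, 0.0, -0.3102]
    series = leg.Legendre(coeffs)              # c0*P0(x) + c1*P1(x) + c2*P2(x)
    print(np.cos(0.0) - series(0.0))           # approximately 0.0034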

Coefficient theorems

Some theorems on the coefficients include:

Bessel's inequality:

$$\sum_{n=0}^{\infty} |c_n|^2 \,\|\varphi_n\|_w^2 \leq \int_a^b |f(x)|^2\,w(x)\,dx.$$

Parseval's theorem: If Φ is a complete set, then

$$\sum_{n=0}^{\infty} |c_n|^2 \,\|\varphi_n\|_w^2 = \int_a^b |f(x)|^2\,w(x)\,dx.$$

If the functions $\varphi_n$ are orthonormal, the factors $\|\varphi_n\|_w^2$ equal 1 and both statements take their more familiar forms.
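For instance, applied to the trigonometric system on [−π, π] with the coefficients $a_n$, $b_n$ above, Parseval's theorem takes the familiar form

$$\frac{a_0^2}{2} + \sum_{n=1}^{\infty}\bigl(a_n^2 + b_n^2\bigr) = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)^2\,dx.$$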


References

1. Howell, Kenneth B. (2001-05-18). Principles of Fourier Analysis. Boca Raton: CRC Press. doi:10.1201/9781420036909. ISBN 978-0-429-12941-4.