A generalized Fourier series is the expansion of a square-integrable function into a sum of square-integrable orthogonal basis functions. The standard Fourier series uses an orthonormal basis of trigonometric functions, and the series expansion is applied to periodic functions. In contrast, a generalized Fourier series uses any set of orthogonal basis functions and can apply to any square-integrable function.[1][2]
Consider a set $\Phi = \{\varphi_n : [a,b] \to \mathbb{C}\}_{n=0}^{\infty}$ of square-integrable complex-valued functions defined on the closed interval $[a,b]$ that are pairwise orthogonal under the weighted inner product
$$\langle f, g\rangle_w = \int_a^b f(x)\,\overline{g(x)}\,w(x)\,dx,$$
where $w(x)$ is a weight function and $\overline{g}$ is the complex conjugate of $g$. Then the generalized Fourier series of a function $f$ is
$$f(x) \sim \sum_{n=0}^{\infty} c_n \varphi_n(x),$$
where the coefficients are given by
$$c_n = \frac{\langle f, \varphi_n\rangle_w}{\|\varphi_n\|_w^2}.$$
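The coefficient formula is forced by orthogonality: taking the weighted inner product of the series with a fixed basis function $\varphi_m$ annihilates every term except the $m$-th. A sketch of this standard one-line derivation (assuming the sum and the inner product may be interchanged, which holds when the series converges in the $L^2_w$ norm):
$$\langle f, \varphi_m\rangle_w = \Bigl\langle \sum_{n=0}^{\infty} c_n \varphi_n,\; \varphi_m \Bigr\rangle_w = \sum_{n=0}^{\infty} c_n \langle \varphi_n, \varphi_m\rangle_w = c_m\,\|\varphi_m\|_w^2 \quad\Longrightarrow\quad c_m = \frac{\langle f, \varphi_m\rangle_w}{\|\varphi_m\|_w^2}.$$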
Given the space $L^2(a,b)$ of square-integrable functions defined on an interval $[a,b]$, one can find orthogonal bases by considering a class of boundary value problems on the interval called regular Sturm–Liouville problems. These are defined as follows:
$$\bigl(p(x)\,y'\bigr)' + q(x)\,y + \lambda\, w(x)\, y = 0, \qquad a < x < b,$$
together with self-adjoint boundary conditions at $a$ and $b$, where $p$ and $q$ are real and continuous on $[a,b]$ and $p > 0$ on $[a,b]$, and $w$ is a positive continuous function on $[a,b]$.
Given a regular Sturm–Liouville problem as defined above, the set $\{\varphi_n\}$ of eigenfunctions corresponding to the distinct eigenvalue solutions $\lambda_n$ of the problem forms an orthogonal basis for $L^2_w(a,b)$ with respect to the weighted inner product $\langle\cdot,\cdot\rangle_w$.[3] We also have that, for a function $f$ that satisfies the boundary conditions of this Sturm–Liouville problem, the series $\sum_n c_n \varphi_n$ converges uniformly to $f$.[4]
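A standard concrete instance (taking $p = w = 1$ and $q = 0$ on $[0,\pi]$ with Dirichlet boundary conditions) recovers the sine basis of the classical Fourier sine series:
$$y'' + \lambda y = 0, \qquad y(0) = y(\pi) = 0 \quad\Longrightarrow\quad \lambda_n = n^2, \quad \varphi_n(x) = \sin nx, \qquad n = 1, 2, 3, \ldots$$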
A function $f$ defined on the entire real line is called periodic with period $T$ if a number $T > 0$ exists such that, for any real number $x$, the equality $f(x + T) = f(x)$ holds.
If a function $f$ is periodic with period $T$, then it is also periodic with periods $2T$, $3T$, and so on. Usually, the period of a function is understood as the smallest such number $T$. However, for some functions, arbitrarily small periods exist (for example, a constant function is periodic with every period $T > 0$).
The sequence of functions $1,\ \cos x,\ \sin x,\ \cos 2x,\ \sin 2x,\ \ldots,\ \cos nx,\ \sin nx,\ \ldots$ is known as the trigonometric system. Any linear combination of functions of the trigonometric system, including an infinite combination (that is, a converging infinite series), is a periodic function with a period of 2π.
On any segment of length 2π (such as the segments [−π, π] and [0, 2π]) the trigonometric system is an orthogonal system. This means that for any two distinct functions of the trigonometric system, the integral of their product over a segment of length 2π is equal to zero. This integral can be treated as a scalar product in the space of functions that are integrable on a given segment of length 2π.
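Explicitly, the orthogonality relations on $[-\pi, \pi]$ read
$$\int_{-\pi}^{\pi} \cos mx\,\sin nx\,dx = 0 \quad \text{for all } m, n, \qquad \int_{-\pi}^{\pi} \cos mx\,\cos nx\,dx = \int_{-\pi}^{\pi} \sin mx\,\sin nx\,dx = 0 \quad \text{for } m \neq n.$$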
Let the function $f(x)$ be defined on the segment [−π, π]. Given appropriate smoothness and differentiability conditions, $f$ may be represented on this segment as a linear combination of functions of the trigonometric system, also referred to as the expansion of the function $f$ into a trigonometric Fourier series.
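With unit weight on $[-\pi, \pi]$, the general coefficient formula above reduces to the familiar trigonometric form:
$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\bigl(a_n \cos nx + b_n \sin nx\bigr), \qquad a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx, \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx.$$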
The Legendre polynomials $P_n(x)$ are solutions to the Sturm–Liouville eigenvalue problem
$$\bigl((1 - x^2)\,P_n'(x)\bigr)' = -n(n+1)\,P_n(x).$$
As a consequence of Sturm–Liouville theory, these polynomials are orthogonal eigenfunctions on $[-1, 1]$ with respect to the inner product with unit weight. A function $f$ on $[-1, 1]$ can be written as a generalized Fourier series (known in this case as a Fourier–Legendre series) involving the Legendre polynomials, so that
$$f(x) \sim \sum_{n=0}^{\infty} c_n P_n(x), \qquad c_n = \frac{\langle f, P_n\rangle}{\|P_n\|^2} = \frac{2n+1}{2}\int_{-1}^{1} f(x)\,P_n(x)\,dx,$$
using $\|P_n\|^2 = \tfrac{2}{2n+1}$.
As an example, the Fourier–Legendre series may be calculated for $f(x) = \cos x$ over $[-1, 1]$. Then
$$c_0 = \frac{1}{2}\int_{-1}^{1}\cos x\,dx = \sin 1 \approx 0.841, \qquad c_1 = \frac{3}{2}\int_{-1}^{1} x\cos x\,dx = 0, \qquad c_2 = \frac{5}{2}\int_{-1}^{1} \frac{3x^2 - 1}{2}\,\cos x\,dx \approx -0.310,$$
and a truncated series involving only these terms would be
$$c_2 P_2(x) + c_1 P_1(x) + c_0 P_0(x) \approx -0.465\,x^2 + 0.997,$$
which differs from $\cos x$ by approximately 0.003 near $x = 0$. In computational applications it may be advantageous to use such Fourier–Legendre series rather than Fourier series since the basis functions for the series expansion are all polynomials and hence the integrals, and thus the coefficients, may be easier to calculate.
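The coefficients and the error of the truncated series can be checked numerically. Below is a minimal sketch using NumPy and SciPy; the helper name `fourier_legendre_coeffs` is illustrative rather than a library routine, and it assumes the unit weight and the normalization $\|P_n\|^2 = 2/(2n+1)$ used above.

```python
import numpy as np
from numpy.polynomial import legendre as L
from scipy.integrate import quad

def fourier_legendre_coeffs(f, N):
    """First N Fourier-Legendre coefficients c_n = <f, P_n> / ||P_n||^2 on [-1, 1]."""
    coeffs = []
    for n in range(N):
        Pn = L.Legendre.basis(n)                         # Legendre polynomial P_n
        inner, _ = quad(lambda x: f(x) * Pn(x), -1, 1)   # <f, P_n> with unit weight
        coeffs.append(inner * (2 * n + 1) / 2)           # divide by ||P_n||^2 = 2/(2n+1)
    return np.array(coeffs)

c = fourier_legendre_coeffs(np.cos, 3)
print(c)                        # approximately [0.841, 0.000, -0.310]

x = np.linspace(-1, 1, 1001)
err = L.legval(x, c) - np.cos(x)   # truncated series minus cos x
print(abs(err[500]))            # error at x = 0, about 3e-3
print(np.max(np.abs(err)))      # worst-case error on [-1, 1], about 9e-3
```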
Some theorems on the series coefficients include:
Bessel's inequality is a statement about the coefficients of an element $x$ in a Hilbert space with respect to an orthonormal sequence $(e_k)$. The inequality was derived by F.W. Bessel in 1828:[5]
$$\sum_{k=1}^{\infty} \bigl|\langle x, e_k\rangle\bigr|^2 \le \|x\|^2.$$
Parseval's theorem usually refers to the result that the Fourier transform is unitary; loosely, that the sum (or integral) of the square of a function is equal to the sum (or integral) of the square of its transform.[6]
If $\Phi$ is a complete basis, then equality holds (Parseval's identity): in the weighted notation above,
$$\sum_{n=0}^{\infty} |c_n|^2\,\|\varphi_n\|_w^2 = \|f\|_w^2 = \int_a^b |f(x)|^2\, w(x)\,dx.$$
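As a numerical illustration (a sketch under the same assumptions as the Fourier–Legendre example above, with unit weight on $[-1, 1]$ and $f = \cos$), the partial sums of $|c_n|^2\|P_n\|^2$ stay below $\|f\|^2$ (Bessel) and approach it as more terms are included (Parseval):

```python
import numpy as np
from numpy.polynomial import legendre as L
from scipy.integrate import quad

N = 10
# Fourier-Legendre coefficients of cos on [-1, 1]: c_n = <cos, P_n> / ||P_n||^2
c = np.array([quad(lambda x: np.cos(x) * L.Legendre.basis(n)(x), -1, 1)[0] * (2 * n + 1) / 2
              for n in range(N)])

norms_sq = 2.0 / (2.0 * np.arange(N) + 1.0)       # ||P_n||^2 = 2 / (2n + 1)
partial = np.cumsum(c ** 2 * norms_sq)            # partial sums of |c_n|^2 ||P_n||^2
total = quad(lambda x: np.cos(x) ** 2, -1, 1)[0]  # ||cos||^2 on [-1, 1], about 1.4546

print(np.all(partial <= total + 1e-12))  # Bessel: every partial sum is bounded by ||f||^2
print(total - partial[-1])               # Parseval: the gap shrinks toward 0 as N grows
```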
In mathematics and physics, Laplace's equation is a second-order partial differential equation named after Pierre-Simon Laplace, who first studied its properties. This is often written as $\nabla^2 f = 0$ or $\Delta f = 0$, where $\Delta = \nabla \cdot \nabla$ is the Laplace operator, $\nabla\cdot$ is the divergence operator, $\nabla$ is the gradient operator, and $f$ is a twice-differentiable real-valued function. The Laplace operator therefore maps a scalar function to another scalar function.
In physics, engineering and mathematics, the Fourier transform (FT) is an integral transform that takes a function as input and outputs another function that describes the extent to which various frequencies are present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made, the output of the operation is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches.
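In one common convention (normalizations vary), the Fourier transform of an integrable function $f$ is
$$\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx.$$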
A Fourier series is an expansion of a periodic function into a sum of trigonometric functions. The Fourier series is an example of a trigonometric series, but not all trigonometric series are Fourier series. By expressing a function as a sum of sines and cosines, many problems involving the function become easier to analyze because trigonometric functions are well understood. For example, Fourier series were first used by Joseph Fourier to find solutions to the heat equation. This application is possible because the derivatives of trigonometric functions fall into simple patterns. Fourier series cannot be used to approximate arbitrary functions, because most functions have infinitely many terms in their Fourier series, and the series do not always converge. Well-behaved functions, for example smooth functions, have Fourier series that converge to the original function. The coefficients of the Fourier series are determined by integrals of the function multiplied by trigonometric functions, as in the formulas given above.
In mathematics, Legendre polynomials, named after Adrien-Marie Legendre (1782), are a system of complete and orthogonal polynomials with a vast number of mathematical properties and numerous applications. They can be defined in many ways, and the various definitions highlight different aspects as well as suggest generalizations and connections to different mathematical structures and physical and numerical applications.
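One compact definition is Rodrigues' formula, from which the first few polynomials used in the example above follow:
$$P_n(x) = \frac{1}{2^n\, n!}\,\frac{d^n}{dx^n}\bigl(x^2 - 1\bigr)^n, \qquad P_0(x) = 1, \quad P_1(x) = x, \quad P_2(x) = \tfrac{1}{2}\bigl(3x^2 - 1\bigr).$$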
In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation; it is indeed derived using the product rule.
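In symbols, for continuously differentiable $u$ and $v$ on $[a, b]$:
$$\int_a^b u(x)\,v'(x)\,dx = \bigl[u(x)\,v(x)\bigr]_a^b - \int_a^b u'(x)\,v(x)\,dx.$$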
In calculus, integration by substitution, also known as u-substitution, reverse chain rule or change of variables, is a method for evaluating integrals and antiderivatives. It is the counterpart to the chain rule for differentiation, and can loosely be thought of as using the chain rule "backwards."
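In symbols, for a continuously differentiable substitution $u = g(x)$:
$$\int_a^b f\bigl(g(x)\bigr)\,g'(x)\,dx = \int_{g(a)}^{g(b)} f(u)\,du.$$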
In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal unit vectors. A unit vector means that the vector has a length of 1, which is also known as normalized. Orthogonal means that the vectors are all perpendicular to each other. A set of vectors form an orthonormal set if all vectors in the set are mutually orthogonal and all of unit length. An orthonormal set which forms a basis is called an orthonormal basis.
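Compactly, a set of vectors $\{u_i\}$ is orthonormal when
$$\langle u_i, u_j\rangle = \delta_{ij} = \begin{cases} 1 & i = j, \\ 0 & i \neq j. \end{cases}$$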
In mathematics and physical science, spherical harmonics are special functions defined on the surface of a sphere. They are often employed in solving partial differential equations in many scientific fields. The table of spherical harmonics contains a list of common spherical harmonics.
In mathematics, the Mellin transform is an integral transform that may be regarded as the multiplicative version of the two-sided Laplace transform. This integral transform is closely connected to the theory of Dirichlet series, and is often used in number theory, mathematical statistics, and the theory of asymptotic expansions; it is closely related to the Laplace transform and the Fourier transform, and the theory of the gamma function and allied special functions.
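Explicitly, the Mellin transform of a function $f$ is
$$\{\mathcal{M}f\}(s) = \int_0^{\infty} x^{s-1} f(x)\,dx.$$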
In mathematics, an almost periodic function is, loosely speaking, a function of a real number that is periodic to within any desired level of accuracy, given suitably long, well-distributed "almost-periods". The concept was first studied by Harald Bohr and later generalized by Vyacheslav Stepanov, Hermann Weyl and Abram Samoilovitch Besicovitch, amongst others. There is also a notion of almost periodic functions on locally compact abelian groups, first studied by John von Neumann.
In mathematics, orthogonal functions belong to a function space that is a vector space equipped with a bilinear form. When the function space has an interval as the domain, the bilinear form may be the integral of the product of functions over the interval:
$$\langle f, g\rangle = \int \overline{f(x)}\,g(x)\,dx.$$
In mathematics and its applications, a Sturm–Liouville problem is a second-order linear ordinary differential equation of the form
$$\frac{d}{dx}\!\left[p(x)\,\frac{dy}{dx}\right] + q(x)\,y = -\lambda\, w(x)\, y,$$
for given functions $p(x)$, $q(x)$ and $w(x)$, together with some boundary conditions at extreme values of $x$. The goals of a given Sturm–Liouville problem are to find the values of $\lambda$ for which a non-trivial solution exists (the eigenvalues) and, for each eigenvalue, the corresponding solution $y(x)$ (the eigenfunction).
In mathematics, the associated Legendre polynomials are the canonical solutions of the general Legendre equation
$$\bigl(1 - x^2\bigr)\,\frac{d^2 y}{dx^2} - 2x\,\frac{dy}{dx} + \left[\ell(\ell+1) - \frac{m^2}{1 - x^2}\right] y = 0.$$
In mathematics, the Legendre chi function is a special function whose Taylor series is also a Dirichlet series, given by
$$\chi_\nu(z) = \sum_{k=0}^{\infty} \frac{z^{2k+1}}{(2k+1)^\nu}.$$
In mathematics, a trigonometric series is an infinite series of the form
$$\frac{A_0}{2} + \sum_{n=1}^{\infty} \bigl(A_n \cos nx + B_n \sin nx\bigr),$$
where $x$ is the variable and $\{A_n\}$ and $\{B_n\}$ are coefficients.
In mathematics, the spectral theory of ordinary differential equations is the part of spectral theory concerned with the determination of the spectrum and eigenfunction expansion associated with a linear ordinary differential equation. In his dissertation, Hermann Weyl generalized the classical Sturm–Liouville theory on a finite closed interval to second order differential operators with singularities at the endpoints of the interval, possibly semi-infinite or infinite. Unlike the classical case, the spectrum may no longer consist of just a countable set of eigenvalues, but may also contain a continuous part. In this case the eigenfunction expansion involves an integral over the continuous part with respect to a spectral measure, given by the Titchmarsh–Kodaira formula. The theory was put in its final simplified form for singular differential equations of even degree by Kodaira and others, using von Neumann's spectral theorem. It has had important applications in quantum mechanics, operator theory and harmonic analysis on semisimple Lie groups.
In mathematics, particularly the field of calculus and Fourier analysis, the Fourier sine and cosine series are two mathematical series named after Joseph Fourier.
Common integrals in quantum field theory are all variations and generalizations of Gaussian integrals to the complex plane and to multiple dimensions. Other integrals can be approximated by versions of the Gaussian integral. Fourier integrals are also considered.
In probability theory and directional statistics, a wrapped probability distribution is a continuous probability distribution that describes data points that lie on a unit n-sphere. In one dimension, a wrapped distribution consists of points on the unit circle. If $\phi$ is a random variate on the real line with probability density function (PDF) $p(\phi)$, then $z = e^{i\phi}$ is a circular variable distributed according to the wrapped distribution $p_{zw}(z)$, and $\theta = \arg z$ is an angular variable in the interval $(-\pi, \pi]$ distributed according to the wrapped distribution $p_w(\theta)$.
In mathematics, the Meixner–Pollaczek polynomials are a family of orthogonal polynomials $P_n^{(\lambda)}(x, \varphi)$ introduced by Meixner (1934), which up to elementary changes of variables are the same as the Pollaczek polynomials $P_n^{\lambda}(x, a, b)$ rediscovered by Pollaczek (1949) in the case $\lambda = 1/2$, and later generalized by him.