Square-integrable function


In mathematics, a square-integrable function, also called a quadratically integrable function, $L^2$ function or square-summable function, [1] is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. Thus, square-integrability on the real line $(-\infty, +\infty)$ is defined as follows.

\[
f : \mathbb{R} \to \mathbb{C} \text{ square integrable} \quad \iff \quad \int_{-\infty}^{\infty} |f(x)|^2 \, dx < \infty
\]
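
As an illustrative worked example (not drawn from the cited sources), the function $f(x) = e^{-|x|}$ is square integrable on the real line, since

\[
\int_{-\infty}^{\infty} \left| e^{-|x|} \right|^2 dx = \int_{-\infty}^{\infty} e^{-2|x|} \, dx = 2 \int_{0}^{\infty} e^{-2x} \, dx = 1 < \infty .
\]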


One may also speak of quadratic integrability over bounded intervals such as $[a, b]$ for $a \le b$. [2]

An equivalent definition is to say that the square of the function itself (rather than of its absolute value) is Lebesgue integrable. For this to be true, the integrals of the positive and negative portions of the real part must both be finite, as well as those for the imaginary part.

The vector space of (equivalence classes of) square integrable functions (with respect to Lebesgue measure) forms the $L^p$ space with $p = 2$. Among the $L^p$ spaces, the class of square integrable functions is unique in being compatible with an inner product, which allows notions like angle and orthogonality to be defined. Along with this inner product, the square integrable functions form a Hilbert space, since all of the $L^p$ spaces are complete under their respective $p$-norms.

Often the term is used not to refer to a specific function, but to equivalence classes of functions that are equal almost everywhere.

Properties

The square integrable functions (in the sense mentioned above, in which a "function" actually means an equivalence class of functions that are equal almost everywhere) form an inner product space with inner product given by

\[
\langle f, g \rangle = \int_A \overline{f(x)}\, g(x) \, dx ,
\]

where $f$ and $g$ are square integrable functions, $\overline{f(x)}$ is the complex conjugate of $f(x)$, and $A$ is the set over which one integrates: in the first definition (given in the introduction above), $A$ is $(-\infty, +\infty)$; in the second, $A$ is $[a, b]$.

Since $|a|^2 = a \, \overline{a}$, square integrability is the same as saying

\[
\langle f, f \rangle < \infty .
\]
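
The following is a minimal numerical sketch, not part of the original article, showing how the inner product above can be approximated on the bounded interval $[0, 1]$; it assumes NumPy and SciPy are available, and the helper name l2_inner_product is chosen purely for illustration.

    # Approximate <f, g> = integral over A of conj(f(x)) g(x) dx, with A = [0, 1].
    # scipy.integrate.quad only accepts real-valued integrands, so the real and
    # imaginary parts of the integrand are integrated separately.
    import numpy as np
    from scipy.integrate import quad

    def l2_inner_product(f, g, a, b):
        integrand = lambda x: np.conj(f(x)) * g(x)
        re, _ = quad(lambda x: integrand(x).real, a, b)
        im, _ = quad(lambda x: integrand(x).imag, a, b)
        return re + 1j * im

    f = lambda x: np.exp(1j * x)   # |f(x)| = 1, so f is square integrable on [0, 1]
    g = lambda x: x + 0j

    print(l2_inner_product(f, g, 0.0, 1.0))                # <f, g>
    print(np.sqrt(l2_inner_product(f, f, 0.0, 1.0).real))  # ||f|| = sqrt(<f, f>) = 1.0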

It can be shown that square integrable functions form a complete metric space under the metric induced by the inner product defined above. A complete metric space is also called a Cauchy space, because sequences in such metric spaces converge if and only if they are Cauchy. A space that is complete under the metric induced by a norm is a Banach space. Therefore, the space of square integrable functions is a Banach space under the metric induced by the norm, which in turn is induced by the inner product. Since the norm comes from an inner product, the space is specifically a Hilbert space: it is complete under the metric induced by the inner product.

This inner product space is conventionally denoted by $\left( L_2, \langle \cdot, \cdot \rangle_2 \right)$ and many times abbreviated as $L_2$. Note that $L_2$ denotes the set of square integrable functions, but no selection of metric, norm or inner product is specified by this notation. The set, together with the specific inner product $\langle \cdot, \cdot \rangle_2$, specifies the inner product space.

The space of square integrable functions is the $L^p$ space in which $p = 2$.

Examples

The function $\tfrac{1}{x^n}$, defined on $(0, 1)$, is in $L^2$ for $n < \tfrac{1}{2}$ but not for $n \ge \tfrac{1}{2}$. [1] The function $\tfrac{1}{x}$, defined on $[1, \infty)$, is square-integrable. [3]
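
These claims can be checked directly; the following worked computations are added here for illustration:

\[
\int_0^1 \left( \frac{1}{x^n} \right)^{\!2} dx = \int_0^1 x^{-2n} \, dx = \frac{1}{1 - 2n} < \infty \quad \text{for } n < \tfrac{1}{2} ,
\]

while the integral diverges for $n \ge \tfrac{1}{2}$, and

\[
\int_1^{\infty} \frac{1}{x^2} \, dx = 1 < \infty .
\]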

Bounded functions, defined on $[0, 1]$, are square-integrable. These functions are also in $L^p$ for any value of $p$. [3]
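
Indeed (a one-line verification added for illustration), if $|f(x)| \le M$ for all $x \in [0, 1]$, then

\[
\int_0^1 |f(x)|^2 \, dx \le \int_0^1 M^2 \, dx = M^2 < \infty .
\]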

Non-examples

The function $\tfrac{1}{x}$, defined on $[0, 1]$ (where the value at $0$ is arbitrary), is not square-integrable. Furthermore, this function is not in $L^p$ for any value of $p$ in $[1, \infty)$. [3]
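
The failure can be seen directly (worked computation added for illustration): for $0 < \varepsilon < 1$,

\[
\int_{\varepsilon}^{1} \frac{1}{x^2} \, dx = \frac{1}{\varepsilon} - 1 \longrightarrow \infty \quad \text{as } \varepsilon \to 0^{+} ,
\]

so $\int_0^1 |1/x|^2 \, dx$ is infinite.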


Related Research Articles

In mathematics, more specifically in functional analysis, a Banach space is a complete normed vector space. Thus, a Banach space is a vector space with a metric that allows the computation of vector length and distance between vectors and is complete in the sense that a Cauchy sequence of vectors always converges to a well-defined limit that is within the space.

<span class="mw-page-title-main">Inner product space</span> Generalization of the dot product; used to define Hilbert spaces

In mathematics, an inner product space is a real vector space or a complex vector space with an operation called an inner product. The inner product of two vectors in the space is a scalar, often denoted with angle brackets such as in $\langle a, b \rangle$. Inner products allow formal definitions of intuitive geometric notions, such as lengths, angles, and orthogonality of vectors. Inner product spaces generalize Euclidean vector spaces, in which the inner product is the dot product or scalar product of Cartesian coordinates. Inner product spaces of infinite dimension are widely used in functional analysis. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. The first usage of the concept of a vector space with an inner product is due to Giuseppe Peano, in 1898.

The Riesz representation theorem, sometimes called the Riesz–Fréchet representation theorem after Frigyes Riesz and Maurice René Fréchet, establishes an important connection between a Hilbert space and its continuous dual space. If the underlying field is the real numbers, the two are isometrically isomorphic; if the underlying field is the complex numbers, the two are isometrically anti-isomorphic. The (anti-) isomorphism is a particular natural isomorphism.

The Cauchy–Schwarz inequality is an upper bound on the inner product between two vectors in an inner product space in terms of the product of the vector norms. It is considered one of the most important and widely used inequalities in mathematics.
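
In symbols (added here for concreteness), the inequality states that for any vectors $u$ and $v$ of an inner product space,

\[
|\langle u, v \rangle| \le \|u\| \, \|v\| ,
\]

which for square integrable functions $f$ and $g$ reads $\left| \int_A \overline{f(x)}\, g(x) \, dx \right| \le \|f\|_2 \, \|g\|_2$.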

In mathematics, the Lp spaces are function spaces defined using a natural generalization of the p-norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue, although according to the Bourbaki group they were first introduced by Frigyes Riesz.

The Fock space is an algebraic construction used in quantum mechanics to construct the quantum states space of a variable or unknown number of identical particles from a single particle Hilbert space H. It is named after V. A. Fock who first introduced it in his 1932 paper "Konfigurationsraum und zweite Quantelung".

A generalized Fourier series is the expansion of a square integrable function into a sum of square integrable orthogonal basis functions. The standard Fourier series uses an orthonormal basis of trigonometric functions, and the series expansion is applied to periodic functions. In contrast, a generalized Fourier series uses any set of orthogonal basis functions and can apply to any square integrable function.

In mathematics, a function $f : V \to W$ between two complex vector spaces is said to be antilinear or conjugate-linear if

\[
f(x + y) = f(x) + f(y) \quad \text{and} \quad f(s x) = \overline{s} \, f(x)
\]

hold for all vectors $x, y \in V$ and every complex number $s$, where $\overline{s}$ denotes the complex conjugate of $s$.

<span class="mw-page-title-main">Parallelogram law</span> Sum of the squares of all 4 sides of a parallelogram equals that of the 2 diagonals

In mathematics, the simplest form of the parallelogram law belongs to elementary geometry. It states that the sum of the squares of the lengths of the four sides of a parallelogram equals the sum of the squares of the lengths of the two diagonals. We use these notations for the sides: AB, BC, CD, DA. But since in Euclidean geometry a parallelogram necessarily has opposite sides equal, that is, AB = CD and BC = DA, the law can be stated as

\[
2 AB^2 + 2 BC^2 = AC^2 + BD^2 .
\]

<span class="mw-page-title-main">Reproducing kernel Hilbert space</span> In functional analysis, a Hilbert space

In functional analysis, a reproducing kernel Hilbert space (RKHS) is a Hilbert space of functions in which point evaluation is a continuous linear functional. Specifically, a Hilbert space $H$ of functions from a set $X$ is an RKHS if, for each $x \in X$, there exists a function $K_x \in H$ such that for all $f \in H$,

\[
\langle f, K_x \rangle = f(x) .
\]

In mathematics, specifically in operator theory, each linear operator $A$ on an inner product space defines a Hermitian adjoint operator $A^*$ on that space according to the rule

\[
\langle A x, y \rangle = \langle x, A^* y \rangle ,
\]

where $\langle \cdot, \cdot \rangle$ is the inner product on the space.

In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude or length of the vector. This norm can be defined as the square root of the inner product of a vector with itself.

In mathematics, the Riesz–Fischer theorem in real analysis is any of a number of closely related results concerning the properties of the space L2 of square integrable functions. The theorem was proven independently in 1907 by Frigyes Riesz and Ernst Sigismund Fischer.

<span class="mw-page-title-main">Polarization identity</span> Formula relating the norm and the inner product in a inner product space

In linear algebra, a branch of mathematics, the polarization identity is any one of a family of formulas that express the inner product of two vectors in terms of the norm of a normed vector space. If a norm arises from an inner product then the polarization identity can be used to express this inner product entirely in terms of the norm. The polarization identity shows that a norm can arise from at most one inner product; however, there exist norms that do not arise from any inner product.
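
For example (an added illustration), in a real inner product space one member of this family is

\[
\langle x, y \rangle = \tfrac{1}{4} \left( \|x + y\|^2 - \|x - y\|^2 \right) .
\]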

In mathematics, weak convergence in a Hilbert space is the convergence of a sequence of points in the weak topology.

In mathematics, in the field of functional analysis, an indefinite inner product space is an infinite-dimensional complex vector space equipped with both an indefinite inner product and a positive semi-definite inner product.

In functional analysis, the dual norm is a measure of size for a continuous linear function defined on a normed vector space.

<span class="mw-page-title-main">Hilbert space</span> Type of topological vector space

In mathematics, Hilbert spaces allow the methods of linear algebra and calculus to be generalized from (finite-dimensional) Euclidean vector spaces to spaces that may be infinite-dimensional. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as function spaces. Formally, a Hilbert space is a vector space equipped with an inner product that induces a distance function for which the space is a complete metric space. A Hilbert space is a special case of a Banach space.

In mathematics, the Frobenius inner product is a binary operation that takes two matrices and returns a scalar. It is often denoted $\langle \mathbf{A}, \mathbf{B} \rangle_{\mathrm{F}}$. The operation is a component-wise inner product of two matrices as though they are vectors, and satisfies the axioms for an inner product. The two matrices must have the same dimension—same number of rows and columns—but are not restricted to be square matrices.

This is a glossary of terminology in the mathematical field of functional analysis.

References

  1. Rowland, Todd. "L^2-Function". MathWorld – A Wolfram Web Resource.
  2. Sansone, Giovanni (1991). Orthogonal Functions. Dover Publications. pp. 1–2. ISBN 978-0-486-66730-0.
  3. "Lp Functions" (PDF). Archived from the original (PDF) on 2020-10-24. Retrieved 2020-01-16.