In mathematics, the term functional (as a noun) has at least three meanings.
This article is mainly concerned with the second concept, which arose in the early 18th century as part of the calculus of variations. The first concept, which is more modern and abstract, is discussed in detail in a separate article, under the name linear form. The third concept is detailed in the article on higher-order functions.
Commonly, the domain of a functional is a space of functions. In this case, the functional is a "function of a function", and some older authors actually define the term "functional" to mean "function of a function". However, the fact that the domain is a space of functions is not mathematically essential, so this older definition is no longer prevalent.
The term originates from the calculus of variations, where one searches for a function that minimizes (or maximizes) a given functional. A particularly important application in physics is the search for a state of a system that minimizes (or maximizes) the action, that is, the time integral of the Lagrangian.
The mapping
x_{0} ↦ f(x_{0})
is a function, where x_{0} is an argument of a function f. At the same time, the mapping of a function to the value of the function at a point,
f ↦ f(x_{0}),
is a functional; here, x_{0} is a parameter.
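The distinction can be sketched in code: a minimal Python illustration (the function names are illustrative, not standard) in which fixing the function and varying the point gives an ordinary function, while fixing the point and varying the function gives a functional.

```python
import math

def evaluation_functional(f, x0):
    """Map a function f to its value at the fixed point x0."""
    return f(x0)

# Fixing f and varying x0 gives an ordinary function of x0;
# fixing x0 and varying f gives a functional of f.
print(evaluation_functional(math.sin, 0.0))  # 0.0
print(evaluation_functional(math.exp, 0.0))  # 1.0
```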
Provided that f is a linear function from a vector space to the underlying scalar field, the above linear maps are dual to each other, and in functional analysis both are called linear functionals.
Integrals such as
f ↦ I[f] = ∫_{Ω} H(f(x), f′(x), …) dx
form a special class of functionals. They map a function f into a real number, provided that H is real-valued. Examples include the area under the graph of a positive function f,
f ↦ ∫_{x_{0}}^{x_{1}} f(x) dx,
and the arc length of a curve in the Euclidean plane,
f ↦ ∫_{x_{0}}^{x_{1}} √(1 + [f′(x)]²) dx.
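As a numeric sketch of one such integral functional, the arc-length functional can be approximated with a midpoint rule; the helper names below are illustrative, not standard.

```python
import math

def arc_length(f, a, b, n=10000):
    """Approximate the arc-length functional L[f] = ∫ sqrt(1 + f'(x)^2) dx."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h                   # midpoint of the i-th segment
        df = (f(x + h / 2) - f(x - h / 2)) / h  # central-difference slope
        total += math.sqrt(1.0 + df * df) * h
    return total

# The graph of f(x) = x over [0, 1] is a segment of length sqrt(2) ≈ 1.41421.
print(arc_length(lambda x: x, 0.0, 1.0))
```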
Given an inner product space X and a fixed vector x ∈ X, the map defined by y ↦ ⟨x, y⟩ is a linear functional on X. The set of vectors y such that ⟨x, y⟩ is zero is a vector subspace of X, called the null space or kernel of the functional, or the orthogonal complement of x, denoted {x}^{⊥}.
For example, taking the inner product with a fixed function g defines a (linear) functional on the Hilbert space L^{2}([−π, π]) of square integrable functions on [−π, π]:
f ↦ ⟨f, g⟩ = ∫_{−π}^{π} f̄(x) g(x) dx
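A numeric sketch of this inner-product functional, using a midpoint-rule quadrature for real-valued functions (so no complex conjugation is needed); names are illustrative.

```python
import math

def inner_product_functional(f, g, a=-math.pi, b=math.pi, n=10000):
    """Approximate ⟨f, g⟩ = ∫ f(x) g(x) dx over [a, b] for real-valued f, g."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h)
               for i in range(n)) * h

g = math.sin  # the fixed function defining the functional
# ⟨2·sin, sin⟩ = 2 ∫ sin² = 2π ≈ 6.28319, illustrating linearity in f.
print(inner_product_functional(lambda x: 2 * math.sin(x), g))
```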
If a functional's value can be computed for small segments of the input curve and then summed to find the total value, the functional is called local. Otherwise it is called non-local. For example,
F(y) = ∫_{x_{0}}^{x_{1}} y(x) dx
is local, while
F(y) = ( ∫_{x_{0}}^{x_{1}} y(x) dx ) / ( ∫_{x_{0}}^{x_{1}} (1 + [y(x)]²) dx )
is non-local. This occurs commonly when integrals appear separately in the numerator and denominator of an expression, as in calculations of the center of mass.
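The center-of-mass case can be sketched numerically: the result is a ratio of two integrals over the whole domain, which is why it cannot be assembled from per-segment values. Names are illustrative.

```python
def center_of_mass(density, a, b, n=10000):
    """x̄ = ∫ x ρ(x) dx / ∫ ρ(x) dx over [a, b] (midpoint rule)."""
    h = (b - a) / n
    num = den = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        num += x * density(x) * h  # numerator integral
        den += density(x) * h      # denominator integral
    return num / den

# A uniform density on [0, 2] balances at x = 1.
print(center_of_mass(lambda x: 1.0, 0.0, 2.0))
```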
The traditional usage also applies when one talks about a functional equation, meaning an equation between functionals: an equation F = G between functionals can be read as an 'equation to solve', with the solutions being themselves functions. In such equations there may be several sets of variable unknowns, as when it is said that an additive function f is one satisfying Cauchy's functional equation
f(x + y) = f(x) + f(y).
Functional derivatives are used in Lagrangian mechanics. They are derivatives of functionals: i.e. they carry information on how a functional changes when the input function changes by a small amount.
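A discretized sketch of a functional derivative, under the assumption F[f] = ∫ f(x)² dx, whose functional derivative is δF/δf(x) = 2 f(x); the grid and names are illustrative, not a full Lagrangian-mechanics example.

```python
def F(values, h):
    """Discretized F[f] = ∫ f(x)^2 dx (rectangle rule on a uniform grid)."""
    return sum(v * v for v in values) * h

n, h = 100, 0.01                        # grid covering [0, 1]
f = [0.5 * (i * h) for i in range(n)]   # samples of f(x) = x / 2

i, eps = 40, 1e-6
f_pert = list(f)
f_pert[i] += eps                        # bump f at the grid point x_i = 0.4
# (F[f + ε e_i] − F[f]) / (ε h) approximates δF/δf(x_i) = 2 f(x_i) = 0.4
deriv = (F(f_pert, h) - F(f, h)) / (eps * h)
print(deriv, 2 * f[i])
```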
Richard Feynman used functional integrals as the central idea in his sum-over-histories formulation of quantum mechanics. This usage implies an integral taken over some function space.
In mathematics, a linear map is a mapping V → W between two modules that preserves the operations of addition and scalar multiplication. If a linear map is a bijection then it is called a linear isomorphism.
In mathematics, an operator is generally a mapping or function that acts on elements of a space to produce elements of another space. There is no general definition of an operator, but the term is often used in place of function when the domain is a set of functions or other structured objects. Also, the domain of an operator is often difficult to characterize explicitly, and may be extended to related objects. See Operator (physics) for other examples.
In mathematics, weak topology is an alternative term for certain initial topologies, often on topological vector spaces or spaces of linear operators, for instance on a Hilbert space. The term is most commonly used for the initial topology of a topological vector space with respect to its continuous dual. The remainder of this article will deal with this case, which is one of the concepts of functional analysis.
In mathematics, the Dirac delta function is a generalized function or distribution introduced by the physicist Paul Dirac. It is used to model the density of an idealized point mass or point charge as a function that is equal to zero everywhere except at zero and whose integral over the entire real line is equal to one. As no ordinary function has these properties, the computations made by theoretical physicists appeared to mathematicians as nonsense until the introduction of distributions by Laurent Schwartz formalized and validated the computations. As a distribution, the Dirac delta function is a linear functional that maps every function to its value at zero. The Kronecker delta function, which is usually defined on a discrete domain and takes values 0 and 1, is the discrete analog of the Dirac delta function.
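The discrete analogy can be shown directly: summing a sequence against the Kronecker delta picks out its value at zero, just as integrating against the Dirac delta evaluates a function at zero. A minimal sketch with illustrative values:

```python
def kronecker_delta(n):
    """1 at n = 0, and 0 elsewhere."""
    return 1 if n == 0 else 0

f = {-2: 3.0, -1: 1.0, 0: 7.0, 1: 4.0, 2: 2.0}  # sample sequence values
picked = sum(f[n] * kronecker_delta(n) for n in f)
print(picked)  # 7.0 — the value of f at 0
```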
In mathematics, a Fourier transform (FT) is a mathematical transform that decomposes a function into its constituent frequencies, such as the expression of a musical chord in terms of the volumes and frequencies of its constituent notes. The term Fourier transform refers to both the frequency domain representation and the mathematical operation that associates the frequency domain representation to a function of time.
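A short numeric sketch of this decomposition, assuming NumPy is available; the sampling rate and tone frequencies are illustrative.

```python
import numpy as np

fs = 1000                    # sampling rate in Hz (illustrative)
t = np.arange(0, 1, 1 / fs)  # one second of samples
# A "chord" of two tones: 50 Hz at amplitude 1 and 120 Hz at amplitude 0.5.
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = 2 * np.abs(np.fft.rfft(signal)) / len(t)  # per-tone amplitudes
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peaks = freqs[spectrum > 0.25]
print(peaks)  # the constituent frequencies: [ 50. 120.]
```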
In mathematics, Cauchy's integral formula, named after Augustin-Louis Cauchy, is a central statement in complex analysis. It expresses the fact that a holomorphic function defined on a disk is completely determined by its values on the boundary of the disk, and it provides integral formulas for all derivatives of a holomorphic function. Cauchy's formula shows that, in complex analysis, "differentiation is equivalent to integration": complex differentiation, like integration, behaves well under uniform limits – a result that does not hold in real analysis.
In mathematics, a self-adjoint operator on a finite-dimensional complex vector space V with inner product ⟨·, ·⟩ is a linear map A that is its own adjoint: ⟨Av, w⟩ = ⟨v, Aw⟩ for all vectors v and w. If V is finite-dimensional with a given orthonormal basis, this is equivalent to the condition that the matrix of A is a Hermitian matrix, i.e., equal to its conjugate transpose A^{∗}. By the finite-dimensional spectral theorem, V has an orthonormal basis such that the matrix of A relative to this basis is a diagonal matrix with entries in the real numbers. In this article, we consider generalizations of this concept to operators on Hilbert spaces of arbitrary dimension.
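A small numeric check of these statements, assuming NumPy is available: a Hermitian matrix equals its conjugate transpose and has real eigenvalues. The matrix is illustrative.

```python
import numpy as np

A = np.array([[2.0, 1j],
              [-1j, 2.0]])         # illustrative 2×2 Hermitian matrix
assert np.allclose(A, A.conj().T)  # equal to its conjugate transpose

evals = np.linalg.eigvalsh(A)      # eigenvalues of a Hermitian matrix
print(evals)  # real eigenvalues, here [1. 3.]
```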
In the mathematical field of differential geometry, one definition of a metric tensor is a type of function which takes as input a pair of tangent vectors v and w at a point of a surface and produces a real number scalar g(v, w) in a way that generalizes many of the familiar properties of the dot product of vectors in Euclidean space. In the same way as a dot product, metric tensors are used to define the length of and angle between tangent vectors. Through integration, the metric tensor allows one to define and compute the length of curves on the manifold.
In linear algebra, a linear form is a linear map from a vector space to its field of scalars. In ℝ^{n}, if vectors are represented as column vectors, then linear functionals are represented as row vectors, and their action on vectors is given by the matrix product with the row vector on the left and the column vector on the right. In general, if V is a vector space over a field k, then a linear functional f is a function from V to k that is linear:
f(v + w) = f(v) + f(w) for all v, w ∈ V,
f(a v) = a f(v) for all v ∈ V, a ∈ k.
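A minimal sketch of the row-vector picture in plain Python (illustrative names): the functional is the dot product with a fixed row, and linearity can be checked directly.

```python
def linear_form(row):
    """Return the linear functional v ↦ row · v on R^n."""
    return lambda v: sum(r * x for r, x in zip(row, v))

f = linear_form([1.0, 2.0, 3.0])           # the "row vector"
u, w = [1.0, 0.0, 0.0], [0.0, 1.0, 1.0]    # "column vectors"
print(f(u))                                # 1.0
# Linearity: f(u + w) equals f(u) + f(w).
print(f([a + b for a, b in zip(u, w)]), f(u) + f(w))  # 6.0 6.0
```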
In mathematics, a Green's function is the impulse response of an inhomogeneous linear differential operator defined on a domain with specified initial conditions or boundary conditions.
In mathematics, especially vector calculus and differential topology, a closed form is a differential form α whose exterior derivative is zero, and an exact form is a differential form, α, that is the exterior derivative of another differential form β. Thus, an exact form is in the image of d, and a closed form is in the kernel of d.
In functional analysis, a branch of mathematics, the Borel functional calculus is a functional calculus which has particularly broad scope. Thus for instance if T is an operator, applying the squaring function s → s^{2} to T yields the operator T^{2}. Using the functional calculus for larger classes of functions, we can for example define rigorously the "square root" of the (negative) Laplacian operator −Δ or the exponential e^{itΔ}.
In mathematics, particularly in functional analysis, a projection-valued measure (PVM) is a function defined on certain subsets of a fixed set and whose values are self-adjoint projections on a fixed Hilbert space. Projection-valued measures are formally similar to real-valued measures, except that their values are self-adjoint projections rather than real numbers. As in the case of ordinary measures, it is possible to integrate complex-valued functions with respect to a PVM; the result of such an integration is a linear operator on the given Hilbert space.
The Wheeler–DeWitt equation is a field equation. It is part of a theory that attempts to combine mathematically the ideas of quantum mechanics and general relativity, a step towards a theory of quantum gravity. In this approach, time plays a role different from what it does in non-relativistic quantum mechanics, leading to the so-called 'problem of time'. More specifically, the equation describes the quantum version of the Hamiltonian constraint using metric variables. Its commutation relations with the diffeomorphism constraints generate the Bergmann–Komar "group".
In mathematical analysis, and in applications in geometry, applied mathematics, engineering, and the natural sciences, a function of a real variable is a function whose domain is the real numbers ℝ, or a subset of ℝ that contains an interval of positive length. Most real functions that are considered and studied are differentiable in some interval. The most widely considered such functions are the real functions: the functions of a real variable whose codomain is the set of real numbers.
In theoretical physics, scalar field theory can refer to a relativistically invariant classical or quantum theory of scalar fields. A scalar field is invariant under any Lorentz transformation.
In mathematics, the disintegration theorem is a result in measure theory and probability theory. It rigorously defines the idea of a non-trivial "restriction" of a measure to a measure zero subset of the measure space in question. It is related to the existence of conditional probability measures. In a sense, "disintegration" is the opposite process to the construction of a product measure.
In the field of calculus of variations in mathematics, the method of Lagrange multipliers on Banach spaces can be used to solve certain infinite-dimensional constrained optimization problems. The method is a generalization of the classical method of Lagrange multipliers as used to find extrema of a function of finitely many variables.
The mathematical concept of a Hilbert space, named after David Hilbert, generalizes the notion of Euclidean space. It extends the methods of vector algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions. A Hilbert space is an abstract vector space possessing the structure of an inner product that allows length and angle to be measured. Furthermore, Hilbert spaces are complete: there are enough limits in the space to allow the techniques of calculus to be used.
In mathematics, convenient vector spaces are locally convex vector spaces satisfying a very mild completeness condition.