In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function (in the style of a higher-order function in computer science).
This article considers mainly linear differential operators, which are the most common type. However, non-linear differential operators also exist, such as the Schwarzian derivative.
Given a nonnegative integer $m$, an order-$m$ linear differential operator is a map $P$ from a function space $\mathcal{F}_1$ on $\mathbb{R}^n$ to another function space $\mathcal{F}_2$ that can be written as:

$$P = \sum_{|\alpha| \le m} a_\alpha(x) D^\alpha,$$

where $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)$ is a multi-index of non-negative integers, $|\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_n$, and for each $\alpha$, $a_\alpha(x)$ is a function on some open domain in $n$-dimensional space. The operator $D^\alpha$ is interpreted as

$$D^\alpha = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}.$$

Thus for a function $f \in \mathcal{F}_1$:

$$Pf = \sum_{|\alpha| \le m} a_\alpha(x) \frac{\partial^{|\alpha|} f}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}.$$
The notation $D^\alpha$ is justified (i.e., independent of the order of differentiation) because of the symmetry of second derivatives.
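As a concrete illustration, here is a minimal SymPy sketch that applies an order-2 operator written in multi-index form; the helper `apply_operator` and the particular operator and test function are our own choices, not part of the definition:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def apply_operator(f, coeffs, variables):
    """Apply P = sum_alpha a_alpha(x) D^alpha to f, where coeffs maps
    a multi-index (alpha_1, ..., alpha_n) to the coefficient a_alpha."""
    total = sp.Integer(0)
    for alpha, a in coeffs.items():
        term = f
        for v, k in zip(variables, alpha):
            if k:
                term = sp.diff(term, v, k)
        total += a * term
    return sp.expand(total)

# P = x1*D^(2,0) + D^(1,1) + x2*D^(0,1), an order-2 operator
coeffs = {(2, 0): x1, (1, 1): sp.Integer(1), (0, 1): x2}
f = x1**3 * x2
print(apply_operator(f, coeffs, (x1, x2)))
# 6*x1**2*x2 + 3*x1**2 + x1**3*x2
```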
The polynomial $p$ obtained by replacing the partials $D^\alpha$ by variables $\xi^\alpha$ in $P$ is called the total symbol of $P$; i.e., the total symbol of $P$ above is

$$p(x, \xi) = \sum_{|\alpha| \le m} a_\alpha(x)\, \xi^\alpha,$$

where $\xi^\alpha = \xi_1^{\alpha_1} \cdots \xi_n^{\alpha_n}$. The highest homogeneous component of the symbol, namely,

$$\sigma(x, \xi) = \sum_{|\alpha| = m} a_\alpha(x)\, \xi^\alpha,$$

is called the principal symbol of $P$. [1] While the total symbol is not intrinsically defined, the principal symbol is intrinsically defined (i.e., it is a function on the cotangent bundle). [2]
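Continuing the sketch above, the total and principal symbols can be read off by replacing each $D^\alpha$ with $\xi^\alpha$ (again illustrative; the coefficient dictionary is the same hypothetical one):

```python
import sympy as sp

x1, x2, xi1, xi2 = sp.symbols('x1 x2 xi1 xi2')

# Same hypothetical coefficients: P = x1*D^(2,0) + D^(1,1) + x2*D^(0,1)
coeffs = {(2, 0): x1, (1, 1): sp.Integer(1), (0, 1): x2}
m = max(sum(alpha) for alpha in coeffs)        # order of P (here m = 2)

total = sum(a * xi1**i * xi2**j for (i, j), a in coeffs.items())
principal = sum(a * xi1**i * xi2**j
                for (i, j), a in coeffs.items() if i + j == m)

print(total)       # x1*xi1**2 + xi1*xi2 + x2*xi2
print(principal)   # x1*xi1**2 + xi1*xi2  (the order-1 term drops out)
```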
More generally, let $E$ and $F$ be vector bundles over a manifold $X$. Then the linear operator

$$P : C^\infty(E) \to C^\infty(F)$$

is a differential operator of order $k$ if, in local coordinates on $X$, we have

$$Pu(x) = \sum_{|\alpha| = k} P^\alpha(x) \frac{\partial^k u}{\partial x^\alpha} + \text{lower-order terms},$$

where, for each multi-index $\alpha$, $P^\alpha(x) : E \to F$ is a bundle map, symmetric on the indices $\alpha$.
The $k$th order coefficients of $P$ transform as a symmetric tensor

$$\sigma_P : S^k(T^*X) \otimes E \to F,$$

whose domain is the tensor product of the $k$th symmetric power of the cotangent bundle of $X$ with $E$, and whose codomain is $F$. This symmetric tensor is known as the principal symbol (or just the symbol) of $P$.
The coordinate system $x^i$ permits a local trivialization of the cotangent bundle by the coordinate differentials $dx^i$, which determine fiber coordinates $\xi_i$. In terms of a basis of frames $e_\mu$, $f_\nu$ of $E$ and $F$, respectively, the differential operator $P$ decomposes into components

$$(Pu)_\nu = \sum_\mu P_{\nu\mu} u_\mu$$

on each section $u$ of $E$. Here $P_{\nu\mu}$ is the scalar differential operator defined by

$$P_{\nu\mu} = \sum_\alpha P^\alpha_{\nu\mu} \frac{\partial}{\partial x^\alpha}.$$

With this trivialization, the principal symbol can now be written

$$(\sigma_P(\xi)u)_\nu = \sum_{|\alpha| = k} \sum_\mu P^\alpha_{\nu\mu}(x)\, \xi_\alpha\, u_\mu.$$

In the cotangent space over a fixed point $x$ of $X$, the symbol $\sigma_P$ defines a homogeneous polynomial of degree $k$ in $T^*_x X$ with values in $\operatorname{Hom}(E_x, F_x)$.
A differential operator $P$ and its symbol appear naturally in connection with the Fourier transform as follows. Let $f$ be a Schwartz function. Then by the inverse Fourier transform,

$$Pf(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{i x \cdot \xi}\, p(x, i\xi)\, \hat{f}(\xi)\, d\xi.$$
This exhibits $P$ as a Fourier multiplier. The pseudo-differential operators arise from a more general class of functions $p(x, \xi)$, satisfying at most polynomial growth conditions in $\xi$, for which this integral remains well-behaved.
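A quick numerical sanity check of the multiplier picture, using NumPy's FFT on a periodic grid (the grid size and the Gaussian test function are arbitrary choices): the first derivative should act as multiplication by $i\xi$ in frequency space.

```python
import numpy as np

# Check D f = F^{-1}[ i*xi * F f ] for a Gaussian, on a periodic grid.
n = 256
x = np.linspace(-10, 10, n, endpoint=False)
f = np.exp(-x**2)

xi = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])   # angular frequencies
df_spectral = np.fft.ifft(1j * xi * np.fft.fft(f)).real
df_exact = -2 * x * np.exp(-x**2)

print(np.max(np.abs(df_spectral - df_exact)))   # ~1e-12: spectral accuracy
```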
The conceptual step of writing a differential operator as something free-standing is attributed to Louis François Antoine Arbogast in 1800. [3]
The most common differential operator is the action of taking the derivative. Common notations for taking the first derivative with respect to a variable $x$ include:

$$\frac{d}{dx}, \quad D, \quad D_x, \quad \text{and} \quad \partial_x.$$

When taking higher, $n$th order derivatives, the operator may be written:

$$\frac{d^n}{dx^n}, \quad D^n, \quad D_x^n, \quad \text{or} \quad \partial_x^n.$$

The derivative of a function $f$ of an argument $x$ is sometimes given as either of the following:

$$[f(x)]' \quad \text{or} \quad f'(x).$$
The $D$ notation's use and creation is credited to Oliver Heaviside, who considered differential operators of the form

$$\sum_{k=0}^n c_k D^k$$
in his study of differential equations.
One of the most frequently seen differential operators is the Laplacian operator, defined by

$$\Delta = \nabla^2 = \sum_{k=1}^n \frac{\partial^2}{\partial x_k^2}.$$
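For instance, one can verify symbolically that $1/r$ is harmonic on $\mathbb{R}^3 \setminus \{0\}$ (a small SymPy sketch; the `laplacian` helper is our own):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def laplacian(expr, variables):
    """Sum of unmixed second partials."""
    return sum(sp.diff(expr, v, 2) for v in variables)

# The function 1/r on R^3 has vanishing Laplacian away from the origin
r = sp.sqrt(x**2 + y**2 + z**2)
print(sp.simplify(laplacian(1 / r, (x, y, z))))   # 0
```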
Another differential operator is the $\Theta$ operator, or theta operator, defined by [4]

$$\Theta = z \frac{d}{dz}.$$

This is sometimes also called the homogeneity operator, because its eigenfunctions are the monomials in $z$:

$$\Theta(z^k) = k z^k, \quad k = 0, 1, 2, \ldots$$

In $n$ variables the homogeneity operator is given by

$$\Theta = \sum_{k=1}^n x_k \frac{\partial}{\partial x_k}.$$
As in one variable, the eigenspaces of $\Theta$ are the spaces of homogeneous functions (this is Euler's homogeneous function theorem).
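A short SymPy check of the eigenvalue relations (illustrative only):

```python
import sympy as sp

z, x, y = sp.symbols('z x y')
theta = lambda f: z * sp.diff(f, z)

for k in range(4):
    print(theta(z**k))        # 0, z, 2*z**2, 3*z**3: eigenvalue k

# In two variables, a homogeneous function of degree 3 has eigenvalue 3
Theta = lambda f: x * sp.diff(f, x) + y * sp.diff(f, y)
print(sp.expand(Theta(x**2 * y)))   # 3*x**2*y
```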
In writing, following common mathematical convention, the argument of a differential operator is usually placed on the right side of the operator itself. Sometimes an alternative notation is used: the result of applying the operator to the function on its left side, the result of applying it to the function on its right side, and the difference obtained when applying it to the functions on both sides, are denoted by arrows as follows:

$$f \overleftarrow{\partial_x} g = g \cdot \partial_x f,$$
$$f \overrightarrow{\partial_x} g = f \cdot \partial_x g,$$
$$f \overleftrightarrow{\partial_x} g = f \cdot \partial_x g - g \cdot \partial_x f.$$
Such a bidirectional-arrow notation is frequently used for describing the probability current of quantum mechanics.
Given a linear differential operator

$$Tu = \sum_{k=0}^n a_k(x) D^k u,$$

the adjoint of this operator is defined as the operator $T^*$ such that

$$\langle Tu, v \rangle = \langle u, T^* v \rangle,$$

where the notation $\langle \cdot, \cdot \rangle$ is used for the scalar product or inner product. This definition therefore depends on the definition of the scalar product (or inner product).
In the functional space of square-integrable functions on a real interval $(a, b)$, the scalar product is defined by

$$\langle f, g \rangle = \int_a^b \overline{f(x)}\, g(x)\, dx,$$

where the line over $f(x)$ denotes the complex conjugate of $f(x)$. If one moreover adds the condition that $f$ or $g$ vanishes as $x \to a$ and $x \to b$, one can also define the adjoint of $T$ by

$$T^* u = \sum_{k=0}^n (-1)^k D^k \left[\overline{a_k(x)}\, u\right].$$
This formula does not explicitly depend on the definition of the scalar product. It is therefore sometimes chosen as a definition of the adjoint operator. When $T^*$ is defined according to this formula, it is called the formal adjoint of $T$.
A (formally) self-adjoint operator is an operator equal to its own (formal) adjoint.
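The defining relation $\langle Tu, v \rangle = \langle u, T^*v \rangle$ can be checked symbolically for a simple real operator $Tf = a f''$ on $(0, 1)$, whose formal adjoint by the formula above is $T^*f = D^2(af)$. This is only a sketch: the coefficient and test functions are arbitrary choices that vanish at the endpoints, and conjugation is trivial since everything is real.

```python
import sympy as sp

x = sp.symbols('x')
a = x + 2                      # arbitrary smooth real coefficient
u = x * (1 - x)                # u and v vanish at both endpoints of (0, 1)
v = x**2 * (1 - x)

T  = lambda f: a * sp.diff(f, x, 2)        # T f  = a f''
Ts = lambda f: sp.diff(a * f, x, 2)        # T* f = D^2(a f), per the formula

lhs = sp.integrate(T(u) * v, (x, 0, 1))    # <Tu, v>
rhs = sp.integrate(u * Ts(v), (x, 0, 1))   # <u, T*v>
print(sp.simplify(lhs - rhs))              # 0
```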
If $\Omega$ is a domain in $\mathbb{R}^n$, and $P$ a differential operator on $\Omega$, then the adjoint of $P$ is defined in $L^2(\Omega)$ by duality in the analogous manner:

$$\langle f, P^* g \rangle_{L^2(\Omega)} = \langle P f, g \rangle_{L^2(\Omega)}$$

for all smooth $L^2$ functions $f$, $g$. Since smooth functions are dense in $L^2$, this defines the adjoint on a dense subset of $L^2$: $P^*$ is a densely defined operator.
The Sturm–Liouville operator is a well-known example of a formal self-adjoint operator. This second-order linear differential operator $L$ can be written in the form

$$Lu = -(pu')' + qu = -(pu'' + p'u') + qu = -pu'' - p'u' + qu = (-p)D^2u + (-p')Du + q u.$$
This property can be proven using the formal adjoint definition above. [5]
This operator is central to Sturm–Liouville theory, where the eigenfunctions (analogues of eigenvectors) of this operator are considered.
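A symbolic verification of formal self-adjointness for one concrete choice of $p$ and $q$ (illustrative only; the test functions are arbitrary smooth functions vanishing at both endpoints, which kills the boundary terms):

```python
import sympy as sp

x = sp.symbols('x')
p = 1 + x**2                      # arbitrary smooth real p and q
q = x
u = x * (1 - x)                   # u and v vanish at 0 and 1
v = x**2 * (1 - x)

L = lambda f: -sp.diff(p * sp.diff(f, x), x) + q * f

lhs = sp.integrate(L(u) * v, (x, 0, 1))   # <Lu, v>
rhs = sp.integrate(u * L(v), (x, 0, 1))   # <u, Lv>
print(sp.simplify(lhs - rhs))             # 0
```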
Differentiation is linear, i.e.

$$D(f + g) = (Df) + (Dg),$$
$$D(af) = a(Df),$$

where $f$ and $g$ are functions, and $a$ is a constant.
Any polynomial in $D$ with function coefficients is also a differential operator. We may also compose differential operators by the rule

$$(D_1 \circ D_2)(f) = D_1(D_2(f)).$$
Some care is then required: firstly, any function coefficients in the operator $D_2$ must be differentiable as many times as the application of $D_1$ requires. To get a ring of such operators we must assume derivatives of all orders of the coefficients used. Secondly, this ring will not be commutative: an operator $gD$ is not in general the same as $Dg$. For example, we have the relation basic in quantum mechanics:

$$Dx - xD = 1.$$
The subring of operators that are polynomials in D with constant coefficients is, by contrast, commutative. It can be characterised another way: it consists of the translation-invariant operators.
The differential operators also obey the shift theorem.
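Both the commutation relation and the shift theorem $P(D)(e^{ax}f) = e^{ax}P(D+a)f$ are easy to confirm symbolically (a sketch; the shift theorem is checked only for the representative case $P(D) = D^2$):

```python
import sympy as sp

x, a = sp.symbols('x a')
f = sp.Function('f')(x)

# D(x f) - x D(f) = f: the commutator [D, x] acts as the identity
comm = sp.diff(x * f, x) - x * sp.diff(f, x)
print(sp.simplify(comm))        # f(x)

# Shift theorem for P(D) = D^2: D^2(e^{a x} f) = e^{a x} (D + a)^2 f
lhs = sp.diff(sp.exp(a * x) * f, x, 2)
rhs = sp.exp(a * x) * (sp.diff(f, x, 2) + 2 * a * sp.diff(f, x) + a**2 * f)
print(sp.simplify(lhs - rhs))   # 0
```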
If $R$ is a ring, let $R\langle D, X\rangle$ be the non-commutative polynomial ring over $R$ in the variables $D$ and $X$, and $I$ the two-sided ideal generated by $DX - XD - 1$. Then the ring of univariate polynomial differential operators over $R$ is the quotient ring $R\langle D, X\rangle / I$. This is a non-commutative simple ring. Every element can be written in a unique way as an $R$-linear combination of monomials of the form $X^a D^b \bmod I$. It supports an analogue of Euclidean division of polynomials.
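The normal form can be made computational. The following sketch (our own representation, not a standard library) stores elements as dictionaries of monomials $X^a D^b$ and multiplies them by repeatedly applying $D^b X^c = \sum_k \binom{b}{k} \frac{c!}{(c-k)!} X^{c-k} D^{b-k}$:

```python
from math import comb, perm
from collections import defaultdict

# Elements of the Weyl algebra R<D, X>/(DX - XD - 1) as dicts mapping
# the normally ordered monomial X^a D^b (written (a, b)) to its coefficient.

def weyl_mul(p, q):
    """Multiply two normally ordered elements, renormalizing via
    D^b X^c = sum_k C(b, k) * c!/(c-k)! * X^(c-k) D^(b-k)."""
    out = defaultdict(int)
    for (a, b), cp in p.items():
        for (c, d), cq in q.items():
            for k in range(min(b, c) + 1):
                out[(a + c - k, b + d - k)] += cp * cq * comb(b, k) * perm(c, k)
    return dict(out)

D = {(0, 1): 1}
X = {(1, 0): 1}

# DX - XD = 1, recovering the defining relation
DX = weyl_mul(D, X)
XD = weyl_mul(X, D)
print({m: DX.get(m, 0) - XD.get(m, 0) for m in set(DX) | set(XD)})
# {(1, 1): 0, (0, 0): 1}  ->  the commutator is the identity
```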
Differential modules over $R[X]$ (for the standard derivation) can be identified with modules over $R\langle D, X\rangle / I$.
If $R$ is a ring, let $R\langle D_1, \ldots, D_n, X_1, \ldots, X_n\rangle$ be the non-commutative polynomial ring over $R$ in the variables $D_1, \ldots, D_n, X_1, \ldots, X_n$, and $I$ the two-sided ideal generated by the elements

$$(D_i X_j - X_j D_i) - \delta_{ij}, \quad D_i D_j - D_j D_i, \quad X_i X_j - X_j X_i$$

for all $1 \le i, j \le n$, where $\delta_{ij}$ is the Kronecker delta. Then the ring of multivariate polynomial differential operators over $R$ is the quotient ring $R\langle D_1, \ldots, D_n, X_1, \ldots, X_n\rangle / I$. This is a non-commutative simple ring. Every element can be written in a unique way as an $R$-linear combination of monomials of the form $X_1^{a_1} \cdots X_n^{a_n} D_1^{b_1} \cdots D_n^{b_n}$.
In differential geometry and algebraic geometry it is often convenient to have a coordinate-independent description of differential operators between two vector bundles. Let $E$ and $F$ be two vector bundles over a differentiable manifold $M$. An $\mathbb{R}$-linear mapping of sections $P : \Gamma(E) \to \Gamma(F)$ is said to be a $k$th-order linear differential operator if it factors through the jet bundle $J^k(E)$. In other words, there exists a linear mapping of vector bundles

$$i_P : J^k(E) \to F$$

such that

$$P = i_P \circ j^k,$$

where $j^k : \Gamma(E) \to \Gamma(J^k(E))$ is the prolongation that associates to any section of $E$ its $k$-jet.
This just means that for a given section $s$ of $E$, the value of $P(s)$ at a point $x \in M$ is fully determined by the $k$th-order infinitesimal behavior of $s$ at $x$. In particular this implies that $P(s)(x)$ is determined by the germ of $s$ at $x$, which is expressed by saying that differential operators are local. A foundational result is the Peetre theorem showing that the converse is also true: any (linear) local operator is differential.
An equivalent, but purely algebraic description of linear differential operators is as follows: an $\mathbb{R}$-linear map $P$ is a $k$th-order linear differential operator if, for any $k + 1$ smooth functions $f_0, \ldots, f_k \in C^\infty(M)$, we have

$$[f_k, [f_{k-1}, [\cdots [f_0, P] \cdots]]] = 0.$$

Here the bracket $[f, P] : \Gamma(E) \to \Gamma(F)$ is defined as the commutator

$$[f, P](s) = P(f \cdot s) - f \cdot P(s).$$
This characterization of linear differential operators shows that they are particular mappings between modules over a commutative algebra, allowing the concept to be seen as a part of commutative algebra.
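The bracket condition can be checked symbolically for a first-order operator, where two nested brackets ($k + 1 = 2$ functions) must vanish (a SymPy sketch with arbitrary smooth functions):

```python
import sympy as sp

x = sp.symbols('x')
f0, f1, s = (sp.Function(n)(x) for n in ('f0', 'f1', 's'))

P = lambda u: sp.diff(u, x)                          # a first-order operator
bracket = lambda f, Q: (lambda u: Q(f * u) - f * Q(u))

inner = bracket(f0, P)       # [f0, P] is zeroth order: multiplication by f0'
print(sp.simplify(bracket(f1, inner)(s)))            # 0, as required for k = 1
```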
A differential operator of infinite order is (roughly) a differential operator whose total symbol is a power series instead of a polynomial.
A differential operator acting on two functions is called a bidifferential operator. The notion appears, for instance, in an associative algebra structure on a deformation quantization of a Poisson algebra. [6]
A microdifferential operator is a type of operator on an open subset of a cotangent bundle, as opposed to an open subset of a manifold. It is obtained by extending the notion of a differential operator to the cotangent bundle. [7]