# Differential operator


In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function (in the style of a higher-order function in computer science).

This article considers mainly linear differential operators, which are the most common type. However, non-linear differential operators also exist, such as the Schwarzian derivative.

## Definition

Assume that there is a map ${\displaystyle A}$ from a function space ${\displaystyle {\mathcal {F}}_{1}}$ to another function space ${\displaystyle {\mathcal {F}}_{2}}$ and a function ${\displaystyle f\in {\mathcal {F}}_{2}}$ such that ${\displaystyle f}$ is the image of ${\displaystyle u\in {\mathcal {F}}_{1}}$, i.e., ${\displaystyle f=A(u)}$. A differential operator is represented as a finite linear combination of derivatives of ${\displaystyle u}$ up to some order, such as

${\displaystyle P(x,D)=\sum _{|\alpha |\leq m}a_{\alpha }(x)D^{\alpha }\ ,}$

where the list ${\displaystyle \alpha =(\alpha _{1},\alpha _{2},\cdots ,\alpha _{n})}$ of non-negative integers is called a multi-index, ${\displaystyle |\alpha |=\alpha _{1}+\alpha _{2}+\cdots +\alpha _{n}}$ is called the length of ${\displaystyle \alpha }$, ${\displaystyle a_{\alpha }(x)}$ are functions on some open domain in n-dimensional space, and ${\displaystyle D^{\alpha }=D_{1}^{\alpha _{1}}D_{2}^{\alpha _{2}}\cdots D_{n}^{\alpha _{n}}}$. The derivatives above are understood as derivatives of functions or, sometimes, of distributions or hyperfunctions, and either ${\textstyle D_{j}=-i{\frac {\partial }{\partial x_{j}}}}$ or, sometimes, ${\textstyle D_{j}={\frac {\partial }{\partial x_{j}}}}$.
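The definition above can be made concrete with a short symbolic sketch. The coefficient choices below are hypothetical, picked only for illustration: a two-variable operator ${\displaystyle P(x,D)}$ is stored as a dictionary mapping each multi-index ${\displaystyle \alpha }$ to its coefficient ${\displaystyle a_{\alpha }(x)}$, and applied to a test function using sympy.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# hypothetical coefficients a_alpha, keyed by the multi-index alpha = (alpha1, alpha2)
coeffs = {
    (0, 0): sp.Integer(1),   # a_(0,0) = 1
    (1, 0): x1,              # a_(1,0) = x1
    (0, 2): x1 * x2,         # a_(0,2) = x1 * x2
}

def apply_P(expr):
    """Apply P(x, D) = sum_{|alpha| <= m} a_alpha(x) D^alpha, with D_j = d/dx_j."""
    total = sp.Integer(0)
    for (m1, m2), a in coeffs.items():
        d = expr
        if m1:
            d = sp.diff(d, x1, m1)   # apply D_1^{alpha_1}
        if m2:
            d = sp.diff(d, x2, m2)   # apply D_2^{alpha_2}
        total += a * d
    return total

u = sp.sin(x1) * x2
print(sp.simplify(apply_P(u)))   # x2*sin(x1) + x1*x2*cos(x1)
```

Here the ${\displaystyle (0,2)}$ term contributes nothing because the test function is linear in ${\displaystyle x_{2}}$.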

## Notations

The most common differential operator is the action of taking the derivative. Common notations for taking the first derivative with respect to a variable x include:

${\displaystyle {\mathrm {d} \over \mathrm {d} x}}$, ${\displaystyle D}$, ${\displaystyle D_{x},}$ and ${\displaystyle \partial _{x}}$.

When taking higher, nth order derivatives, the operator may be written:

${\displaystyle {\mathrm {d} ^{n} \over \mathrm {d} x^{n}}}$, ${\displaystyle D^{n}}$, ${\displaystyle D_{x}^{n}}$, or ${\displaystyle \partial _{x}^{n}}$.

The derivative of a function f of an argument x is sometimes given as either of the following:

${\displaystyle [f(x)]'\,\!}$
${\displaystyle f'(x).\,\!}$

The use and creation of the D notation is credited to Oliver Heaviside, who considered differential operators of the form

${\displaystyle \sum _{k=0}^{n}c_{k}D^{k}}$

in his study of differential equations.

One of the most frequently seen differential operators is the Laplacian operator, defined by

${\displaystyle \Delta =\nabla ^{2}=\sum _{k=1}^{n}{\frac {\partial ^{2}}{\partial x_{k}^{2}}}.}$
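A quick symbolic check of the Laplacian, using sympy: the function ${\displaystyle r^{2}=x^{2}+y^{2}+z^{2}}$ has Laplacian 6, while ${\displaystyle 1/r}$ is harmonic away from the origin.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def laplacian(f, variables):
    """Sum of the pure second partial derivatives of f."""
    return sum(sp.diff(f, v, 2) for v in variables)

assert laplacian(x**2 + y**2 + z**2, (x, y, z)) == 6

# 1/r is harmonic away from the origin
r = sp.sqrt(x**2 + y**2 + z**2)
assert sp.simplify(laplacian(1 / r, (x, y, z))) == 0
```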

Another differential operator is the Θ operator, or theta operator, defined by [1]

${\displaystyle \Theta =z{d \over dz}.}$

This is sometimes also called the homogeneity operator, because its eigenfunctions are the monomials in z:

${\displaystyle \Theta (z^{k})=kz^{k},\quad k=0,1,2,\dots }$

In n variables the homogeneity operator is given by

${\displaystyle \Theta =\sum _{k=1}^{n}x_{k}{\frac {\partial }{\partial x_{k}}}.}$

As in one variable, the eigenspaces of Θ are the spaces of homogeneous polynomials.
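Both claims are easy to verify symbolically: monomials are eigenfunctions of ${\displaystyle \Theta }$ in one variable, and in several variables a homogeneous polynomial of degree d is an eigenfunction with eigenvalue d (Euler's identity). A sympy sketch:

```python
import sympy as sp

z = sp.symbols('z')
theta = lambda f: z * sp.diff(f, z)   # Theta = z d/dz

# eigenfunction check: Theta(z^k) = k z^k
for k in range(6):
    assert sp.simplify(theta(z**k) - k * z**k) == 0

# n-variable homogeneity operator on a degree-3 monomial
x1, x2 = sp.symbols('x1 x2')
g = x1**2 * x2
Theta_g = x1 * sp.diff(g, x1) + x2 * sp.diff(g, x2)
assert sp.simplify(Theta_g - 3 * g) == 0
```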

Following common mathematical convention, the argument of a differential operator is usually placed on the right side of the operator itself. Sometimes an alternative arrow notation is used: an arrow over the operator indicates whether it acts on the function to its left, on the function to its right, or on both sides as a difference:

${\displaystyle f{\overleftarrow {\partial _{x}}}g=g\cdot \partial _{x}f}$
${\displaystyle f{\overrightarrow {\partial _{x}}}g=f\cdot \partial _{x}g}$
${\displaystyle f{\overleftrightarrow {\partial _{x}}}g=f\cdot \partial _{x}g-g\cdot \partial _{x}f.}$

Such a bidirectional-arrow notation is frequently used for describing the probability current of quantum mechanics.

## Del

The differential operator del, also called nabla, is an important vector differential operator. It appears frequently in physics in places like the differential form of Maxwell's equations. In three-dimensional Cartesian coordinates, del is defined as

${\displaystyle \nabla =\mathbf {\hat {x}} {\partial \over \partial x}+\mathbf {\hat {y}} {\partial \over \partial y}+\mathbf {\hat {z}} {\partial \over \partial z}.}$

Del defines the gradient, and is used to calculate the curl, divergence, and Laplacian of various objects.
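These three operations built from del can be sketched with sympy's vector module (the coordinate system name `N` and the sample fields are arbitrary choices for illustration):

```python
from sympy.vector import CoordSys3D, Vector, gradient, divergence, curl

N = CoordSys3D('N')                      # Cartesian coordinates N.x, N.y, N.z
f = N.x**2 * N.y                         # a scalar field
F = N.x * N.i + N.y * N.j + N.z * N.k    # the position vector field

print(gradient(f))                       # grad f = 2*x*y x-hat + x**2 y-hat
assert divergence(F) == 3                # div of the position field is 3
assert curl(F) == Vector.zero            # the position field is irrotational
```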

## Adjoint of an operator

Given a linear differential operator ${\displaystyle T}$

${\displaystyle Tu=\sum _{k=0}^{n}a_{k}(x)D^{k}u}$

the adjoint of this operator is defined as the operator ${\displaystyle T^{*}}$ such that

${\displaystyle \langle Tu,v\rangle =\langle u,T^{*}v\rangle }$

where the notation ${\displaystyle \langle \cdot ,\cdot \rangle }$ is used for the scalar product or inner product. This definition therefore depends on the definition of the scalar product.

### Formal adjoint in one variable

In the space of square-integrable functions on a real interval (a, b), the scalar product is defined by

${\displaystyle \langle f,g\rangle =\int _{a}^{b}{\overline {f(x)}}\,g(x)\,dx,}$

where the line over f(x) denotes the complex conjugate of f(x). If one moreover adds the condition that f or g vanishes as ${\displaystyle x\to a}$ and ${\displaystyle x\to b}$, one can also define the adjoint of T by

${\displaystyle T^{*}u=\sum _{k=0}^{n}(-1)^{k}D^{k}\left[{\overline {a_{k}(x)}}u\right].}$

This formula does not explicitly depend on the definition of the scalar product. It is therefore sometimes chosen as a definition of the adjoint operator. When ${\displaystyle T^{*}}$ is defined according to this formula, it is called the formal adjoint of T.
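The defining identity ${\displaystyle \langle Tu,v\rangle =\langle u,T^{*}v\rangle }$ can be checked symbolically for a first-order operator with real coefficients, using test functions that vanish at both endpoints so the boundary terms from integration by parts disappear. The coefficients and test functions below are hypothetical choices:

```python
import sympy as sp

x = sp.symbols('x', real=True)
a0, a1 = x, x**2 + 1                              # hypothetical real coefficients

T      = lambda u: a1 * sp.diff(u, x) + a0 * u    # T u = a1 u' + a0 u
T_star = lambda v: -sp.diff(a1 * v, x) + a0 * v   # sum_k (-1)^k D^k (a_k v)

u = x * (1 - x)          # vanishes at both endpoints of (0, 1)
v = x**2 * (1 - x)

lhs = sp.integrate(T(u) * v, (x, 0, 1))           # <T u, v>
rhs = sp.integrate(u * T_star(v), (x, 0, 1))      # <u, T* v>
assert sp.simplify(lhs - rhs) == 0
```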

A (formally) self-adjoint operator is an operator equal to its own (formal) adjoint.

### Several variables

If Ω is a domain in Rn, and P a differential operator on Ω, then the adjoint of P is defined in L2(Ω) by duality in the analogous manner:

${\displaystyle \langle f,P^{*}g\rangle _{L^{2}(\Omega )}=\langle Pf,g\rangle _{L^{2}(\Omega )}}$

for all smooth L2 functions f, g. Since smooth functions are dense in L2, this defines the adjoint on a dense subset of L2: P* is a densely defined operator.

### Example

The Sturm–Liouville operator is a well-known example of a formally self-adjoint operator. This second-order linear differential operator L can be written in the form

${\displaystyle Lu=-(pu')'+qu=-(pu''+p'u')+qu=-pu''-p'u'+qu=(-p)D^{2}u+(-p')Du+(q)u.\;\!}$

This property can be proven using the formal adjoint definition above.

${\displaystyle {\begin{aligned}L^{*}u&{}=(-1)^{2}D^{2}[(-p)u]+(-1)^{1}D[(-p')u]+(-1)^{0}(qu)\\&{}=-D^{2}(pu)+D(p'u)+qu\\&{}=-(pu)''+(p'u)'+qu\\&{}=-p''u-2p'u'-pu''+p''u+p'u'+qu\\&{}=-p'u'-pu''+qu\\&{}=-(pu')'+qu\\&{}=Lu\end{aligned}}}$

This operator is central to Sturm–Liouville theory where the eigenfunctions (analogues to eigenvectors) of this operator are considered.
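The chain of equalities above can be replayed symbolically: for arbitrary smooth real p, q, and u, the formal adjoint expression reduces to ${\displaystyle Lu}$. A sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', real=True)
p, q, u = (sp.Function(n)(x) for n in ('p', 'q', 'u'))

# L u = -(p u')' + q u
Lu = -sp.diff(p * sp.diff(u, x), x) + q * u

# formal adjoint, term by term: L* u = -D^2(p u) + D(p' u) + q u
L_star_u = -sp.diff(p * u, x, 2) + sp.diff(sp.diff(p, x) * u, x) + q * u

assert sp.simplify(sp.expand(Lu - L_star_u)) == 0   # L is formally self-adjoint
```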

## Properties of differential operators

Differentiation is linear, i.e.

${\displaystyle D(f+g)=(Df)+(Dg),}$
${\displaystyle D(af)=a(Df),}$

where f and g are functions, and a is a constant.

Any polynomial in D with function coefficients is also a differential operator. We may also compose differential operators by the rule

${\displaystyle (D_{1}\circ D_{2})(f)=D_{1}(D_{2}(f)).}$

Some care is then required: firstly, any function coefficients in the operator D2 must be differentiable as many times as the application of D1 requires. To get a ring of such operators, we must assume derivatives of all orders of the coefficients used. Secondly, this ring will not be commutative: an operator gD is not in general the same as Dg. For example, we have the relation, basic in quantum mechanics:

${\displaystyle Dx-xD=1.}$
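This relation can be verified by applying both sides to an arbitrary smooth function: ${\displaystyle (Dx)f=(xf)'=f+xf'}$ while ${\displaystyle (xD)f=xf'}$, so the difference is ${\displaystyle f}$. In sympy:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

Dx_f = sp.diff(x * f, x)      # (D∘x) f = f + x f'  (product rule)
xD_f = x * sp.diff(f, x)      # (x∘D) f = x f'
assert sp.simplify(Dx_f - xD_f - f) == 0   # (Dx - xD) f = f, i.e. Dx - xD = 1
```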

The subring of operators that are polynomials in D with constant coefficients is, by contrast, commutative. It can be characterised another way: it consists of the translation-invariant operators.

The differential operators also obey the shift theorem.

## Several variables

The same constructions can be carried out with partial derivatives, differentiation with respect to different variables giving rise to operators that commute (see symmetry of second derivatives).

## Ring of polynomial differential operators

### Ring of univariate polynomial differential operators

If R is a ring, let ${\displaystyle R\langle D,X\rangle }$ be the non-commutative polynomial ring over R in the variables D and X, and I the two-sided ideal generated by DX − XD − 1. Then the ring of univariate polynomial differential operators over R is the quotient ring ${\displaystyle R\langle D,X\rangle /I}$. This is a non-commutative simple ring. Every element can be written in a unique way as an R-linear combination of monomials of the form ${\displaystyle X^{a}D^{b}{\text{ mod }}I}$. It supports an analogue of Euclidean division of polynomials.
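The normal form can be computed mechanically. In the minimal sketch below (names and representation are ad hoc choices), an element of the ring over the integers is a dictionary mapping the pair (a, b) to the coefficient of the normal-form monomial ${\displaystyle X^{a}D^{b}}$; multiplication commutes each ${\displaystyle D^{b}}$ past ${\displaystyle X^{e}}$ using the identity ${\textstyle D^{b}X^{e}=\sum _{k}{\binom {b}{k}}{\frac {e!}{(e-k)!}}X^{e-k}D^{b-k}}$, which follows from DX = XD + 1.

```python
from math import comb, perm

def mul(p, q):
    """Multiply two elements given in normal form {(a, b): coeff},
    where (a, b) stands for the monomial X^a D^b."""
    out = {}
    for (a, b), c1 in p.items():
        for (e, d), c2 in q.items():
            # commute D^b past X^e:
            # D^b X^e = sum_k C(b, k) * perm(e, k) * X^(e-k) D^(b-k)
            for k in range(min(b, e) + 1):
                key = (a + e - k, b + d - k)
                out[key] = out.get(key, 0) + c1 * c2 * comb(b, k) * perm(e, k)
    return {m: c for m, c in out.items() if c}

def sub(p, q):
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, 0) - c
    return {m: c for m, c in out.items() if c}

X = {(1, 0): 1}
D = {(0, 1): 1}

print(sub(mul(D, X), mul(X, D)))  # {(0, 0): 1}, the relation DX - XD = 1
```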

Differential modules over ${\displaystyle R[X]}$ (for the standard derivation) can be identified with modules over ${\displaystyle R\langle D,X\rangle /I}$.

### Ring of multivariate polynomial differential operators

If R is a ring, let ${\displaystyle R\langle D_{1},\ldots ,D_{n},X_{1},\ldots ,X_{n}\rangle }$ be the non-commutative polynomial ring over R in the variables ${\displaystyle D_{1},\ldots ,D_{n},X_{1},\ldots ,X_{n}}$, and I the two-sided ideal generated by the elements

${\displaystyle (D_{i}X_{j}-X_{j}D_{i})-\delta _{i,j},\ \ \ D_{i}D_{j}-D_{j}D_{i},\ \ \ X_{i}X_{j}-X_{j}X_{i}}$

for all ${\displaystyle 1\leq i,j\leq n,}$ where ${\displaystyle \delta }$ is Kronecker delta. Then the ring of multivariate polynomial differential operators over R is the quotient ring ${\displaystyle R\langle D_{1},\ldots ,D_{n},X_{1},\ldots ,X_{n}\rangle /I}$.

This is a non-commutative simple ring. Every element can be written in a unique way as an R-linear combination of monomials of the form ${\displaystyle X_{1}^{a_{1}}\ldots X_{n}^{a_{n}}D_{1}^{b_{1}}\ldots D_{n}^{b_{n}}}$.

## Coordinate-independent description

In differential geometry and algebraic geometry it is often convenient to have a coordinate-independent description of differential operators between two vector bundles. Let E and F be two vector bundles over a differentiable manifold M. An R-linear mapping of sections P : Γ(E) → Γ(F) is said to be a kth-order linear differential operator if it factors through the jet bundle Jk(E). In other words, there exists a linear mapping of vector bundles

${\displaystyle i_{P}:J^{k}(E)\rightarrow F\,}$

such that

${\displaystyle P=i_{P}\circ j^{k}}$

where jk : Γ(E) → Γ(Jk(E)) is the prolongation that associates to any section of E its k-jet.

This just means that for a given section s of E, the value of P(s) at a point x ∈ M is fully determined by the kth-order infinitesimal behavior of s at x. In particular this implies that P(s)(x) is determined by the germ of s at x, which is expressed by saying that differential operators are local. A foundational result is the Peetre theorem showing that the converse is also true: any (linear) local operator is differential.

### Relation to commutative algebra

An equivalent, but purely algebraic, description of linear differential operators is as follows: an R-linear map P is a kth-order linear differential operator if for any k + 1 smooth functions ${\displaystyle f_{0},\ldots ,f_{k}\in C^{\infty }(M)}$ we have

${\displaystyle [f_{k},[f_{k-1},[\cdots [f_{0},P]\cdots ]]]=0.}$

Here the bracket ${\displaystyle [f,P]:\Gamma (E)\rightarrow \Gamma (F)}$ is defined as the commutator

${\displaystyle [f,P](s)=P(f\cdot s)-f\cdot P(s).\,}$

This characterization of linear differential operators shows that they are particular mappings between modules over a commutative algebra, allowing the concept to be seen as a part of commutative algebra.
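The bracket criterion is concrete enough to test symbolically. For the second-order operator ${\displaystyle P=D^{2}}$ on smooth functions of one variable, each bracket with a multiplication operator lowers the order by one, so the triple bracket vanishes for arbitrary ${\displaystyle f_{0},f_{1},f_{2}}$. A sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
f0, f1, f2, s = (sp.Function(n)(x) for n in ('f0', 'f1', 'f2', 's'))

P = lambda u: sp.diff(u, x, 2)           # the second-order operator D^2

def bracket(f, Op):
    """Commutator [f, Op](s) = Op(f*s) - f*Op(s)."""
    return lambda u: Op(f * u) - f * Op(u)

# [f2, [f1, [f0, P]]] applied to an arbitrary smooth function s
B = bracket(f2, bracket(f1, bracket(f0, P)))
assert sp.simplify(sp.expand(B(s))) == 0
```

Indeed, ${\displaystyle [f_{0},P]}$ is first-order, ${\displaystyle [f_{1},[f_{0},P]]}$ is the multiplication operator ${\displaystyle 2f_{0}'f_{1}'}$, and its bracket with ${\displaystyle f_{2}}$ is zero.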

## Examples

In complex analysis, writing ${\displaystyle z=x+iy}$ gives rise to the Wirtinger derivatives, defined by

${\displaystyle {\frac {\partial }{\partial z}}={\frac {1}{2}}\left({\frac {\partial }{\partial x}}-i{\frac {\partial }{\partial y}}\right)\ ,\quad {\frac {\partial }{\partial {\bar {z}}}}={\frac {1}{2}}\left({\frac {\partial }{\partial x}}+i{\frac {\partial }{\partial y}}\right)\ .}$
This approach is also used to study functions of several complex variables and functions of a motor variable.
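For a holomorphic function, the ${\displaystyle \partial /\partial {\bar {z}}}$ operator above gives zero (the Cauchy–Riemann equations), while ${\displaystyle \partial /\partial z}$ recovers the complex derivative. A sympy check for ${\displaystyle f(z)=z^{2}}$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = (x + sp.I * y)**2          # z^2 written in terms of x and y

d_z    = (sp.diff(f, x) - sp.I * sp.diff(f, y)) / 2
d_zbar = (sp.diff(f, x) + sp.I * sp.diff(f, y)) / 2

assert sp.simplify(d_zbar) == 0                      # f is holomorphic
assert sp.expand(d_z - 2 * (x + sp.I * y)) == 0      # d/dz z^2 = 2z
```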

## History

The conceptual step of writing a differential operator as something free-standing is attributed to Louis François Antoine Arbogast in 1800. [2]


## References

1. E. W. Weisstein, "Theta Operator". Retrieved 2009-06-12.
2. James Gasser (ed.), A Boole Anthology: Recent and Classical Studies in the Logic of George Boole (2000), p. 169.