In the theory of partial differential equations, elliptic operators are differential operators that generalize the Laplace operator. They are defined by the condition that the coefficients of the highest-order derivatives be positive, which implies the key property that the principal symbol is invertible, or equivalently that there are no real characteristic directions.
Elliptic operators are typical of potential theory, and they appear frequently in electrostatics and continuum mechanics. Elliptic regularity implies that their solutions tend to be smooth functions (if the coefficients in the operator are smooth). Steady-state solutions to hyperbolic and parabolic equations generally solve elliptic equations.
Let $L$ be a linear differential operator of order $m$ on a domain $\Omega$ in $\mathbb{R}^n$ given by $$Lu = \sum_{|\alpha| \le m} a_\alpha(x)\,\partial^\alpha u,$$ where $\alpha = (\alpha_1, \dots, \alpha_n)$ denotes a multi-index and $\partial^\alpha u$ denotes the partial derivative of order $|\alpha|$ in $x$.
Then $L$ is called elliptic if for every $x$ in $\Omega$ and every non-zero $\xi$ in $\mathbb{R}^n$, $$\sum_{|\alpha| = m} a_\alpha(x)\,\xi^\alpha \neq 0,$$ where $\xi^\alpha = \xi_1^{\alpha_1} \cdots \xi_n^{\alpha_n}$.
In many applications, this condition is not strong enough, and instead a uniform ellipticity condition may be imposed for operators of order $m = 2k$: $$(-1)^k \sum_{|\alpha| = 2k} a_\alpha(x)\,\xi^\alpha > C\,|\xi|^{2k},$$ where $C$ is a positive constant. Note that ellipticity only depends on the highest-order terms.[1]
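To make the symbol condition concrete, the sketch below (the helper name `principal_symbol` is ours, for illustration) evaluates $\sum_{|\alpha|=m} a_\alpha\,\xi^\alpha$ for the Laplacian on $\mathbb{R}^2$, which is elliptic, and for the wave operator $\partial_1^2 - \partial_2^2$, which has real characteristic directions:

```python
import numpy as np

def principal_symbol(coeffs, xi):
    # Evaluate sum_alpha a_alpha * xi^alpha over the given multi-indices.
    # coeffs maps a multi-index tuple alpha to the coefficient a_alpha;
    # only the highest-order terms should be passed in.
    xi = np.asarray(xi, dtype=float)
    return sum(a * np.prod(xi ** np.asarray(alpha))
               for alpha, a in coeffs.items())

# Laplacian on R^2: symbol xi_1^2 + xi_2^2, nonzero for xi != 0 -> elliptic.
laplacian = {(2, 0): 1.0, (0, 2): 1.0}
# Wave operator d_1^2 - d_2^2: symbol vanishes on xi_1 = +/- xi_2 -> not elliptic.
wave = {(2, 0): 1.0, (0, 2): -1.0}

rng = np.random.default_rng(0)
for xi in rng.normal(size=(1000, 2)):
    assert principal_symbol(laplacian, xi) > 0   # sign-definite highest-order part
print(principal_symbol(wave, (1.0, 1.0)))        # 0.0: a real characteristic direction
```

The random sampling only probes the sign condition; ellipticity of the Laplacian of course holds for every non-zero $\xi$ since its symbol is $|\xi|^2$.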
A nonlinear operator is elliptic if its linearization is, i.e. if the first-order Taylor expansion with respect to $u$ and its derivatives about any point is a linear elliptic operator.
Let L be an elliptic operator of order 2k with coefficients having 2k continuous derivatives. The Dirichlet problem for L is to find a function u, given a function f and some appropriate boundary values, such that Lu = f and such that u has the appropriate boundary values and normal derivatives. The existence theory for elliptic operators, using Gårding's inequality, the Lax–Milgram lemma, and the Fredholm alternative, gives sufficient conditions for a weak solution u to exist in the Sobolev space $H^k$.
For example, for a second-order elliptic operator as in Example 2 (a divergence-form operator $Lu = -\sum_{i,j}\partial_j(a^{ij}\,\partial_i u) + \sum_i b^i\,\partial_i u + cu$), there is a number $\gamma > 0$ such that for each $\mu > \gamma$ and each $f \in L^2(U)$, the boundary-value problem $Lu + \mu u = f$ in $U$, $u = 0$ on $\partial U$, has a unique weak solution $u \in H_0^1(U)$.
This situation is ultimately unsatisfactory, as the weak solution u might not have enough derivatives for the expression Lu to be well-defined in the classical sense.
The elliptic regularity theorem guarantees that, provided f is square-integrable, u will in fact have 2k square-integrable weak derivatives. In particular, if f is infinitely differentiable, then so is u.
For L as in Example 2, if $f \in H^m(U)$ and the coefficients are sufficiently smooth, the weak solution $u$ lies in $H^{m+2}_{\mathrm{loc}}(U)$.
Any differential operator exhibiting this property is called a hypoelliptic operator; thus, every elliptic operator is hypoelliptic. The property also means that every fundamental solution of an elliptic operator is infinitely differentiable in any neighborhood not containing 0.
As an application, suppose a function $f$ satisfies the Cauchy–Riemann equations. Since the Cauchy–Riemann equations form an elliptic operator, it follows that $f$ is smooth.
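The smoothness claim rests on ellipticity of the Cauchy–Riemann system $(u, v) \mapsto (u_x - v_y,\ u_y + v_x)$: its principal symbol is the matrix $\begin{pmatrix}\xi_1 & -\xi_2\\ \xi_2 & \xi_1\end{pmatrix}$, whose determinant $\xi_1^2 + \xi_2^2$ is positive for $\xi \neq 0$. A minimal numerical check (the helper name is ours):

```python
import numpy as np

def cr_symbol(xi1, xi2):
    # Principal symbol of the Cauchy-Riemann operator
    # (u, v) -> (u_x - v_y, u_y + v_x), obtained by substituting
    # d/dx -> xi1 and d/dy -> xi2 in the first-order terms.
    return np.array([[xi1, -xi2],
                     [xi2,  xi1]])

# det = xi1^2 + xi2^2 > 0 for (xi1, xi2) != 0, so the symbol is invertible:
# the operator is (weakly) elliptic and its solutions are smooth.
rng = np.random.default_rng(1)
for xi in rng.normal(size=(100, 2)):
    d = np.linalg.det(cr_symbol(*xi))
    assert np.isclose(d, xi[0]**2 + xi[1]**2) and d > 0
```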
For L as in Example 2 on U, an open domain with $C^1$ boundary, there is a number $\gamma > 0$ such that for each $\mu > \gamma$, the operator $L + \mu$ satisfies the assumptions of the Lax–Milgram lemma.
Let $L$ be a (possibly nonlinear) differential operator between vector bundles of any rank. Take its principal symbol $\sigma_\xi(L)$ with respect to a one-form $\xi$. (Basically, what we are doing is replacing the highest-order covariant derivatives $\nabla$ by vector fields $\xi$.)
We say $L$ is weakly elliptic if $\sigma_\xi(L)$ is a linear isomorphism for every non-zero $\xi$.
We say $L$ is (uniformly) strongly elliptic if for some constant $c > 0$, $$\left([\sigma_\xi(L)](v), v\right) \ge c\,\|v\|^2$$ for all unit one-forms $\xi$ and all $v$.
The definition of ellipticity in the previous part of the article is strong ellipticity. Here $(\cdot,\cdot)$ is an inner product. Notice that the $\xi$ are covector fields or one-forms, but the $v$ are elements of the vector bundle upon which $L$ acts.
The quintessential example of a (strongly) elliptic operator is the Laplacian (or its negative, depending upon convention). It is not hard to see that $L$ needs to be of even order for strong ellipticity to even be an option: for odd order, plugging in both $\xi$ and its negative flips the sign of the symbol. On the other hand, a weakly elliptic first-order operator, such as the Dirac operator, can square to become a strongly elliptic operator, such as the Laplacian. The composition of weakly elliptic operators is weakly elliptic.
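The Dirac-squares-to-Laplacian claim can be checked at the level of symbols. In two Euclidean dimensions the Dirac operator's symbol is $\xi_1\sigma_1 + \xi_2\sigma_2$ with Pauli matrices $\sigma_i$; squaring it yields $|\xi|^2 I$, the strongly elliptic symbol of the Laplacian. A sketch, assuming this standard representation:

```python
import numpy as np

# Pauli matrices: building blocks of the 2D Euclidean Dirac operator's symbol.
sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, -1j], [1j, 0]], dtype=complex)

def dirac_symbol(xi):
    # sigma_xi(D) = xi_1*sigma1 + xi_2*sigma2: invertible for xi != 0
    # (weak ellipticity), but odd in xi, so never strongly elliptic.
    return xi[0] * sigma1 + xi[1] * sigma2

xi = np.array([0.6, 0.8])                 # a unit covector
S = dirac_symbol(xi)
# Squaring the symbol gives |xi|^2 * I, the symbol of the Laplacian:
assert np.allclose(S @ S, np.eye(2))
# Odd order: plugging in -xi flips the sign, ruling out strong ellipticity.
assert np.allclose(dirac_symbol(-xi), -S)
```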
Weak ellipticity is nevertheless strong enough for the Fredholm alternative, Schauder estimates, and the Atiyah–Singer index theorem. On the other hand, strong ellipticity is needed for the maximum principle and to guarantee that the eigenvalues are discrete, with infinity as their only limit point.
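The discreteness of the spectrum can be illustrated for the simplest strongly elliptic operator, $-d^2/dx^2$ on $(0,\pi)$ with Dirichlet conditions, whose eigenvalues are $1, 4, 9, \dots$ (a standard fact, checked here via a finite-difference approximation):

```python
import numpy as np

# -u'' on (0, pi) with u(0) = u(pi) = 0 has eigenvalues 1, 4, 9, ...:
# discrete, and accumulating only at infinity.
n = 500
h = np.pi / (n + 1)
# Standard centered-difference matrix for -d^2/dx^2 on the interior grid.
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
eigs = np.linalg.eigvalsh(A)            # returned in ascending order
print(eigs[:4])                         # close to [1, 4, 9, 16]
```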
In mathematical analysis, the Dirac delta function, also known as the unit impulse, is a generalized function on the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one. Thus it can be represented heuristically as $$\delta(x) = \begin{cases} +\infty, & x = 0, \\ 0, & x \neq 0, \end{cases}$$ together with the normalization $\int_{-\infty}^{\infty} \delta(x)\,dx = 1$.
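A numerical illustration of the heuristic (the setup is ours): narrow Gaussians $\delta_\varepsilon$ keep total integral 1 while concentrating at 0, so $\int \delta_\varepsilon f \to f(0)$, the sifting property:

```python
import numpy as np

def delta_eps(x, eps):
    # Normalized Gaussian of width eps: integrates to 1 for every eps.
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-5.0, 5.0, 200001)
dx = x[1] - x[0]
for eps in (0.5, 0.1, 0.02):
    mass = np.sum(delta_eps(x, eps)) * dx                 # stays near 1
    sift = np.sum(delta_eps(x, eps) * np.cos(x)) * dx     # -> cos(0) = 1
    print(eps, mass, sift)
```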
In mathematics, the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space. It is usually denoted by the symbols $\nabla\cdot\nabla$, $\nabla^2$ (where $\nabla$ is the nabla operator), or $\Delta$. In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable. In other coordinate systems, such as cylindrical and spherical coordinates, the Laplacian also has a useful form. Informally, the Laplacian $\Delta f(p)$ of a function $f$ at a point $p$ measures by how much the average value of $f$ over small spheres or balls centered at $p$ deviates from $f(p)$.
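The mean-value description can be tested directly: in $\mathbb{R}^n$, the average of $f$ over a small sphere of radius $r$ about $p$ exceeds $f(p)$ by roughly $\frac{r^2}{2n}\Delta f(p)$. A two-dimensional sketch (the helper name is ours, for illustration):

```python
import numpy as np

def sphere_mean_laplacian(f, p, r=1e-3, m=4096):
    # 2D (n = 2) estimate of (Delta f)(p) from the circle average:
    # Delta f ~ (2n / r^2) * (mean_{|x-p|=r} f - f(p)).
    t = np.linspace(0, 2 * np.pi, m, endpoint=False)
    pts = p + r * np.stack([np.cos(t), np.sin(t)], axis=1)
    return (2 * 2 / r**2) * (np.mean(f(pts[:, 0], pts[:, 1])) - f(*p))

f = lambda x, y: x**2 + 3 * y**2          # Delta f = 2 + 6 = 8 everywhere
print(sphere_mean_laplacian(f, np.array([0.7, -1.2])))  # approx 8
g = lambda x, y: np.exp(x) * np.sin(y)    # harmonic: Delta g = 0
print(sphere_mean_laplacian(g, np.array([0.3, 0.4])))   # approx 0
```

The harmonic example also exhibits the mean value property: the circle average equals the center value, so the estimate vanishes.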
In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function.
In mathematics, integral equations are equations in which an unknown function appears under an integral sign. In mathematical notation, integral equations may thus be expressed as being of the form $$f\!\left(x_1, \dots, x_n;\; u(x_1, \dots, x_n);\; I^1(u), \dots, I^m(u)\right) = 0,$$ where $I^i(u)$ is an integral operator acting on u. Hence, integral equations may be viewed as the analog to differential equations where instead of the equation involving derivatives, the equation contains integrals. A direct comparison can be seen between the general integral equation above and the general form of a differential equation, which may be expressed as $$f\!\left(x_1, \dots, x_n;\; u(x_1, \dots, x_n);\; D^1(u), \dots, D^m(u)\right) = 0,$$ where $D^i(u)$ may be viewed as a differential operator of order $i$. Due to this close connection between differential and integral equations, one can often convert between the two. For example, one method of solving a boundary value problem is by converting the differential equation with its boundary conditions into an integral equation and solving the integral equation. In addition, because one can convert between the two, differential equations in physics such as Maxwell's equations often have an analog integral and differential form. See also, for example, Green's function and Fredholm theory.
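As a concrete sketch of solving an integral equation directly (a standard Nyström discretization; the particular setup is ours, not from the text), take the Fredholm equation $u(x) = x + \lambda\int_0^1 x s\,u(s)\,ds$, whose exact solution is $u(x) = 3x/(3-\lambda)$:

```python
import numpy as np

# Nystrom method: replace the integral by Gauss-Legendre quadrature and
# solve the resulting linear system (I - lam * K * W) u = f at the nodes.
n, lam = 200, 1.0
x, w = np.polynomial.legendre.leggauss(n)      # nodes/weights on [-1, 1]
x, w = 0.5 * (x + 1), 0.5 * w                  # rescale to [0, 1]
K = np.outer(x, x)                             # separable kernel K(x, s) = x*s
A = np.eye(n) - lam * K * w                    # w broadcasts over columns
u = np.linalg.solve(A, x)                      # right-hand side f(x) = x
print(np.max(np.abs(u - 3 * x / (3 - lam))))   # essentially zero
```

Because the integrand is polynomial, Gauss quadrature is exact here and the discrete solution matches the closed form to rounding error.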
In multivariable calculus, the directional derivative measures the rate at which a function changes in a particular direction at a given point.
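A minimal check of the definition (function and point chosen arbitrarily): the directional derivative along a unit vector $v$ equals $\nabla f \cdot v$, and is approximated by a one-sided difference quotient:

```python
import numpy as np

f = lambda x, y: x**2 * y + np.sin(y)                  # a smooth test function
grad_f = lambda x, y: np.array([2 * x * y, x**2 + np.cos(y)])

p = np.array([1.0, 2.0])
v = np.array([3.0, 4.0]) / 5.0                         # unit direction
h = 1e-6
numeric = (f(*(p + h * v)) - f(*p)) / h                # difference quotient
exact = grad_f(*p) @ v                                 # grad f . v
print(numeric, exact)                                  # the two agree closely
```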
In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of Lp-norms of the function together with its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, i.e. a Banach space. Intuitively, a Sobolev space is a space of functions possessing sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.
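For instance, the $H^1$ norm on $(0,\pi)$ is $\|u\|_{H^1}^2 = \int u^2 + \int (u')^2$; for $u = \sin$ both integrals equal $\pi/2$, so $\|u\|_{H^1}^2 = \pi$. A quick finite-difference check:

```python
import numpy as np

# H^1 norm of u(x) = sin(x) on (0, pi): should equal pi.
n = 100001
x = np.linspace(0, np.pi, n)
u = np.sin(x)
du = np.gradient(u, x)                  # finite-difference derivative
dx = x[1] - x[0]
h1_norm_sq = np.sum(u**2) * dx + np.sum(du**2) * dx
print(h1_norm_sq)                       # approx pi
```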
In physics, the Hamilton–Jacobi equation, named after William Rowan Hamilton and Carl Gustav Jacob Jacobi, is an alternative formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics.
In mathematical analysis a pseudo-differential operator is an extension of the concept of differential operator. Pseudo-differential operators are used extensively in the theory of partial differential equations and quantum field theory, e.g. in mathematical models that include ultrametric pseudo-differential equations in a non-Archimedean space.
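A minimal sketch of the idea: a pseudo-differential operator multiplies the Fourier transform by a symbol $p(\xi)$. The symbol $i\xi$ reproduces $d/dx$, while the non-polynomial symbol $|\xi|$ gives $\sqrt{-d^2/dx^2}$, which is not a differential operator (periodic grid chosen for illustration):

```python
import numpy as np

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
xi = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi   # integer frequencies

def apply_symbol(u, p):
    # Op(p)u = inverse FFT of p(xi) * FFT(u): a Fourier-multiplier operator.
    return np.real(np.fft.ifft(p(xi) * np.fft.fft(u)))

u = np.sin(3 * x)
du = apply_symbol(u, lambda s: 1j * s)       # symbol i*xi  ->  d/dx
sqrt_lap = apply_symbol(u, np.abs)           # symbol |xi|  ->  sqrt(-d^2/dx^2)
assert np.allclose(du, 3 * np.cos(3 * x))
assert np.allclose(sqrt_lap, 3 * np.sin(3 * x))
```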
In mathematics, the interior product is a degree −1 (anti)derivation on the exterior algebra of differential forms on a smooth manifold. The interior product, named in opposition to the exterior product, should not be confused with an inner product. The interior product $\iota_X \omega$ is sometimes written as $X \mathbin{\lrcorner} \omega$.
In mathematics, in particular in algebraic geometry and differential geometry, Dolbeault cohomology (named after Pierre Dolbeault) is an analog of de Rham cohomology for complex manifolds. Let M be a complex manifold. Then the Dolbeault cohomology groups depend on a pair of integers p and q and are realized as a subquotient of the space of complex differential forms of degree (p,q).
In mathematics, and more specifically in partial differential equations, Duhamel's principle is a general method for obtaining solutions to inhomogeneous linear evolution equations like the heat equation, wave equation, and vibrating plate equation. It is named after Jean-Marie Duhamel who first applied the principle to the inhomogeneous heat equation that models, for instance, the distribution of heat in a thin plate which is heated from beneath. For linear evolution equations without spatial dependency, such as a harmonic oscillator, Duhamel's principle reduces to the method of variation of parameters technique for solving linear inhomogeneous ordinary differential equations. It is also an indispensable tool in the study of nonlinear partial differential equations such as the Navier–Stokes equations and nonlinear Schrödinger equation where one treats the nonlinearity as an inhomogeneity.
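For the scalar model problem $u' = a u + f(t)$, $u(0) = 0$, Duhamel's principle gives $u(t) = \int_0^t e^{a(t-s)} f(s)\,ds$; with $a = -1$ and $f \equiv 1$ the exact solution is $1 - e^{-t}$. A quick quadrature check of this instance (our choice of parameters):

```python
import numpy as np

# Duhamel integral: superpose the homogeneous propagator over the source.
a = -1.0
f = lambda s: np.ones_like(s)            # constant source term
t = 2.0
s = np.linspace(0.0, t, 200001)
integrand = np.exp(a * (t - s)) * f(s)   # e^{a(t-s)} f(s)
u = float(np.sum(0.5 * (integrand[1:] + integrand[:-1])) * (s[1] - s[0]))
print(u, 1.0 - np.exp(-t))               # Duhamel integral vs exact solution
```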
In numerical analysis, finite-difference methods (FDM) are a class of numerical techniques for solving differential equations by approximating derivatives with finite differences. Both the spatial domain and time domain are discretized, or broken into a finite number of intervals, and the values of the solution at the end points of the intervals are approximated by solving algebraic equations containing finite differences and values from nearby points.
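A minimal worked instance (parameters chosen for illustration): the centered difference $(u_{i-1} - 2u_i + u_{i+1})/h^2$ replaces $u''$ in the two-point boundary-value problem $-u'' = \sin x$, $u(0) = u(\pi) = 0$, whose exact solution is $u = \sin x$:

```python
import numpy as np

n = 200                                  # interior grid points
h = np.pi / (n + 1)
x = np.linspace(h, np.pi - h, n)
# Tridiagonal matrix for -u'' with the centered second difference.
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
u = np.linalg.solve(A, np.sin(x))
print(np.max(np.abs(u - np.sin(x))))     # O(h^2) error, about 2e-5 here
```

Halving $h$ should cut the error by roughly a factor of four, the signature of a second-order scheme.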
In mathematics, Gårding's inequality is a result that gives a lower bound for the bilinear form induced by a real linear elliptic partial differential operator. The inequality is named after Lars Gårding.
In Riemannian geometry and pseudo-Riemannian geometry, the Gauss–Codazzi equations are fundamental formulas that link together the induced metric and second fundamental form of a submanifold of a Riemannian or pseudo-Riemannian manifold.
In mathematics, the spectral theory of ordinary differential equations is the part of spectral theory concerned with the determination of the spectrum and eigenfunction expansion associated with a linear ordinary differential equation. In his dissertation, Hermann Weyl generalized the classical Sturm–Liouville theory on a finite closed interval to second order differential operators with singularities at the endpoints of the interval, possibly semi-infinite or infinite. Unlike the classical case, the spectrum may no longer consist of just a countable set of eigenvalues, but may also contain a continuous part. In this case the eigenfunction expansion involves an integral over the continuous part with respect to a spectral measure, given by the Titchmarsh–Kodaira formula. The theory was put in its final simplified form for singular differential equations of even degree by Kodaira and others, using von Neumann's spectral theorem. It has had important applications in quantum mechanics, operator theory and harmonic analysis on semisimple Lie groups.
In mathematics, the FBI transform or Fourier–Bros–Iagolnitzer transform is a generalization of the Fourier transform developed by the French mathematical physicists Jacques Bros and Daniel Iagolnitzer in order to characterise the local analyticity of functions on Rn. The transform provides an alternative approach to analytic wave front sets of distributions, developed independently by the Japanese mathematicians Mikio Sato, Masaki Kashiwara and Takahiro Kawai in their approach to microlocal analysis. It can also be used to prove the analyticity of solutions of analytic elliptic partial differential equations as well as a version of the classical uniqueness theorem, strengthening the Cauchy–Kowalevski theorem, due to the Swedish mathematician Erik Albert Holmgren (1873–1943).
In mathematics, and more precisely in functional analysis and PDEs, the Schauder estimates are a collection of results due to Juliusz Schauder concerning the regularity of solutions to linear, uniformly elliptic partial differential equations. The estimates say that when the equation has appropriately smooth terms and appropriately smooth solutions, then the Hölder norm of the solution can be controlled in terms of the Hölder norms of the coefficient and source terms. Since these estimates assume by hypothesis the existence of a solution, they are called a priori estimates.
In the theory of partial differential equations, Holmgren's uniqueness theorem, or simply Holmgren's theorem, named after the Swedish mathematician Erik Albert Holmgren (1873–1943), is a uniqueness result for linear partial differential equations with real analytic coefficients.
Vasiliev equations are formally consistent, gauge-invariant, nonlinear equations whose linearization over a specific vacuum solution describes free massless higher-spin fields on anti-de Sitter space. The Vasiliev equations are classical equations, and no Lagrangian is known that starts from the canonical two-derivative Fronsdal Lagrangian and is completed by interaction terms. There are a number of variations of the Vasiliev equations that work in three, four, and an arbitrary number of space-time dimensions. Vasiliev's equations admit supersymmetric extensions with any number of supersymmetries and allow for Yang–Mills gaugings. Vasiliev's equations are background independent, the simplest exact solution being anti-de Sitter space. Note, however, that locality is not properly implemented: the equations give a solution of a certain formal deformation procedure, which is difficult to map to field-theory language. The higher-spin AdS/CFT correspondence is reviewed in the Higher-spin theory article.
The Fuchsian theory of linear differential equations, which is named after Lazarus Immanuel Fuchs, provides a characterization of various types of singularities and the relations among them.