In mathematics, and specifically the field of partial differential equations (PDEs), a parametrix is an approximation to a fundamental solution of a PDE, and is essentially an approximate inverse to a differential operator.
A parametrix for a differential operator is often easier to construct than a fundamental solution, and for many purposes is almost as good. It is sometimes possible to construct a fundamental solution from a parametrix by iteratively improving it.
It is useful to review what a fundamental solution for a differential operator P(D) with constant coefficients is: it is a distribution u on ℝⁿ such that

$$P(D)\,u(x) = \delta(x)$$

in the weak sense, where δ is the Dirac delta distribution.
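For example, in the classical case of the Laplace operator Δ on ℝ³, one fundamental solution is

$$u(x) = -\frac{1}{4\pi\,|x|}, \qquad \Delta u = \delta$$

in the sense of distributions.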
In a similar way, a parametrix for a variable coefficient differential operator P(x,D) is a distribution u such that

$$P(x,D)\,u(x) = \delta(x) + \omega(x)$$

where ω is some C∞ function with compact support.
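As a simple illustration of the idea, a parametrix for the Laplacian on ℝ³ can be obtained by cutting off its fundamental solution: if χ is a smooth, compactly supported function equal to 1 near the origin, then

$$u(x) = -\frac{\chi(x)}{4\pi\,|x|}$$

satisfies Δu = δ + ω with ω smooth and compactly supported, since the error terms involve derivatives of χ, which vanish near the singularity.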
The parametrix is a useful concept in the study of elliptic differential operators and, more generally, of hypoelliptic pseudodifferential operators with variable coefficients, since for such operators over appropriate domains a parametrix can be shown to exist, can be constructed fairly easily[1] and is a smooth function away from the origin.[2]
Once the analytic expression of the parametrix has been found, it is possible to compute the solution of the associated, fairly general, elliptic partial differential equation by solving an associated Fredholm integral equation; moreover, the structure of the parametrix itself reveals properties of the solution of the problem without even calculating it, such as its smoothness[3] and other qualitative properties.
More generally, if L is any pseudodifferential operator of order p, then another pseudodifferential operator L+ of order –p is called a parametrix for L if the operators

$$L\,L^{+} - I, \qquad L^{+}L - I$$

are both pseudodifferential operators of negative order. The operators L and L+ will admit continuous extensions to maps between the Sobolev spaces $H^s$ and $H^{s-p}$.
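As a sketch of how such a parametrix is built in the elliptic case, write a(x,ξ) for the symbol of L; one takes L+ to be the pseudodifferential operator whose symbol is, roughly, the reciprocal of a for large |ξ|,

$$\sigma_{L^{+}}(x,\xi) = \frac{\chi(\xi)}{a(x,\xi)},$$

where χ is a cutoff vanishing on the region where a may vanish. The symbol calculus then shows that LL+ − I and L+L − I have order −1, and iterating the construction lowers the order of the error as far as desired.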
On a compact manifold, the differences above are compact operators. In this case the original operator L defines a Fredholm operator between the Sobolev spaces.[4]
An explicit construction of a parametrix for second order partial differential operators based on power series developments was discovered by Jacques Hadamard. It can be applied to the Laplace operator, the wave equation and the heat equation.
In the case of the heat equation or the wave equation, where there is a distinguished time parameter t, Hadamard's method consists in taking the fundamental solution of the constant coefficient differential operator obtained by freezing the coefficients at a fixed point, and seeking a general solution as the product of this solution, as the point varies, with a formal power series in t. The constant term is 1 and the higher coefficients are functions determined recursively as integrals in a single variable.
In general, the power series will not converge but will provide only an asymptotic expansion of the exact solution. A suitable truncation of the power series then yields a parametrix.[5][6]
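For the heat equation on an n-dimensional Riemannian manifold, for example, the truncated expansion takes the form

$$K_N(t,x,y) = (4\pi t)^{-n/2}\, e^{-d(x,y)^2/4t}\,\bigl(u_0(x,y) + u_1(x,y)\,t + \cdots + u_N(x,y)\,t^N\bigr),$$

where d(x,y) is the geodesic distance and the coefficients u_j are determined recursively by transport equations along geodesics; for N large enough this is a parametrix for the heat operator.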
A sufficiently good parametrix can often be used to construct an exact fundamental solution by a convergent iterative procedure as follows (Berger, Gauduchon & Mazet 1971).
If L is an element of a ring with multiplication * such that

$$L \ast P = 1 + R$$

for some approximate right inverse P and "sufficiently small" remainder term R then, at least formally,

$$L \ast (P - P \ast R + P \ast R \ast R - P \ast R \ast R \ast R + \cdots) = 1,$$

so if the infinite series makes sense then L has a right inverse

$$P - P \ast R + P \ast R \ast R - P \ast R \ast R \ast R + \cdots.$$
If L is a pseudo-differential operator and P is a parametrix, this gives a right inverse to L, in other words a fundamental solution, provided that R is "small enough", which in practice means that it should be a sufficiently good smoothing operator.
If P and R are represented by functions, then the multiplication * of pseudo-differential operators corresponds to convolution of functions, so the terms of the infinite sum giving the fundamental solution of L involve convolution of P with copies of R.
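A minimal numerical sketch of this iteration, using finite matrices in place of pseudo-differential operators and matrix multiplication in place of *, might look as follows (the particular matrices chosen are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50
L = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # an invertible "operator" close to the identity
P = np.eye(n)                                       # a crude approximate right inverse (the "parametrix")
R = L @ P - np.eye(n)                               # remainder term: L*P = 1 + R, small by construction

# Sum the series P - P*R + P*R*R - ... ; it converges when the norm of R is below 1.
term = P.copy()
series = P.copy()
for _ in range(60):
    term = -term @ R
    series += term

print(np.allclose(L @ series, np.eye(n)))  # True: the truncated series is (numerically) a right inverse of L
```

In the genuinely pseudo-differential setting the role of "R small" is played by R being a sufficiently strong smoothing operator, as described above.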
In mathematics and physics, the heat equation is a certain partial differential equation. Solutions of the heat equation are sometimes known as caloric functions. The theory of the heat equation was first developed by Joseph Fourier in 1822 for the purpose of modeling how a quantity such as heat diffuses through a given region.
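In its simplest form, for a quantity u(x,t) defined on a region of ℝⁿ, the equation reads

$$\frac{\partial u}{\partial t} = \alpha\,\Delta u,$$

where α > 0 is the diffusivity and Δ is the Laplace operator acting in the spatial variables.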
In mathematics, a Green's function is the impulse response of an inhomogeneous linear differential operator defined on a domain with specified initial conditions or boundary conditions.
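Concretely, a Green's function G of an operator L satisfies

$$L\,G(x,s) = \delta(x - s)$$

together with the prescribed boundary conditions, so that a solution of Lu = f can be written, at least formally, as the superposition $u(x) = \int G(x,s)\,f(s)\,ds$.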
In differential geometry, the Atiyah–Singer index theorem, proved by Michael Atiyah and Isadore Singer (1963), states that for an elliptic differential operator on a compact manifold, the analytical index is equal to the topological index. It includes many other theorems, such as the Chern–Gauss–Bonnet theorem and Riemann–Roch theorem, as special cases, and has applications to theoretical physics.
In the theory of partial differential equations, elliptic operators are differential operators that generalize the Laplace operator. They are defined by the condition that the coefficients of the highest-order derivatives be positive, which implies the key property that the principal symbol is invertible, or equivalently that there are no real characteristic directions.
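For instance, for a second-order operator $Lu = \sum_{i,j} a_{ij}(x)\,\partial_i\partial_j u + \text{lower-order terms}$, ellipticity at a point x means that the principal symbol $\sum_{i,j} a_{ij}(x)\,\xi_i\xi_j$ is nonzero for every $\xi \neq 0$; the Laplace operator, with $a_{ij} = \delta_{ij}$, is the model case.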
In mathematical analysis a pseudo-differential operator is an extension of the concept of differential operator. Pseudo-differential operators are used extensively in the theory of partial differential equations and quantum field theory, e.g. in mathematical models that include ultrametric pseudo-differential equations in a non-Archimedean space.
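On ℝⁿ, such an operator can be written, via the Fourier transform, in the form

$$(Pu)(x) = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n} e^{\,i x\cdot\xi}\, a(x,\xi)\,\hat{u}(\xi)\, d\xi,$$

where the symbol a(x,ξ) is allowed to be a far more general function of ξ than the polynomials that correspond to ordinary differential operators.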
In mathematics, a fundamental solution for a linear partial differential operator L is a formulation in the language of distribution theory of the older idea of a Green's function.
In the mathematical field of analysis, the Nash–Moser theorem, discovered by mathematician John Forbes Nash and named for him and Jürgen Moser, is a generalization of the inverse function theorem on Banach spaces to settings when the required solution mapping for the linearized problem is not bounded.
Hilbert's nineteenth problem is one of the 23 Hilbert problems, set out in a list compiled by David Hilbert in 1900. It asks whether the solutions of regular problems in the calculus of variations are always analytic. Hilbert's concept of a "regular variational problem" is precisely a variational problem whose Euler–Lagrange equation is an elliptic partial differential equation with analytic coefficients; informally, then, despite its seemingly technical statement, the nineteenth problem simply asks whether, in this class of partial differential equations, any solution inherits the relatively simple and well understood property of being an analytic function from the equation it satisfies. Hilbert's nineteenth problem was solved independently in the late 1950s by Ennio De Giorgi and John Forbes Nash, Jr.
In mathematics, a mixed boundary condition for a partial differential equation defines a boundary value problem in which the solution of the given equation is required to satisfy different boundary conditions on disjoint parts of the boundary of the domain where the condition is stated. Precisely, in a mixed boundary value problem, the solution is required to satisfy a Dirichlet or a Neumann boundary condition in a mutually exclusive way on disjoint parts of the boundary.
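A typical example is the Laplace equation $\Delta u = 0$ on a domain Ω whose boundary is split into two disjoint pieces Γ₁ and Γ₂, with a Dirichlet condition $u = g$ imposed on Γ₁ and a Neumann condition $\partial u/\partial n = h$ imposed on Γ₂.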
In mathematics, a locally integrable function is a function which is integrable on every compact subset of its domain of definition. The importance of such functions lies in the fact that their function space is similar to Lp spaces, but its members are not required to satisfy any growth restriction on their behavior at the boundary of their domain: in other words, locally integrable functions can grow arbitrarily fast at the domain boundary, but are still manageable in a way similar to ordinary integrable functions.
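In symbols, for an open set Ω ⊆ ℝⁿ,

$$f \in L^1_{\mathrm{loc}}(\Omega) \iff \int_K |f(x)|\,dx < \infty \ \text{ for every compact } K \subset \Omega.$$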
In mathematics, the inverse scattering transform is a method that solves the initial value problem for a nonlinear partial differential equation using mathematical methods related to wave scattering. The direct scattering transform describes how a function scatters waves or generates bound-states. The inverse scattering transform uses wave scattering data to construct the function responsible for wave scattering. The direct and inverse scattering transforms are analogous to the direct and inverse Fourier transforms which are used to solve linear partial differential equations.
In Riemannian geometry, a branch of mathematics, harmonic coordinates are a certain kind of coordinate chart on a smooth manifold, determined by a Riemannian metric on the manifold. They are useful in many problems of geometric analysis due to their regularity properties.
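Explicitly, a chart (x¹, …, xⁿ) is harmonic when each coordinate function satisfies

$$\Delta_g\, x^k = 0, \qquad k = 1, \dots, n,$$

where Δ_g denotes the Laplace–Beltrami operator of the metric g.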
In mathematics, Weyl's lemma, named after Hermann Weyl, states that every weak solution of Laplace's equation is a smooth solution. This contrasts with the wave equation, for example, which has weak solutions that are not smooth solutions. Weyl's lemma is a special case of elliptic or hypoelliptic regularity.
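In its basic form the lemma states that if a locally integrable function u satisfies

$$\int u(x)\,\Delta\varphi(x)\,dx = 0 \quad \text{for every } \varphi \in C_c^{\infty},$$

then u agrees almost everywhere with a smooth harmonic function.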
In mathematical analysis, Fourier integral operators have become an important tool in the theory of partial differential equations. The class of Fourier integral operators contains differential operators as well as classical integral operators as special cases.
In mathematics, specifically in differential geometry, isothermal coordinates on a Riemannian manifold are local coordinates where the metric is conformal to the Euclidean metric. This means that in isothermal coordinates, the Riemannian metric locally has the form

$$g = \varphi\,(dx_1^2 + \cdots + dx_n^2),$$

where φ is a smooth positive function.
Eugenio Elia Levi was an Italian mathematician, known for his fundamental contributions in group theory, in the theory of partial differential operators and in the theory of functions of several complex variables. He was a younger brother of Beppo Levi and was killed in action during the First World War.
In mathematics, the Malgrange–Ehrenpreis theorem states that every non-zero linear differential operator with constant coefficients has a Green's function. It was first proved independently by Leon Ehrenpreis and Bernard Malgrange.
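In other words, for every non-zero constant coefficient operator P(D) there exists a distribution E, a fundamental solution, with

$$P(D)\,E = \delta,$$

where δ is the Dirac delta distribution.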
In the theory of functions of several complex variables, Hartogs's extension theorem is a statement about the singularities of holomorphic functions of several variables. Informally, it states that the support of the singularities of such functions cannot be compact, therefore the singular set of a function of several complex variables must 'go off to infinity' in some direction. More precisely, it shows that an isolated singularity is always a removable singularity for any analytic function of n > 1 complex variables. A first version of this theorem was proved by Friedrich Hartogs, and as such it is known also as Hartogs's lemma and Hartogs's principle: in earlier Soviet literature, it is also called the Osgood–Brown theorem, acknowledging later work by Arthur Barton Brown and William Fogg Osgood. This property of holomorphic functions of several variables is also called Hartogs's phenomenon: however, the locution "Hartogs's phenomenon" is also used to identify the property of solutions of systems of partial differential or convolution equations satisfying Hartogs-type theorems.
In complex analysis of one and several complex variables, Wirtinger derivatives, named after Wilhelm Wirtinger who introduced them in 1927 in the course of his studies on the theory of functions of several complex variables, are partial differential operators of the first order which behave in a very similar manner to the ordinary derivatives with respect to one real variable, when applied to holomorphic functions, antiholomorphic functions or simply differentiable functions on complex domains. These operators permit the construction of a differential calculus for such functions that is entirely analogous to the ordinary differential calculus for functions of real variables.
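In one complex variable z = x + iy they are given by

$$\frac{\partial}{\partial z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\,\frac{\partial}{\partial y}\right), \qquad \frac{\partial}{\partial \bar z} = \frac{1}{2}\left(\frac{\partial}{\partial x} + i\,\frac{\partial}{\partial y}\right),$$

and a differentiable function f is holomorphic exactly when $\partial f/\partial \bar z = 0$.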
In scientific computation and simulation, the method of fundamental solutions (MFS) is a technique for solving partial differential equations based on using the fundamental solution as a basis function. The MFS was developed to overcome the major drawbacks of the boundary element method (BEM), which also uses the fundamental solution to satisfy the governing equation. Consequently, both the MFS and the BEM are boundary-discretization numerical techniques: they reduce the computational complexity by one dimension and have a particular edge over domain-type numerical techniques, such as the finite element and finite volume methods, for problems on infinite domains, thin-walled structures, and inverse problems.
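In its basic form, the MFS seeks an approximate solution as a linear combination of fundamental solutions centred at source points s_j placed outside the computational domain,

$$u(x) \approx \sum_{j=1}^{N} c_j\, G(x, s_j),$$

with the coefficients c_j chosen so that the boundary conditions are matched at a set of collocation points on the boundary.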