In mathematics, an elliptic boundary value problem is a special kind of boundary value problem which can be thought of as the steady state of an evolution problem. For example, the Dirichlet problem for the Laplacian gives the eventual distribution of heat in a room several hours after the heating is turned on.
Differential equations describe a large class of natural phenomena, from the heat equation describing the evolution of heat in (for instance) a metal plate, to the Navier–Stokes equations describing the movement of fluids, to Einstein's equations describing the physical universe in a relativistic way. Although all these equations can be posed as boundary value problems, they are further subdivided into categories. This is necessary because each category must be analyzed using different techniques. The present article deals with the category of boundary value problems known as linear elliptic problems.
Boundary value problems and partial differential equations specify relations between two or more quantities. For instance, in the heat equation, the rate of change of temperature at a point is related to the difference of temperature between that point and the nearby points so that, over time, the heat flows from hotter points to cooler points. Boundary value problems can involve space, time and other quantities such as temperature, velocity, pressure, magnetic field, etc.
Some problems do not involve time. For instance, if one hangs a clothesline between the house and a tree, then in the absence of wind, the clothesline will not move and will adopt a gentle hanging curved shape known as the catenary.[1] This curved shape can be computed as the solution of a differential equation relating position, tension, angle and gravity, but since the shape does not change over time, there is no time variable.
Elliptic boundary value problems are a class of problems which do not involve the time variable, and instead only depend on space variables.
In two dimensions, let x and y be the coordinates. We will use the notation u_x, u_xx for the first and second partial derivatives of u with respect to x, and a similar notation for y. We will use the symbols D_x and D_y for the partial differential operators in x and y. The second partial derivatives will be denoted D_x² and D_y². We also define the gradient ∇u = (u_x, u_y), the Laplace operator Δu = u_xx + u_yy and the divergence ∇·(u, v) = u_x + v_y. Note from the definitions that Δu = ∇·(∇u).
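The identity Δu = ∇·(∇u) can be checked numerically. The following is a small sketch (an illustration with hypothetical names, not part of the original text) that approximates the Laplacian of u(x, y) = sin(x)·cos(y) with centered finite differences and compares it with the exact value Δu = −2u:

```python
import math

def u(x, y):
    # Test function with known Laplacian: u_xx + u_yy = -2 * u.
    return math.sin(x) * math.cos(y)

def laplacian(f, x, y, h=1e-4):
    # Centered second differences approximating f_xx + f_yy.
    f_xx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    f_yy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    return f_xx + f_yy

x0, y0 = 0.7, 0.3
approx = laplacian(u, x0, y0)
exact = -2 * u(x0, y0)
print(abs(approx - exact))  # small (finite-difference error)
```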
The main example for boundary value problems is the Laplace operator,

Δu = f in Ω,
u = 0 on ∂Ω,

where Ω is a region in the plane and ∂Ω is the boundary of that region. The function f is known data and the solution u is what must be computed. This example has the same essential properties as all other elliptic boundary value problems.
The solution u can be interpreted as the stationary or limit distribution of heat in a metal plate shaped like Ω, if this metal plate has its boundary adjacent to ice (which is kept at zero degrees, thus the Dirichlet boundary condition). The function f represents the intensity of heat generation at each point in the plate (perhaps there is an electric heater resting on the metal plate, pumping heat into the plate at rate f(x), which does not vary over time, but may be nonuniform in space on the metal plate). After waiting for a long time, the temperature distribution in the metal plate will approach u.
Let Lu = a u_xx + b u_yy, where a and b are constants; L is called a second order differential operator. If we formally replace the derivatives u_xx by x² and u_yy by y², we obtain the expression

a x² + b y².

If we set this expression equal to some constant k, then we obtain either an ellipse (if a and b have the same sign) or a hyperbola (if a and b are of opposite signs). For that reason, L is said to be elliptic when ab > 0 and hyperbolic if ab < 0. Similarly, the operator Lu = u_x + u_yy leads to the parabola x + y² = k, and so this L is said to be parabolic.
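The sign-pattern classification above can be stated as a tiny helper (the function name is a hypothetical illustration, not established terminology or code from any library):

```python
def classify(a, b):
    """Classify Lu = a*u_xx + b*u_yy by the conic a*x**2 + b*y**2 = k."""
    if a * b > 0:
        return "elliptic"    # same signs: the level set is an ellipse
    if a * b < 0:
        return "hyperbolic"  # opposite signs: the level set is a hyperbola
    return "degenerate"      # a coefficient vanishes (e.g. parabolic-type cases)

print(classify(1, 1))   # the Laplacian (a = b = 1) is elliptic
print(classify(1, -1))  # the wave operator is hyperbolic
```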
We now generalize the notion of ellipticity. While it may not be obvious that our generalization is the right one, it turns out that it does preserve most of the necessary properties for the purpose of analysis.
Let x₁, …, x_n be the space variables. Let a_ij(x), b_i(x), c(x) be real valued functions of x = (x₁, …, x_n). Let L be a second degree linear operator. That is,

Lu = Σ_{i,j=1}^n a_ij(x) u_{x_i x_j} + Σ_{i=1}^n b̃_i(x) u_{x_i} + c(x) u (non-divergence form)
   = Σ_{i,j=1}^n (a_ij(x) u_{x_i})_{x_j} + Σ_{i=1}^n b_i(x) u_{x_i} + c(x) u (divergence form).

We have used the subscript x_i to denote the partial derivative with respect to the space variable x_i. The two formulae are equivalent, provided that

b̃_i(x) = b_i(x) + Σ_{j=1}^n a_{ij,x_j}(x).

In matrix notation, we can let a(x) be an n×n matrix valued function of x and b(x) be an n-dimensional column vector-valued function of x, and then we may write

Lu = ∇·(a∇u) + bᵀ∇u + cu (divergence form).
One may assume, without loss of generality, that the matrix a is symmetric (that is, a_ij(x) = a_ji(x) for all i, j and x). We make that assumption in the rest of this article.
We say that the operator L is elliptic if, for some constant α > 0, any of the following equivalent conditions holds:

1. λ_min(a(x)) > α for every x, where λ_min denotes the smallest eigenvalue of the matrix a(x);
2. uᵀ a(x) u > α uᵀu for every nonzero vector u ∈ Rⁿ and every x;
3. Σ_{i,j=1}^n a_ij(x) u_i u_j > α Σ_{i=1}^n u_i² for every nonzero u ∈ Rⁿ and every x.
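In two dimensions the eigenvalue form of the ellipticity condition can be checked in closed form, since a symmetric 2×2 matrix has explicit eigenvalues. The sketch below (a hypothetical helper of my own, not from any library) tests whether the smallest eigenvalue of the coefficient matrix exceeds a given α:

```python
import math

def is_elliptic_2x2(a11, a12, a22, alpha):
    """Smallest eigenvalue of the symmetric matrix [[a11, a12], [a12, a22]]
    compared against alpha (the ellipticity constant)."""
    mean = (a11 + a22) / 2.0
    radius = math.sqrt(((a11 - a22) / 2.0) ** 2 + a12 ** 2)
    lambda_min = mean - radius
    return lambda_min > alpha

print(is_elliptic_2x2(1.0, 0.0, 1.0, 0.5))  # True: the Laplacian, eigenvalues 1 and 1
print(is_elliptic_2x2(1.0, 2.0, 1.0, 0.0))  # False: eigenvalues 3 and -1
```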
An elliptic boundary value problem is then a system of equations like

Lu = f in Ω (the partial differential equation) and
u = 0 on ∂Ω (the boundary value).

This particular example is the Dirichlet problem. The Neumann problem is

Lu = f in Ω and
u_ν = g on ∂Ω,

where u_ν is the derivative of u in the direction of the outwards pointing normal of ∂Ω. In general, if B is any trace operator, one can construct the boundary value problem

Lu = f in Ω and
Bu = g on ∂Ω.
In the rest of this article, we assume that L is elliptic and that the boundary condition is the Dirichlet condition u = 0 on ∂Ω.
The analysis of elliptic boundary value problems requires some fairly sophisticated tools of functional analysis. We require the space H¹(Ω), the Sobolev space of "once-differentiable" functions on Ω, such that both the function u and its partial derivatives u_{x_i}, i = 1, …, n, are all square integrable. There is a subtlety here in that the partial derivatives must be defined "in the weak sense" (see the article on Sobolev spaces for details). The space H¹(Ω) is a Hilbert space, which accounts for much of the ease with which these problems are analyzed.
A detailed discussion of Sobolev spaces is beyond the scope of this article, but we will quote required results as they arise.
Unless otherwise noted, all derivatives in this article are to be interpreted in the weak, Sobolev sense. We use the term "strong derivative" to refer to the classical derivative of calculus. We also specify that the spaces Cᵏ, k = 0, 1, …, consist of functions that are k times strongly differentiable and whose kth derivative is continuous.
The first step in casting the boundary value problem into the language of Sobolev spaces is to rephrase it in its weak form. Consider the Laplace problem Δu = f. Multiply each side of the equation by a "test function" φ and integrate by parts using Green's theorem to obtain

−∫_Ω ∇u·∇φ + ∫_{∂Ω} u_ν φ = ∫_Ω fφ.

We will be solving the Dirichlet problem, so that u = 0 on ∂Ω. For technical reasons, it is useful to assume that φ is taken from the same space of functions as u, so we also assume that φ = 0 on ∂Ω. This gets rid of the boundary term, yielding

A(u, φ) = F(φ)   (*)

where

A(u, φ) = ∫_Ω ∇u·∇φ and F(φ) = −∫_Ω fφ.
If L is a general elliptic operator, the same reasoning leads to the bilinear form

A(u, φ) = ∫_Ω ∇uᵀ a ∇φ − ∫_Ω bᵀ∇u φ − ∫_Ω cuφ.
We do not discuss the Neumann problem but note that it is analyzed in a similar way.
The map A(u, φ) is defined on the Sobolev space H¹₀(Ω) ⊂ H¹(Ω) of functions which are once differentiable and zero on the boundary ∂Ω, provided we impose some conditions on a, b, c. There are many possible choices, but for the purpose of this article, we will assume that

1. a_ij(x) is continuously differentiable on the closure of Ω, for i, j = 1, …, n;
2. b_i(x) is continuous on the closure of Ω, for i = 1, …, n;
3. c(x) is continuous on the closure of Ω; and
4. Ω is bounded.
The reader may verify that the map A(u, φ) is furthermore bilinear and continuous, and that the map F(φ) is linear in φ, and continuous if (for instance) f is square integrable.
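The weak form lends itself directly to Galerkin discretisation. As an illustration, the following is a minimal one-dimensional sketch of my own (the problem, names, and quadrature are assumptions, not from the original text): for u'' = f on (0, 1) with u(0) = u(1) = 0, the weak problem A(u, φ) = ∫ u'φ' = −∫ fφ = F(φ) is discretised with piecewise-linear "hat" basis functions, giving a tridiagonal linear system:

```python
import math

n = 50                       # number of interior nodes
h = 1.0 / (n + 1)
x = [h * (i + 1) for i in range(n)]

f = lambda t: -math.pi ** 2 * math.sin(math.pi * t)  # exact solution: sin(pi x)

# Stiffness matrix A_ij = integral of phi_i' * phi_j': tridiagonal (2/h, -1/h).
diag = [2.0 / h] * n
off = [-1.0 / h] * (n - 1)
# Load vector F_i = -integral of f * phi_i, lumped as -f(x_i) * h.
rhs = [-f(xi) * h for xi in x]

# Thomas algorithm for the tridiagonal solve.
c, d, b = off[:], diag[:], rhs[:]
for i in range(1, n):
    m = off[i - 1] / d[i - 1]
    d[i] -= m * c[i - 1]
    b[i] -= m * b[i - 1]
u = [0.0] * n
u[-1] = b[-1] / d[-1]
for i in range(n - 2, -1, -1):
    u[i] = (b[i] - c[i] * u[i + 1]) / d[i]

err = max(abs(u[i] - math.sin(math.pi * x[i])) for i in range(n))
print(err)  # O(h^2) accuracy at the nodes
```

This is, in miniature, the finite element method discussed later in the article.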
We say that the map A is coercive if there is an α > 0 such that, for all u ∈ H¹₀(Ω),

A(u, u) ≥ α ∫_Ω ∇u·∇u.
This is trivially true for the Laplacian (with α = 1), and is also true for an elliptic operator if we assume b = 0 and c ≤ 0. (Recall that uᵀ a u > α uᵀu when L is elliptic.)
One may show, via the Lax–Milgram lemma, that whenever is coercive and is continuous, then there exists a unique solution to the weak problem (*).
If furthermore A is symmetric (i.e., A(u, φ) = A(φ, u)), one can show the same result using the Riesz representation theorem instead.
This relies on the fact that A(u, φ) forms an inner product on H¹₀(Ω), which itself depends on Poincaré's inequality.
We have shown that there is a u ∈ H¹₀(Ω) which solves the weak system, but we do not know if this u solves the strong system

Lu = f in Ω,
u = 0 on ∂Ω.

Even more vexing is that we are not even sure that u is twice differentiable, rendering the expressions u_{x_i x_j} in Lu apparently meaningless. There are many ways to remedy the situation, the main one being regularity.
A regularity theorem for a linear elliptic boundary value problem of the second order takes the form

Theorem. If (some condition), then the solution u is in H²(Ω), the space of "twice differentiable" functions whose second derivatives are square integrable.

There is no known simple condition necessary and sufficient for the conclusion of the theorem to hold, but the following conditions are known to be sufficient:

1. the boundary of Ω is C², or
2. Ω is convex.
It may be tempting to infer that if ∂Ω is piecewise C² then u is indeed in H²(Ω), but that is unfortunately false.
In the case that u ∈ H²(Ω), the second derivatives of u are defined almost everywhere, and in that case Lu = f almost everywhere.
One may further prove that if the boundary of Ω is a smooth manifold and f is infinitely differentiable in the strong sense, then u is also infinitely differentiable in the strong sense. In this case, Lu = f with the strong definition of the derivative.
The proof of this relies upon an improved regularity theorem, which says that if ∂Ω is C^{k+2} and f ∈ Hᵏ(Ω), k ≥ 0, then u ∈ H^{k+2}(Ω), together with a Sobolev imbedding theorem saying that functions in Hᵏ(Ω) are also in Cᵐ(Ω̄) whenever 0 ≤ m < k − n/2.
While in exceptional circumstances, it is possible to solve elliptic problems explicitly, in general it is an impossible task. The natural solution is to approximate the elliptic problem with a simpler one and to solve this simpler problem on a computer.
Because of the good properties we have enumerated (as well as many we have not), there are extremely efficient numerical solvers for linear elliptic boundary value problems (see finite element method, finite difference method and spectral method for examples).
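As a small finite difference illustration (a toy example of my own, not from the original text), the Dirichlet problem Δu = f on the unit square can be discretised with the 5-point stencil and solved by Gauss–Seidel sweeps; with f = −2π² sin(πx) sin(πy), the exact solution sin(πx) sin(πy) is recovered up to O(h²) discretisation error:

```python
import math

n = 21                        # grid points per side, h = 1/(n-1)
h = 1.0 / (n - 1)
f = [[-2 * math.pi ** 2 * math.sin(math.pi * i * h) * math.sin(math.pi * j * h)
      for j in range(n)] for i in range(n)]
u = [[0.0] * n for _ in range(n)]   # zero Dirichlet data on the boundary

for _ in range(2000):         # Gauss-Seidel sweeps of the 5-point stencil
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                              + u[i][j - 1] + u[i][j + 1] - h * h * f[i][j])

err = max(abs(u[i][j] - math.sin(math.pi * i * h) * math.sin(math.pi * j * h))
          for i in range(n) for j in range(n))
print(err)   # dominated by the O(h^2) discretisation error
```

Production solvers replace the Gauss–Seidel loop with much faster methods (multigrid, conjugate gradients), but the discretisation is the same idea.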
Another Sobolev imbedding theorem states that the inclusion H¹(Ω) ⊂ L²(Ω) is a compact linear map. Equipped with the spectral theorem for compact linear operators, one obtains the following result.

Theorem. Assume that A is coercive, continuous and symmetric. The solution map S : f ↦ u from L²(Ω) to L²(Ω) is a compact linear map. It has a basis of eigenvectors u₁, u₂, … ∈ H¹(Ω) and matching eigenvalues λ₁, λ₂, … > 0 such that

S u_k = λ_k u_k, λ_k → 0 as k → ∞, and ∫_Ω u_j u_k = δ_jk (the eigenvectors are orthonormal in L²(Ω)).
If one has computed the eigenvalues λ_k and eigenvectors u_k, then one may find the "explicit" solution of

Lu = f in Ω, u = 0 on ∂Ω,

via the formula

u = Σ_{k=1}^∞ û(k) u_k,

where

û(k) = λ_k f̂(k) and f̂(k) = ∫_Ω f u_k.

(See Fourier series.)
The series converges in L²(Ω). Implemented on a computer using numerical approximations, this is known as the spectral method.
Consider the problem

u − u_xx − u_yy = f(x, y) = xy on (0, 1) × (0, 1),
u(x, 0) = u(x, 1) = u(0, y) = u(1, y) = 0.

The reader may verify that the eigenvectors are exactly

u_jk(x, y) = sin(πjx) sin(πky), j, k = 1, 2, …,

with eigenvalues

λ_jk = 1 / (1 + π²j² + π²k²).

The Fourier coefficients of g(x) = x can be looked up in a table, getting the sine-series coefficients ĝ(k) = 2(−1)^{k+1}/(πk). Therefore,

f̂(j, k) = ĝ(j) ĝ(k) = 4(−1)^{j+k}/(π²jk),

yielding the solution

u(x, y) = Σ_{j,k=1}^∞ 4(−1)^{j+k} / (π²jk (1 + π²j² + π²k²)) sin(πjx) sin(πky).
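The tabulated sine coefficients of g(x) = x feeding this series can be sanity-checked by numerical quadrature (a verification sketch of my own, with hypothetical names, using the composite midpoint rule):

```python
import math

def sine_coefficient(g, k, m=20000):
    # Composite midpoint rule for 2 * integral_0^1 g(x) sin(k pi x) dx,
    # i.e. the k-th coefficient of the sine series of g on (0, 1).
    s = 0.0
    for i in range(m):
        x = (i + 0.5) / m
        s += g(x) * math.sin(k * math.pi * x)
    return 2.0 * s / m

for k in range(1, 6):
    exact = 2 * (-1) ** (k + 1) / (k * math.pi)
    print(k, sine_coefficient(lambda x: x, k), exact)
```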
There are many variants of the maximum principle. We give a simple one.
Theorem. (Weak maximum principle.) Let u ∈ C²(Ω) ∩ C(Ω̄), and assume that c(x) = 0 for all x ∈ Ω. Say that Lu ≥ 0 in Ω. Then max_{x∈Ω̄} u(x) = max_{x∈∂Ω} u(x). In other words, the maximum is attained on the boundary.
A strong maximum principle would conclude that u(x) < max_{x∈∂Ω} u(x) for all x ∈ Ω unless u is constant.
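The weak maximum principle can be observed numerically. The sketch below (my own toy example) samples the harmonic function u(x, y) = x² − y² (which satisfies Δu = 0, hence Lu ≥ 0 with L = Δ) on a grid over the unit square and checks that its maximum over the closed square is attained on the boundary:

```python
def u(x, y):
    # Harmonic: u_xx + u_yy = 2 - 2 = 0.
    return x * x - y * y

n = 101
pts = [(i / (n - 1), j / (n - 1)) for i in range(n) for j in range(n)]
boundary = [(x, y) for (x, y) in pts if x in (0.0, 1.0) or y in (0.0, 1.0)]

overall_max = max(u(x, y) for (x, y) in pts)
boundary_max = max(u(x, y) for (x, y) in boundary)
print(overall_max == boundary_max)  # True: the maximum sits on the boundary
```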