In the mathematical fields of differential equations and geometric analysis, the maximum principle is one of the most useful and best known tools of study. Solutions of a differential inequality in a domain D satisfy the maximum principle if they achieve their maxima at the boundary of D.
The maximum principle enables one to obtain information about solutions of differential equations without any explicit knowledge of the solutions themselves. In particular, the maximum principle is a useful tool in the numerical approximation of solutions of ordinary and partial differential equations and in the determination of bounds for the errors in such approximations. [1]
In a simple two-dimensional case, consider a function of two variables u(x, y) such that

∂²u/∂x² + ∂²u/∂y² = 0.
The weak maximum principle, in this setting, says that for any open precompact subset M of the domain of u, the maximum of u on the closure of M is achieved on the boundary of M. The strong maximum principle says that, unless u is a constant function, the maximum cannot also be achieved anywhere on M itself.
Such statements give a striking qualitative picture of solutions of the given differential equation. Such a qualitative picture can be extended to many kinds of differential equations. In many situations, one can also use such maximum principles to draw precise quantitative conclusions about solutions of differential equations, such as control over the size of their gradient. There is no single or most general maximum principle which applies to all situations at once.
In the field of convex optimization, there is an analogous statement which asserts that the maximum of a convex function on a compact convex set is attained on the boundary. [2]
Here we consider the simplest case, although the same thinking can be extended to more general scenarios. Let M be an open subset of Euclidean space and let u be a C2 function on M such that

∑i,j aij(x) ∂²u/∂xi∂xj = 0,
where for each i and j between 1 and n, aij is a function on M with aij = aji.
Fix some choice of x in M. According to the spectral theorem of linear algebra, all eigenvalues of the matrix [aij(x)] are real, and there is an orthonormal basis of ℝn consisting of eigenvectors. Denote the eigenvalues by λi and the corresponding eigenvectors by vi, for i from 1 to n. Then the differential equation, at the point x, can be rephrased as

∑i λi (d²/dt²)|t=0 u(x + tvi) = 0.
The essence of the maximum principle is the simple observation that if each eigenvalue is positive (which amounts to a certain formulation of "ellipticity" of the differential equation) then the above equation imposes a certain balancing of the directional second derivatives of the solution. In particular, if one of the directional second derivatives is negative, then another must be positive. At a hypothetical point where u is maximized, all directional second derivatives are automatically nonpositive, and the "balancing" represented by the above equation then requires all directional second derivatives to be identically zero.
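As a numerical illustration (not part of the original argument), the spectral rephrasing can be checked for a quadratic function, whose directional second derivative along a unit vector v equals vᵀHv for its constant Hessian H. The matrices A and H below are arbitrary illustrative choices:

```python
import math

# Arbitrary illustrative data: A = [a_ij] is symmetric positive-definite
# ("elliptic"); H is the constant Hessian of a quadratic u(x) = 0.5 x^T H x.
A = [[2.0, 1.0], [1.0, 2.0]]
H = [[-3.0, 0.5], [0.5, 1.0]]

# Eigendecomposition of the symmetric 2x2 matrix A, done by hand.
a, b, c = A[0][0], A[0][1], A[1][1]
disc = math.sqrt((a - c) ** 2 + 4.0 * b * b)
eigvals = [(a + c + disc) / 2.0, (a + c - disc) / 2.0]
eigvecs = []
for lam in eigvals:
    v = (b, lam - a)                 # solves (A - lam*I) v = 0 when b != 0
    norm = math.hypot(v[0], v[1])
    eigvecs.append((v[0] / norm, v[1] / norm))

# Left side: sum_ij a_ij * (d^2 u / dx_i dx_j) = sum_ij A_ij H_ij.
lhs = sum(A[i][j] * H[i][j] for i in range(2) for j in range(2))

# Right side: sum_i lambda_i * (second directional derivative along v_i),
# which for a quadratic u equals v_i^T H v_i.
def second_directional_derivative(v):
    return sum(v[i] * H[i][j] * v[j] for i in range(2) for j in range(2))

rhs = sum(lam * second_directional_derivative(v)
          for lam, v in zip(eigvals, eigvecs))

assert abs(lhs - rhs) < 1e-9
assert min(eigvals) > 0              # ellipticity: all eigenvalues positive
```

The identity ∑ aij ∂²u/∂xi∂xj = ∑ λi·(second directional derivative along vi) holds exactly here because the Hessian of a quadratic is constant.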
This elementary reasoning could be argued to represent an infinitesimal formulation of the strong maximum principle, which states, under some extra assumptions (such as the continuity of a), that u must be constant if there is a point of M where u is maximized.
Note that the above reasoning is unaffected if one considers the more general partial differential equation

∑i,j aij ∂²u/∂xi∂xj + ∑i bi ∂u/∂xi = 0,
since the added term is automatically zero at any hypothetical maximum point. The reasoning is also unaffected if one considers the more general condition

∑i,j aij ∂²u/∂xi∂xj + ∑i bi ∂u/∂xi ≥ 0,
in which one can even note the extra phenomena of having an outright contradiction if there is a strict inequality (> rather than ≥) in this condition at the hypothetical maximum point. This phenomenon is important in the formal proof of the classical weak maximum principle.
However, the above reasoning no longer applies if one considers the condition

∑i,j aij ∂²u/∂xi∂xj + ∑i bi ∂u/∂xi ≤ 0,
since now the "balancing" condition, as evaluated at a hypothetical maximum point of u, only says that a weighted average of manifestly nonpositive quantities is nonpositive. This is trivially true, and so one cannot draw any nontrivial conclusion from it. This is reflected by any number of concrete examples, such as the fact that

∂²/∂x²(−x²−y²) + ∂²/∂y²(−x²−y²) = −4 ≤ 0,

and on any open region containing the origin, the function −x²−y² certainly has a maximum.
Let M denote an open subset of Euclidean space. If a smooth function u : M → ℝ is maximized at a point p, then one automatically has:

∂u/∂xi(p) = 0 for each i between 1 and n (the first-derivative test), and
∂²u/∂xi²(p) ≤ 0 for each i between 1 and n (the second-derivative test).
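These automatic conditions at an interior maximum (vanishing first derivatives, nonpositive pure second derivatives) can be checked with a quick numerical sketch; the function f below is an arbitrary illustrative example, maximized at (1, −0.5):

```python
# Arbitrary example: f is a smooth function maximized at p = (1, -0.5).
def f(x, y):
    return -(x - 1.0) ** 2 - 2.0 * (y + 0.5) ** 2

p = (1.0, -0.5)
d = 1e-6

# First-derivative test: the gradient vanishes at the maximum point.
fx = (f(p[0] + d, p[1]) - f(p[0] - d, p[1])) / (2.0 * d)
fy = (f(p[0], p[1] + d) - f(p[0], p[1] - d)) / (2.0 * d)
assert abs(fx) < 1e-6 and abs(fy) < 1e-6

# Second-derivative test: pure second derivatives are nonpositive there.
fxx = (f(p[0] + d, p[1]) - 2.0 * f(*p) + f(p[0] - d, p[1])) / (d * d)
fyy = (f(p[0], p[1] + d) - 2.0 * f(*p) + f(p[0], p[1] - d)) / (d * d)
assert fxx <= 0.0 and fyy <= 0.0
```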
One can view a partial differential equation as the imposition of an algebraic relation between the various derivatives of a function. So, if u is the solution of a partial differential equation, then it is possible that the above conditions on the first and second derivatives of u form a contradiction to this algebraic relation. This is the essence of the maximum principle. Clearly, the applicability of this idea depends strongly on the particular partial differential equation in question.
For instance, if u solves the differential equation

Δu = |∇u|² + 1,

where |∇u|² denotes ∑i (∂u/∂xi)², then it is clearly impossible to have ∂u/∂xi(p) = 0 for each i and Δu(p) ≤ 0 at any point p of the domain, since the equation would then force Δu(p) = 1. So, following the above observation, it is impossible for u to take on a maximum value. If, instead, u solved the differential equation Δu = |∇u|², then one would not have such a contradiction, and the analysis given so far does not imply anything interesting. If u solved the differential equation Δu = |∇u|² − 1, then the same analysis would show that u cannot take on a minimum value.
The possibility of such analysis is not even limited to partial differential equations. For instance, if u : M → ℝ is a function such that

Δu(x) = ∫M u(y)² dy + 1,

which is a sort of "non-local" differential equation, then the automatic strict positivity of the right-hand side shows, by the same analysis as above, that u cannot attain a maximum value.
There are many methods to extend the applicability of this kind of analysis in various ways. For instance, if u is a harmonic function, then the above sort of contradiction does not directly occur, since the existence of a point p where Δu(p) ≤ 0 is not in contradiction to the requirement Δu = 0 everywhere. However, one could consider, for an arbitrary real number s, the function us defined by

us(x) = u(x) + s·exp(x1).
It is straightforward to see that

Δus = s·exp(x1).
By the above analysis, if s > 0 then us cannot attain a maximum value. One might wish to consider the limit as s → 0 in order to conclude that u also cannot attain a maximum value. However, it is possible for the pointwise limit of a sequence of functions without maxima to have a maximum. Nonetheless, if M has a boundary ∂M such that M together with its boundary is compact, then, supposing that u can be continuously extended to the boundary, it follows immediately that both u and us attain a maximum value on M ∪ ∂M. Since we have shown that us, as a function on M, does not have a maximum, it follows that the maximum point of us, for any s > 0, is on ∂M. By the sequential compactness of ∂M, it follows that the maximum of u is attained on ∂M. This is the weak maximum principle for harmonic functions. This does not, by itself, rule out the possibility that the maximum of u is also attained somewhere on M. That is the content of the "strong maximum principle," which requires further analysis.
The use of the specific function exp(x1) above was very inessential. All that mattered was to have a function which extends continuously to the boundary and whose Laplacian is strictly positive. So we could have used, for instance,

us(x) = u(x) + s|x|²,
with the same effect.
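The weak maximum principle for harmonic functions also has a discrete analogue that is easy to verify numerically: in the standard five-point finite-difference scheme for Laplace's equation, each interior value is the average of its four neighbors, so an interior point can never strictly exceed the boundary values. A minimal sketch, where the grid size and the harmonic boundary data x² − y² are arbitrary choices:

```python
# Solve a discrete Laplace equation by Jacobi iteration on an N x N grid and
# check that the maximum is attained on the boundary.
N = 20

def g(i, j):                         # boundary data: x^2 - y^2 is harmonic
    x, y = i / (N - 1), j / (N - 1)
    return x * x - y * y

def on_boundary(i, j):
    return i in (0, N - 1) or j in (0, N - 1)

u = [[g(i, j) if on_boundary(i, j) else 0.0 for j in range(N)]
     for i in range(N)]

for _ in range(2000):                # Jacobi sweeps: neighbor averaging
    new = [row[:] for row in u]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new[i][j] = 0.25 * (u[i + 1][j] + u[i - 1][j]
                                + u[i][j + 1] + u[i][j - 1])
    u = new

interior_max = max(u[i][j] for i in range(1, N - 1) for j in range(1, N - 1))
boundary_max = max(u[i][j] for i in range(N) for j in range(N)
                   if on_boundary(i, j))
assert interior_max <= boundary_max
```

The averaging property is the discrete counterpart of harmonicity, and it forces the discrete maximum onto the boundary for exactly the same "balancing" reason as in the continuous case.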
Let M be an open subset of Euclidean space. Let u : M → ℝ be a twice-differentiable function which attains its maximum value C. Suppose that

Lu := ∑i,j aij ∂²u/∂xi∂xj + ∑i bi ∂u/∂xi ≥ 0 on M.
Suppose that one can find (or prove the existence of):

- a compact subset Ω of M, with nonempty interior, such that u(x) < C for all x in the interior of Ω, and such that there exists x0 on the boundary of Ω with u(x0) = C;
- a continuous function h on Ω which is twice-differentiable on the interior of Ω, such that Lh ≥ 0 on the interior of Ω, and such that u + h ≤ C on the boundary of Ω with h(x0) = 0.
Then L(u + h − C) ≥ 0 on Ω with u + h − C ≤ 0 on the boundary of Ω; according to the weak maximum principle, one has u + h − C ≤ 0 on Ω. This can be reorganized to say

(u(x0) − u(x))/|x0 − x| ≥ (h(x) − h(x0))/|x0 − x|
for all x in Ω. If one can make the choice of h so that the right-hand side has a manifestly positive nature, then this will provide a contradiction to the fact that x0 is a maximum point of u on M, so that its gradient must vanish.
The above "program" can be carried out. Choose Ω to be a spherical annulus; one selects its center xc to be a point closer to the closed set u⁻¹(C) than to the closed set ∂M, and the outer radius R is selected to be the distance from this center to u⁻¹(C); let x0 be a point on this latter set which realizes the distance. The inner radius ρ is arbitrary. Define

h(x) = ε(exp(−α|x − xc|²) − exp(−αR²)).
Now the boundary of Ω consists of two spheres; on the outer sphere, one has h = 0; due to the selection of R, one has u ≤ C on this sphere, and so u + h − C ≤ 0 holds on this part of the boundary, together with the requirement h(x0) = 0. On the inner sphere, one has u < C. Due to the continuity of u and the compactness of the inner sphere, one can select δ > 0 such that u + δ < C. Since h is constant on this inner sphere, one can select ε > 0 such that u + h ≤ C on the inner sphere, and hence on the entire boundary of Ω.
Direct calculation shows

Lh(x) = ε exp(−α|x − xc|²) [ 4α² ∑i,j aij(x)(xi − xci)(xj − xcj) − 2α ∑i aii(x) − 2α ∑i bi(x)(xi − xci) ],

where xci denotes the ith coordinate of xc.
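This closed-form expression for Lh (with L the operator ∑ aij ∂i∂j + ∑ bi ∂i) can be sanity-checked against central finite differences; every parameter value below (A, b, α, ε, R, xc, and the sample point) is an arbitrary illustrative choice:

```python
import math

n = 2
A = [[2.0, 0.5], [0.5, 1.5]]         # a_ij: symmetric positive-definite
b = [0.3, -0.7]                       # b_i
alpha, eps, R = 1.5, 0.8, 1.0         # barrier parameters
xc = [0.2, -0.1]                      # center of the annulus
x = [0.9, 0.4]                        # sample point

def h(p):
    r2 = sum((p[k] - xc[k]) ** 2 for k in range(n))
    return eps * (math.exp(-alpha * r2) - math.exp(-alpha * R * R))

def Lh_finite_difference(p, d=1e-4):
    total = 0.0
    for i in range(n):                # second-order terms, 4-point stencils
        for j in range(n):
            pts = [list(p) for _ in range(4)]
            pts[0][i] += d; pts[0][j] += d
            pts[1][i] += d; pts[1][j] -= d
            pts[2][i] -= d; pts[2][j] += d
            pts[3][i] -= d; pts[3][j] -= d
            d2 = (h(pts[0]) - h(pts[1]) - h(pts[2]) + h(pts[3])) / (4 * d * d)
            total += A[i][j] * d2
    for i in range(n):                # first-order terms, central differences
        pp, pm = list(p), list(p)
        pp[i] += d; pm[i] -= d
        total += b[i] * (h(pp) - h(pm)) / (2 * d)
    return total

def Lh_closed_form(p):
    w = [p[k] - xc[k] for k in range(n)]
    r2 = sum(wk * wk for wk in w)
    quad = sum(A[i][j] * w[i] * w[j] for i in range(n) for j in range(n))
    return eps * math.exp(-alpha * r2) * (
        4 * alpha ** 2 * quad
        - 2 * alpha * sum(A[i][i] for i in range(n))
        - 2 * alpha * sum(b[i] * w[i] for i in range(n)))

assert abs(Lh_finite_difference(x) - Lh_closed_form(x)) < 1e-5
```

For α large, the positive 4α² term dominates the others on the annulus, which is how one guarantees Lh ≥ 0 there.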
There are various conditions under which the right-hand side can be guaranteed to be nonnegative; see the statement of the theorem below.
Lastly, note that the directional derivative of h at x0 along the inward-pointing radial line of the annulus is strictly positive. As described in the above summary, this will ensure that a directional derivative of u at x0 is nonzero, in contradiction to x0 being a maximum point of u on the open set M.
The following is the statement of the theorem in the books of Morrey and Smoller, following the original statement of Hopf (1927):
Let M be an open subset of Euclidean space ℝn. For each i and j between 1 and n, let aij and bi be continuous functions on M with aij = aji. Suppose that for all x in M, the symmetric matrix [aij] is positive-definite. If u is a nonconstant C2 function on M such that

∑i,j aij ∂²u/∂xi∂xj + ∑i bi ∂u/∂xi ≥ 0
on M, then u does not attain a maximum value on M.
The point of the continuity assumption is that continuous functions are bounded on compact sets, the relevant compact set here being the spherical annulus appearing in the proof. Furthermore, by the same principle, there is a number λ such that for all x in the annulus, the matrix [aij(x)] has all eigenvalues greater than or equal to λ. One then takes α, as appearing in the proof, to be large relative to these bounds. Evans's book has a slightly weaker formulation, in which there is assumed to be a positive number λ which is a lower bound of the eigenvalues of [aij] for all x in M.
These continuity assumptions are clearly not the most general possible in order for the proof to work. For instance, the following is Gilbarg and Trudinger's statement of the theorem, following the same proof:
Let M be an open subset of Euclidean space ℝn. For each i and j between 1 and n, let aij and bi be functions on M with aij = aji. Suppose that for all x in M, the symmetric matrix [aij] is positive-definite, and let λ(x) denote its smallest eigenvalue. Suppose that aii/λ and bi/λ are bounded functions on M for each i between 1 and n. If u is a nonconstant C2 function on M such that

∑i,j aij ∂²u/∂xi∂xj + ∑i bi ∂u/∂xi ≥ 0
on M, then u does not attain a maximum value on M.
One cannot naively extend these statements to the general second-order linear elliptic equation, as already seen in the one-dimensional case. For instance, the ordinary differential equation y″ + 2y = 0 has sinusoidal solutions, which certainly have interior maxima. This extends to the higher-dimensional case, where one often has solutions to "eigenfunction" equations Δu + cu = 0 which have interior maxima. The sign of c is relevant, as also seen in the one-dimensional case; for instance the solutions to y″ − 2y = 0 are exponentials, and the character of the maxima of such functions is quite different from that of sinusoidal functions.
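The one-dimensional counterexample is easy to verify numerically: y(x) = sin(√2·x) satisfies y″ + 2y = 0 yet attains its maximum strictly inside the interval (0, π/√2):

```python
import math

r = math.sqrt(2.0)

def y(x):
    return math.sin(r * x)           # solves y'' + 2y = 0

# Check the ODE residual by central differences at an arbitrary point.
d, x0 = 1e-5, 0.4
ypp = (y(x0 + d) - 2.0 * y(x0) + y(x0 - d)) / (d * d)
assert abs(ypp + 2.0 * y(x0)) < 1e-4

# Sample on [0, pi/sqrt(2)]: the endpoints are zeros of y, while the maximum
# value (= 1) is attained in the interior, at x = pi/(2*sqrt(2)).
xs = [k * (math.pi / r) / 1000 for k in range(1001)]
ys = [y(xk) for xk in xs]
assert max(ys[1:-1]) > 0.99
assert max(abs(ys[0]), abs(ys[-1])) < 1e-9
```

So no maximum principle can hold for Δu + cu = 0 with c > 0 without further restrictions.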
In mathematics, mathematical physics and the theory of stochastic processes, a harmonic function is a twice continuously differentiable function f : U → ℝ, where U is an open subset of ℝn, that satisfies Laplace's equation, that is,

∂²f/∂x1² + ∂²f/∂x2² + ⋯ + ∂²f/∂xn² = 0

everywhere on U.
In mathematics, the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space. It is usually denoted by the symbols ∇·∇, ∇² (where ∇ is the nabla operator), or Δ. In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable. In other coordinate systems, such as cylindrical and spherical coordinates, the Laplacian also has a useful form. Informally, the Laplacian Δf (p) of a function f at a point p measures by how much the average value of f over small spheres or balls centered at p deviates from f (p).
In mathematics, an eigenfunction of a linear operator D defined on some function space is any non-zero function f in that space that, when acted upon by D, is only multiplied by some scaling factor called an eigenvalue. As an equation, this condition can be written as

Df = λf

for some scalar eigenvalue λ.
In the theory of partial differential equations, elliptic operators are differential operators that generalize the Laplace operator. They are defined by the condition that the coefficients of the highest-order derivatives be positive, which implies the key property that the principal symbol is invertible, or equivalently that there are no real characteristic directions.
In mathematics, Harnack's inequality is an inequality relating the values of a positive harmonic function at two points, introduced by A. Harnack. Harnack's inequality is used to prove Harnack's theorem about the convergence of sequences of harmonic functions. J. Serrin, and J. Moser generalized Harnack's inequality to solutions of elliptic or parabolic partial differential equations. Such results can be used to show the interior regularity of weak solutions.
The Hopf maximum principle is a maximum principle in the theory of second order elliptic partial differential equations and has been described as the "classic and bedrock result" of that theory. Generalizing the maximum principle for harmonic functions which was already known to Gauss in 1839, Eberhard Hopf proved in 1927 that if a function satisfies a second order partial differential inequality of a certain kind in a domain of Rn and attains a maximum in the domain then the function is constant. The simple idea behind Hopf's proof, the comparison technique he introduced for this purpose, has led to an enormous range of important applications and generalizations.
In mathematics, the Hopf lemma, named after Eberhard Hopf, states that if a continuous real-valued function in a domain in Euclidean space with sufficiently smooth boundary is harmonic in the interior and the value of the function at a point on the boundary is greater than the values at nearby points inside the domain, then the derivative of the function in the direction of the outward pointing normal is strictly positive. The lemma is an important tool in the proof of the maximum principle and in the theory of partial differential equations. The Hopf lemma has been generalized to describe the behavior of the solution to an elliptic problem as it approaches a point on the boundary where its maximum is attained.