In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of Lp-norms of the function together with its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, i.e. a Banach space. Intuitively, a Sobolev space is a space of functions possessing sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.
Sobolev spaces are named after the Russian mathematician Sergei Sobolev. Their importance comes from the fact that weak solutions of some important partial differential equations exist in appropriate Sobolev spaces, even when there are no strong solutions in spaces of continuous functions with the derivatives understood in the classical sense.
In this section and throughout the article, $\Omega$ is an open subset of $\mathbb{R}^n$.
There are many criteria for smoothness of mathematical functions. The most basic criterion may be that of continuity. A stronger notion of smoothness is that of differentiability (because functions that are differentiable are also continuous) and a yet stronger notion of smoothness is that the derivative also be continuous (these functions are said to be of class $C^1$ — see Differentiability classes). Differentiable functions are important in many areas, and in particular for differential equations. In the twentieth century, however, it was observed that the space $C^1$ (or $C^2$, etc.) was not exactly the right space to study solutions of differential equations. The Sobolev spaces are the modern replacement for these spaces in which to look for solutions of partial differential equations.
Quantities or properties of the underlying model of the differential equation are usually expressed in terms of integral norms, rather than the uniform norm. A typical example is measuring the energy of a temperature or velocity distribution by an $L^2$-norm. It is therefore important to develop a tool for differentiating Lebesgue space functions.
The integration by parts formula yields that for every $u \in C^k(\Omega)$, where $k$ is a natural number, and for all infinitely differentiable functions with compact support $\varphi \in C_c^\infty(\Omega)$,
$$\int_\Omega u\, D^\alpha \varphi \; dx = (-1)^{|\alpha|} \int_\Omega \varphi\, D^\alpha u \; dx,$$
where $\alpha$ is a multi-index of order $|\alpha| = k$ and we are using the notation:
$$D^\alpha f = \frac{\partial^{|\alpha|} f}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}.$$
The left-hand side of this equation still makes sense if we only assume $u$ to be locally integrable. If there exists a locally integrable function $v$, such that
$$\int_\Omega u\, D^\alpha \varphi \; dx = (-1)^{|\alpha|} \int_\Omega v\, \varphi \; dx \quad \text{for all } \varphi \in C_c^\infty(\Omega),$$
then we call $v$ the weak $\alpha$-th partial derivative of $u$. If there exists a weak $\alpha$-th partial derivative of $u$, then it is uniquely defined almost everywhere, and thus it is uniquely determined as an element of a Lebesgue space. On the other hand, if $u \in C^k(\Omega)$, then the classical and the weak derivative coincide. Thus, if $v$ is a weak $\alpha$-th partial derivative of $u$, we may denote it by $D^\alpha u := v$.
For example, the function
$$u(x) = \begin{cases} 1 + x & -1 < x < 0, \\ 10 & x = 0, \\ 1 - x & 0 < x < 1 \end{cases}$$
is not continuous at zero, and not differentiable at −1, 0, or 1. Yet the function
$$v(x) = \begin{cases} 1 & -1 < x < 0, \\ -1 & 0 < x < 1 \end{cases}$$
satisfies the definition of being the weak derivative of $u(x)$, which then qualifies as being in the Sobolev space $W^{1,p}$ (for any allowed $p$, see definition below).
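The defining integration-by-parts identity can be spot-checked numerically. The sketch below (assuming NumPy and SciPy are available) uses the piecewise-linear example $u(x) = 1+x$ for $x < 0$ and $1-x$ for $x > 0$ on $(-1,1)$, whose weak derivative is the step function $v = \pm 1$; the particular bump test function and its support are illustrative choices, not part of the definition.

```python
import numpy as np
from scipy.integrate import quad

# u(x) = 1 + x on (-1, 0) and 1 - x on (0, 1); claimed weak derivative v = ±1.
def u(x):
    return 1 + x if x < 0 else 1 - x   # the value at the single point x = 0 is irrelevant

def v(x):
    return 1.0 if x < 0 else -1.0

# Smooth test function: a bump supported in (-0.2, 0.8), compactly inside (-1, 1),
# placed off-center so that neither integral below vanishes by symmetry.
def phi(x):
    t = x - 0.3
    return np.exp(-1.0 / (0.25 - t**2)) if abs(t) < 0.5 else 0.0

def dphi(x):
    t = x - 0.3
    return phi(x) * (-2.0 * t / (0.25 - t**2) ** 2) if abs(t) < 0.5 else 0.0

# Integration by parts with no boundary terms: ∫ u φ' dx = -∫ v φ dx.
lhs, _ = quad(lambda x: u(x) * dphi(x), -1, 1, points=[0.0])
rhs, _ = quad(lambda x: -v(x) * phi(x), -1, 1, points=[0.0])
print(lhs, rhs)   # the two integrals agree to quadrature accuracy
```

A single test function does not prove weak differentiability, but any smooth compactly supported φ may be substituted and the identity continues to hold.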
The Sobolev spaces combine the concepts of weak differentiability and Lebesgue norms.
In the one-dimensional case the Sobolev space $W^{k,p}$ for $1 \le p \le \infty$ is defined as the subset of functions $f$ in $L^p$ such that $f$ and its weak derivatives up to order $k$ have a finite Lp norm. As mentioned above, some care must be taken to define derivatives in the proper sense. In the one-dimensional problem it is enough to assume that the $(k-1)$-th derivative $f^{(k-1)}$ is differentiable almost everywhere and is equal almost everywhere to the Lebesgue integral of its derivative (this excludes irrelevant examples such as Cantor's function).
With this definition, the Sobolev spaces admit a natural norm,
$$\|f\|_{k,p} = \left( \sum_{i=0}^{k} \left\| f^{(i)} \right\|_p^p \right)^{1/p} = \left( \sum_{i=0}^{k} \int \left| f^{(i)}(x) \right|^p dx \right)^{1/p}.$$
One can extend this to the case $p = \infty$, with the norm then defined using the essential supremum by
$$\|f\|_{k,\infty} = \max_{i=0,\dots,k} \left\| f^{(i)} \right\|_\infty = \max_{i=0,\dots,k} \operatorname{ess\,sup}_x \left| f^{(i)}(x) \right|.$$
Equipped with the norm $\|\cdot\|_{k,p}$, $W^{k,p}$ becomes a Banach space. It turns out that it is enough to take only the first and last in the sequence, i.e., the norm defined by
$$\left\| f^{(k)} \right\|_p + \|f\|_p$$
is equivalent to the norm above (i.e. the induced topologies of the norms are the same).
Sobolev spaces with p = 2 are especially important because of their connection with Fourier series and because they form a Hilbert space. A special notation has arisen to cover this case, since the space is a Hilbert space: $H^k = W^{k,2}$.
The space $H^k$ can be defined naturally in terms of Fourier series whose coefficients decay sufficiently rapidly, namely,
$$H^k(\mathbb{T}) = \Big\{ f \in L^2(\mathbb{T}) : \sum_{n=-\infty}^{\infty} \big(1 + n^2 + n^4 + \dots + n^{2k}\big) \big|\widehat{f}(n)\big|^2 < \infty \Big\},$$
where $\widehat{f}$ is the Fourier series of $f$, and $\mathbb{T}$ denotes the 1-torus. As above, one can use the equivalent norm
$$\|f\|^2 = \sum_{n=-\infty}^{\infty} \big(1 + |n|^2\big)^k \big|\widehat{f}(n)\big|^2.$$
Both representations follow easily from Parseval's theorem and the fact that differentiation is equivalent to multiplying the Fourier coefficient $\widehat{f}(n)$ by $in$.
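The Fourier-side norm can be compared against the direct integral numerically. The sketch below (assuming NumPy; the normalization $\widehat{f}(n) = \frac{1}{2\pi}\int_0^{2\pi} f(x) e^{-inx}\,dx$ is used) checks the case $k = 1$ for $f = \sin$:

```python
import numpy as np

# f(x) = sin(x) sampled on the 1-torus; compare the H^1 norm computed
# (a) directly as ∫(|f|^2 + |f'|^2) dx and (b) from Fourier coefficients
# as 2π Σ_n (1 + n^2)|f̂(n)|^2.
N = 1024
x = 2 * np.pi * np.arange(N) / N
f = np.sin(x)
df = np.cos(x)                       # classical derivative, for the direct norm

fhat = np.fft.fft(f) / N             # discrete approximation of f̂(n)
n = np.fft.fftfreq(N, d=1.0 / N)     # integer frequencies -N/2 .. N/2 - 1

h1_fourier = 2 * np.pi * np.sum((1 + n**2) * np.abs(fhat) ** 2)
h1_direct = (2 * np.pi / N) * np.sum(f**2 + df**2)

print(h1_fourier, h1_direct)         # both equal 2π for f = sin
```

For trigonometric polynomials the uniform-grid sums are exact, so both quantities agree to machine precision.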
Furthermore, the space $H^k$ admits an inner product, like the space $H^0 = L^2$. In fact, the $H^k$ inner product is defined in terms of the $L^2$ inner product:
$$\langle u, v \rangle_{H^k} = \sum_{i=0}^{k} \left\langle D^i u, D^i v \right\rangle_{L^2}.$$
The space $H^k$ becomes a Hilbert space with this inner product.
In one dimension, some other Sobolev spaces permit a simpler description. For example, $W^{1,1}(0,1)$ is the space of absolutely continuous functions on (0, 1) (or rather, equivalence classes of functions that are equal almost everywhere to such), while $W^{1,\infty}(I)$ is the space of Lipschitz functions on $I$, for every interval $I$. However, these properties are lost or not as simple for functions of more than one variable.
All spaces $W^{k,\infty}$ are (normed) algebras, i.e. the product of two elements is once again a function of this Sobolev space, which is not the case for $p < \infty$. (E.g., functions behaving like $|x|^{-1/3}$ at the origin are in $L^2$, but the product of two such functions is not in $L^2$.)
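This failure of the algebra property for $p < \infty$ can be verified symbolically; a minimal sketch with SymPy, checking integrability of $x^{-2/3}$ and $x^{-4/3}$ near the origin:

```python
from sympy import symbols, integrate, oo, Rational

# f(x) = x^{-1/3} near 0: ∫ f^2 is finite, but ∫ (f·f)^2 = ∫ x^{-4/3}
# diverges, so the product of two L^2 functions need not be in L^2.
x = symbols('x', positive=True)

f_sq = integrate(x ** Rational(-2, 3), (x, 0, 1))    # ∫ |f|^2 dx = 3
ff_sq = integrate(x ** Rational(-4, 3), (x, 0, 1))   # ∫ |f·f|^2 dx = ∞

print(f_sq, ff_sq)
```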
The transition to multiple dimensions brings more difficulties, starting from the very definition. The requirement that $f^{(k-1)}$ be the integral of $f^{(k)}$ does not generalize, and the simplest solution is to consider derivatives in the sense of distribution theory.
A formal definition now follows. Let $k$ be a natural number and $1 \le p \le \infty$. The Sobolev space $W^{k,p}(\Omega)$ is defined to be the set of all functions $u$ on $\Omega$ such that for every multi-index $\alpha$ with $|\alpha| \le k$, the mixed partial derivative
$$u^{(\alpha)} = \frac{\partial^{|\alpha|} u}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}$$
exists in the weak sense and is in $L^p(\Omega)$, i.e.
$$\left\| u^{(\alpha)} \right\|_{L^p} < \infty.$$
That is, the Sobolev space $W^{k,p}(\Omega)$ is defined as
$$W^{k,p}(\Omega) = \left\{ u \in L^p(\Omega) : D^\alpha u \in L^p(\Omega) \text{ for all } |\alpha| \le k \right\}.$$
The natural number $k$ is called the order of the Sobolev space $W^{k,p}(\Omega)$.
There are several choices for a norm for $W^{k,p}(\Omega)$. The following two are common and are equivalent in the sense of equivalence of norms:
$$\|u\|_{W^{k,p}(\Omega)} := \begin{cases} \Big( \sum_{|\alpha| \le k} \left\| D^\alpha u \right\|_{L^p(\Omega)}^p \Big)^{1/p}, & 1 \le p < \infty; \\[2pt] \max_{|\alpha| \le k} \left\| D^\alpha u \right\|_{L^\infty(\Omega)}, & p = \infty; \end{cases}$$
and
$$\|u\|'_{W^{k,p}(\Omega)} := \begin{cases} \sum_{|\alpha| \le k} \left\| D^\alpha u \right\|_{L^p(\Omega)}, & 1 \le p < \infty; \\[2pt] \sum_{|\alpha| \le k} \left\| D^\alpha u \right\|_{L^\infty(\Omega)}, & p = \infty. \end{cases}$$
With respect to either of these norms, $W^{k,p}(\Omega)$ is a Banach space. For $p < \infty$, $W^{k,p}(\Omega)$ is also a separable space. It is conventional to denote $W^{k,2}(\Omega)$ by $H^k(\Omega)$, for it is a Hilbert space with the norm $\|\cdot\|_{W^{k,2}(\Omega)}$.
It is rather hard to work with Sobolev spaces relying only on their definition. It is therefore useful to know that, by the theorem of Meyers and Serrin, a function $u \in W^{k,p}(\Omega)$ can be approximated by smooth functions. This fact often allows us to translate properties of smooth functions to Sobolev functions. If $p$ is finite and $\Omega$ is open, then there exists for any $u \in W^{k,p}(\Omega)$ an approximating sequence of functions $u_m \in C^\infty(\Omega) \cap W^{k,p}(\Omega)$ such that:
$$\left\| u_m - u \right\|_{W^{k,p}(\Omega)} \to 0.$$
If $\Omega$ has Lipschitz boundary, we may even assume that the $u_m$ are the restriction of smooth functions with compact support on all of $\mathbb{R}^n$.
In higher dimensions, it is no longer true that, for example, $W^{1,1}$ contains only continuous functions. For example, $1/|x| \in W^{1,1}(B^3)$, where $B^3$ is the unit ball in three dimensions. For $k > n/p$ the space $W^{k,p}(B^n)$ will contain only continuous functions, but for which k this is already true depends both on p and on the dimension. For example, as can be easily checked using spherical polar coordinates, for the function $f(x) = |x|^{-\alpha}$ defined on the n-dimensional ball we have:
$$f \in W^{k,p}(B^n) \iff \alpha < \frac{n}{p} - k.$$
Intuitively, the blow-up of f at 0 "counts for less" when n is large since the unit ball has "more outside and less inside" in higher dimensions.
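The radial criterion behind this dimension count can be checked symbolically. The sketch below (assuming SymPy) tests whether $|D^k f| \sim r^{-\alpha-k}$ is $p$-integrable against the spherical volume element $r^{n-1}\,dr$ near the origin, which is the condition $\alpha < n/p - k$:

```python
from sympy import symbols, integrate, oo, Rational

# For f(x) = |x|^{-α} on the n-ball, |D^k f| ~ r^{-α-k}; in spherical
# coordinates, p-integrability against r^{n-1} dr near r = 0 holds
# iff (α + k)p < n, i.e. α < n/p - k.
r = symbols('r', positive=True)

def in_Wkp(alpha, k, p, n):
    return integrate(r ** (-(alpha + k) * p + n - 1), (r, 0, 1)) != oo

# n = 3, p = 2, k = 1: the threshold is α < 3/2 - 1 = 1/2.
print(in_Wkp(Rational(1, 3), 1, 2, 3))   # α = 1/3 → in W^{1,2}(B^3)
print(in_Wkp(Rational(2, 3), 1, 2, 3))   # α = 2/3 → not in W^{1,2}(B^3)
```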
Let $1 \le p \le \infty$. If a function is in $W^{1,p}(\Omega)$, then, possibly after modifying the function on a set of measure zero, the restriction to almost every line parallel to the coordinate directions in $\mathbb{R}^n$ is absolutely continuous; what's more, the classical derivatives along the lines that are parallel to the coordinate directions are in $L^p(\Omega)$. Conversely, if the restriction of $f$ to almost every line parallel to the coordinate directions is absolutely continuous, then the pointwise gradient $\nabla f$ exists almost everywhere, and $f$ is in $W^{1,p}(\Omega)$ provided $f, |\nabla f| \in L^p(\Omega)$. In particular, in this case the weak partial derivatives of $f$ and the pointwise partial derivatives of $f$ agree almost everywhere. The ACL characterization of the Sobolev spaces was established by Otto M. Nikodym (1933); see (Maz'ya 1985, §1.1.3).
A stronger result holds when $p > n$. A function in $W^{1,p}(\Omega)$ is then, after modifying on a set of measure zero, Hölder continuous of exponent $\gamma = 1 - n/p$, by Morrey's inequality. In particular, if $p = \infty$, then the function is Lipschitz continuous.
The Sobolev space $W^{1,2}(\Omega)$ is also denoted by $H^1(\Omega)$. It is a Hilbert space, with an important subspace $H_0^1(\Omega)$ defined to be the closure, in $H^1(\Omega)$, of the infinitely differentiable functions compactly supported in $\Omega$. The Sobolev norm defined above reduces here to
$$\|f\|_{H^1} = \left( \int_\Omega \left( |f|^2 + |\nabla f|^2 \right) dx \right)^{1/2}.$$
When $\Omega$ has a regular boundary, $H_0^1(\Omega)$ can be described as the space of functions in $H^1(\Omega)$ that vanish at the boundary, in the sense of traces (see below). When $n = 1$, if $\Omega = (a, b)$ is a bounded interval, then $H_0^1(a, b)$ consists of continuous functions on $[a, b]$ of the form
$$f(x) = \int_a^x f'(t)\, dt, \qquad x \in [a, b],$$
where the generalized derivative $f'$ is in $L^2(a, b)$ and has 0 integral, so that $f(a) = f(b) = 0$.
When $\Omega$ is bounded, the Poincaré inequality states that there is a constant $C = C(\Omega)$ such that:
$$\int_\Omega |f|^2\, dx \le C^2 \int_\Omega |\nabla f|^2\, dx, \qquad f \in H_0^1(\Omega).$$
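On $\Omega = (0,1)$ the best constant is known explicitly: $C = 1/\pi$, coming from the first Dirichlet eigenvalue $\lambda_1 = \pi^2$ of $-d^2/dx^2$. A numerical sketch (assuming NumPy; a finite-difference approximation, not a proof):

```python
import numpy as np

# Dirichlet Laplacian on (0, 1), discretized by central differences on the
# interior nodes; its smallest eigenvalue approximates λ_1 = π², and the
# best Poincaré constant on (0, 1) is C = 1/sqrt(λ_1) = 1/π.
N = 200
h = 1.0 / N
A = (np.diag(2.0 * np.ones(N - 1))
     - np.diag(np.ones(N - 2), 1)
     - np.diag(np.ones(N - 2), -1)) / h**2

lam1 = np.linalg.eigvalsh(A)[0]      # smallest discrete eigenvalue
print(lam1, np.pi**2)                # lam1 → π² as the grid is refined
```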
When $\Omega$ is bounded, the injection from $H_0^1(\Omega)$ to $L^2(\Omega)$ is compact. This fact plays a role in the study of the Dirichlet problem, and in the fact that there exists an orthonormal basis of $L^2(\Omega)$ consisting of eigenvectors of the Laplace operator (with Dirichlet boundary condition).
Sobolev spaces are often considered when investigating partial differential equations. It is essential to consider boundary values of Sobolev functions. If $u \in C(\overline{\Omega})$, those boundary values are described by the restriction $u|_{\partial\Omega}$. However, it is not clear how to describe values at the boundary for $u \in W^{1,p}(\Omega)$, as the n-dimensional measure of the boundary is zero. The following theorem resolves the problem:

Trace theorem. Assume Ω is bounded with Lipschitz boundary. Then there exists a bounded linear operator $T : W^{1,p}(\Omega) \to L^p(\partial\Omega)$ such that
$$Tu = u|_{\partial\Omega} \quad \text{for all } u \in W^{1,p}(\Omega) \cap C(\overline{\Omega}).$$

Tu is called the trace of u. Roughly speaking, this theorem extends the restriction operator to the Sobolev space $W^{1,p}(\Omega)$ for well-behaved Ω. Note that the trace operator T is in general not surjective, but for 1 < p < ∞ it maps continuously onto the Sobolev–Slobodeckij space $W^{1 - \frac{1}{p}, p}(\partial\Omega)$.
Intuitively, taking the trace costs 1/p of a derivative. The functions u in W1,p(Ω) with zero trace, i.e. Tu = 0, can be characterized by the equality
$$W_0^{1,p}(\Omega) = \left\{ u \in W^{1,p}(\Omega) : Tu = 0 \right\},$$
where $W_0^{1,p}(\Omega)$ is the closure of $C_c^\infty(\Omega)$ in $W^{1,p}(\Omega)$.
In other words, for Ω bounded with Lipschitz boundary, trace-zero functions in $W^{1,p}(\Omega)$ can be approximated by smooth functions with compact support.
For a natural number k and 1 < p < ∞ one can show (by using Fourier multipliers) that the space $W^{k,p}(\mathbb{R}^n)$ can equivalently be defined as
$$W^{k,p}(\mathbb{R}^n) = H^{k,p}(\mathbb{R}^n) := \left\{ f \in L^p(\mathbb{R}^n) : \mathcal{F}^{-1}\!\left[ \big(1 + |\xi|^2\big)^{k/2} \mathcal{F} f \right] \in L^p(\mathbb{R}^n) \right\},$$
with the norm
$$\|f\|_{H^{k,p}(\mathbb{R}^n)} := \left\| \mathcal{F}^{-1}\!\left[ \big(1 + |\xi|^2\big)^{k/2} \mathcal{F} f \right] \right\|_{L^p(\mathbb{R}^n)}.$$
This motivates Sobolev spaces with non-integer order, since in the above definition we can replace k by any real number s. The resulting spaces
$$H^{s,p}(\mathbb{R}^n) := \left\{ f \in L^p(\mathbb{R}^n) : \mathcal{F}^{-1}\!\left[ \big(1 + |\xi|^2\big)^{s/2} \mathcal{F} f \right] \in L^p(\mathbb{R}^n) \right\}$$
are called Bessel potential spaces (named after Friedrich Bessel). They are Banach spaces in general and Hilbert spaces in the special case p = 2.
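On the torus, the Fourier-multiplier definition reduces to applying the weight $(1 + n^2)^s$ to Fourier coefficients, and the order $s$ may indeed be any real number. A minimal NumPy sketch (the periodic analogue of $(1+|\xi|^2)^{s/2}$, for $p = 2$):

```python
import numpy as np

# ||f||_{H^s}^2 on the 1-torus via the multiplier (1 + n^2)^s acting on
# the Fourier coefficients; s may be any real number, not just an integer.
def hs_norm_sq(samples, s):
    N = len(samples)
    fhat = np.fft.fft(samples) / N
    n = np.fft.fftfreq(N, d=1.0 / N)
    return 2 * np.pi * np.sum((1 + n**2) ** s * np.abs(fhat) ** 2)

N = 1024
x = 2 * np.pi * np.arange(N) / N
f = np.sin(x)
# For f = sin only n = ±1 contribute, so ||f||_{H^s}^2 = π·2^s for any real s.
for s in (0.5, 1.0, 1.5):
    print(s, hs_norm_sq(f, s), np.pi * 2**s)
```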
For $s \ge 0$, $H^{s,p}(\Omega)$ is the set of restrictions of functions from $H^{s,p}(\mathbb{R}^n)$ to Ω, equipped with the norm
$$\|f\|_{H^{s,p}(\Omega)} := \inf \left\{ \|g\|_{H^{s,p}(\mathbb{R}^n)} : g \in H^{s,p}(\mathbb{R}^n),\ g|_\Omega = f \right\}.$$
Again, Hs,p(Ω) is a Banach space and in the case p = 2 a Hilbert space.
Using extension theorems for Sobolev spaces, it can be shown that Wk,p(Ω) = Hk,p(Ω) also holds in the sense of equivalent norms, if Ω is a domain with uniform Ck-boundary, k a natural number and 1 < p < ∞. By the embeddings
$$H^{k+1,p}(\mathbb{R}^n) \hookrightarrow H^{s',p}(\mathbb{R}^n) \hookrightarrow H^{s,p}(\mathbb{R}^n) \hookrightarrow H^{k,p}(\mathbb{R}^n), \qquad k \le s \le s' \le k+1,$$
the Bessel potential spaces form a continuous scale between the Sobolev spaces $W^{k,p}(\mathbb{R}^n)$. From an abstract point of view, the Bessel potential spaces occur as complex interpolation spaces of Sobolev spaces, i.e. in the sense of equivalent norms it holds that
$$\left[ W^{k,p}(\mathbb{R}^n), W^{k+1,p}(\mathbb{R}^n) \right]_\theta = H^{s,p}(\mathbb{R}^n), \qquad \text{where } s = (1-\theta)k + \theta(k+1) = k + \theta, \quad 0 \le \theta \le 1.$$
Another approach to defining fractional order Sobolev spaces arises from the idea of generalizing the Hölder condition to the $L^p$-setting. For $\Omega \subseteq \mathbb{R}^n$, $1 \le p < \infty$, $\theta \in (0,1)$ and $f \in L^p(\Omega)$, the Slobodeckij seminorm (roughly analogous to the Hölder seminorm) is defined by
$$[f]_{\theta,p,\Omega} := \left( \int_\Omega \int_\Omega \frac{|f(x) - f(y)|^p}{|x - y|^{\theta p + n}}\; dx\; dy \right)^{1/p}.$$
Let s > 0 be not an integer and set $\theta = s - \lfloor s \rfloor \in (0,1)$. Using the same idea as for the Hölder spaces, the Sobolev–Slobodeckij space $W^{s,p}(\Omega)$ is defined as
$$W^{s,p}(\Omega) := \left\{ f \in W^{\lfloor s \rfloor, p}(\Omega) : \sup_{|\alpha| = \lfloor s \rfloor} \left[ D^\alpha f \right]_{\theta,p,\Omega} < \infty \right\}.$$
It is a Banach space for the norm
$$\|f\|_{W^{s,p}(\Omega)} := \|f\|_{W^{\lfloor s \rfloor, p}(\Omega)} + \sup_{|\alpha| = \lfloor s \rfloor} \left[ D^\alpha f \right]_{\theta,p,\Omega}.$$
If $\Omega$ is suitably regular in the sense that there exist certain extension operators, then the Sobolev–Slobodeckij spaces also form a scale of Banach spaces, i.e. one has the continuous injections or embeddings
$$W^{k+1,p}(\Omega) \hookrightarrow W^{s',p}(\Omega) \hookrightarrow W^{s,p}(\Omega) \hookrightarrow W^{k,p}(\Omega), \qquad k \le s \le s' \le k+1.$$
There are examples of irregular Ω such that $W^{1,p}(\Omega)$ is not even a vector subspace of $W^{s,p}(\Omega)$ for 0 < s < 1 (see Example 9.1 of ).
From an abstract point of view, the spaces $W^{s,p}(\Omega)$ coincide with the real interpolation spaces of Sobolev spaces, i.e. in the sense of equivalent norms the following holds:
$$W^{s,p}(\Omega) = \left( W^{k,p}(\Omega), W^{k+1,p}(\Omega) \right)_{\theta,p}, \qquad k \in \mathbb{N},\ s \in (k, k+1),\ \theta = s - \lfloor s \rfloor.$$
Sobolev–Slobodeckij spaces play an important role in the study of traces of Sobolev functions. They are special cases of Besov spaces.
If $\Omega$ is a domain whose boundary is not too poorly behaved (e.g., if its boundary is a manifold, or satisfies the more permissive "cone condition") then there is an operator A mapping functions of $\Omega$ to functions of $\mathbb{R}^n$ such that:

1. $Au(x) = u(x)$ for almost every $x$ in $\Omega$, and
2. $A : W^{k,p}(\Omega) \to W^{k,p}(\mathbb{R}^n)$ is continuous for any $1 \le p \le \infty$ and integer $k$.
We will call such an operator A an extension operator for $\Omega$.
Extension operators are the most natural way to define $H^s(\Omega)$ for non-integer s (we cannot work directly on $\Omega$ since taking the Fourier transform is a global operation). We define $H^s(\Omega)$ by saying that $u \in H^s(\Omega)$ if and only if $Au \in H^s(\mathbb{R}^n)$. Equivalently, complex interpolation yields the same $H^s(\Omega)$ spaces so long as $\Omega$ has an extension operator. If $\Omega$ does not have an extension operator, complex interpolation is the only way to obtain the $H^s(\Omega)$ spaces.
As a result, the interpolation inequality still holds.
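One standard instance of the interpolation inequality, $\|u\|_{H^{1/2}}^2 \le \|u\|_{L^2}\,\|u\|_{H^1}$, can be spot-checked on the torus, where the $H^s$ norms reduce to weighted sums over Fourier coefficients. A NumPy sketch (the exponents here are an illustrative choice):

```python
import numpy as np

# Check ||u||_{H^{1/2}}^2 ≤ ||u||_{L^2} · ||u||_{H^1} for a random
# trigonometric polynomial with Fourier coefficients c_n, |n| ≤ 32.
rng = np.random.default_rng(0)
n = np.arange(-32, 33)
c = rng.standard_normal(n.size)      # random real Fourier coefficients

def hs_sq(s):
    return np.sum((1 + n**2) ** s * c**2)

lhs = hs_sq(0.5)
rhs = np.sqrt(hs_sq(0.0) * hs_sq(1.0))
print(lhs <= rhs)    # True: this is Cauchy–Schwarz in the frequency variable
```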
Like above, we define $H_0^s(\Omega)$ to be the closure in $H^s(\Omega)$ of the space $C_c^\infty(\Omega)$ of infinitely differentiable compactly supported functions. Given the definition of a trace, above, we may state the following theorem.
If $u \in H_0^s(\Omega)$, we may define its extension by zero in the natural way, namely
$$\tilde{u}(x) := \begin{cases} u(x) & x \in \Omega, \\ 0 & \text{otherwise}. \end{cases}$$
For f ∈ Lp(Ω) its extension by zero,
$$Ef := \begin{cases} f & \text{on } \Omega, \\ 0 & \text{otherwise,} \end{cases}$$
is an element of $L^p(\mathbb{R}^n)$. Furthermore,
$$\|Ef\|_{L^p(\mathbb{R}^n)} = \|f\|_{L^p(\Omega)}.$$
In the case of the Sobolev space W1,p(Ω) for 1 ≤ p ≤ ∞, extending a function u by zero will not necessarily yield an element of $W^{1,p}(\mathbb{R}^n)$. But if Ω is bounded with Lipschitz boundary (e.g. ∂Ω is C1), then for any bounded open set O such that Ω⊂⊂O (i.e. Ω is compactly contained in O), there exists a bounded linear operator
$$E : W^{1,p}(\Omega) \to W^{1,p}(\mathbb{R}^n),$$
such that for each $u \in W^{1,p}(\Omega)$: $Eu = u$ a.e. on Ω, Eu has compact support within O, and there exists a constant C depending only on p, Ω, O and the dimension n, such that
$$\|Eu\|_{W^{1,p}(\mathbb{R}^n)} \le C \|u\|_{W^{1,p}(\Omega)}.$$
We call Eu an extension of u to $\mathbb{R}^n$.
It is a natural question to ask if a Sobolev function is continuous or even continuously differentiable. Roughly speaking, sufficiently many weak derivatives (i.e. large k) result in a classical derivative. This idea is generalized and made precise in the Sobolev embedding theorem.
Write $W^{k,p}$ for the Sobolev space of some compact Riemannian manifold of dimension n. Here k can be any real number, and 1 ≤ p ≤ ∞. (For p = ∞ the Sobolev space $W^{k,\infty}$ is defined to be the Hölder space $C^{n,\alpha}$ where $k = n + \alpha$ and $0 < \alpha \le 1$.) The Sobolev embedding theorem states that if $k \ge m$ and $k - \tfrac{n}{p} \ge m - \tfrac{n}{q}$, then
$$W^{k,p} \subseteq W^{m,q}$$
and the embedding is continuous. Moreover, if $k > m$ and $k - \tfrac{n}{p} > m - \tfrac{n}{q}$, then the embedding is completely continuous (this is sometimes called Kondrachov's theorem or the Rellich–Kondrachov theorem). Functions in $W^{m,\infty}$ have all derivatives of order less than m continuous, so in particular this gives conditions on Sobolev spaces for various derivatives to be continuous. Informally these embeddings say that to convert an $L^p$ estimate to a boundedness estimate costs $1/p$ derivatives per dimension.
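The numerology of the embedding theorem reduces to comparing the quantities $k - n/p$. A tiny helper encoding just this bookkeeping (a hypothetical convenience function, not part of any library, and not a substitute for the theorem's hypotheses):

```python
import math

# Condition for W^{k,p} ⊆ W^{m,q} on an n-dimensional compact manifold:
# continuous embedding if k ≥ m and k - n/p ≥ m - n/q;
# compact (completely continuous) embedding under the strict inequalities.
def embeds(k, p, m, q, n, compact=False):
    if compact:
        return k > m and k - n / p > m - n / q
    return k >= m and k - n / p >= m - n / q

# On a 3-manifold, H^2 = W^{2,2} embeds into L^∞ = W^{0,∞}:
print(embeds(2, 2, 0, math.inf, 3))   # 2 - 3/2 = 1/2 ≥ 0 - 0 → True
```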
There are similar variations of the embedding theorem for non-compact manifolds such as $\mathbb{R}^n$ (Stein 1970). Sobolev embeddings on $\mathbb{R}^n$ that are not compact often have a related, but weaker, property of cocompactness.
In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of Lp spaces.
In mathematics, de Rham cohomology is a tool belonging both to algebraic topology and to differential topology, capable of expressing basic topological information about smooth manifolds in a form particularly adapted to computation and the concrete representation of cohomology classes. It is a cohomology theory based on the existence of differential forms with prescribed properties.
In mathematical analysis, a function of bounded variation, also known as BV function, is a real-valued function whose total variation is bounded (finite): the graph of a function having this property is well behaved in a precise sense. For a continuous function of a single variable, being of bounded variation means that the distance along the direction of the y-axis, neglecting the contribution of motion along x-axis, traveled by a point moving along the graph has a finite value. For a continuous function of several variables, the meaning of the definition is the same, except for the fact that the continuous path to be considered cannot be the whole graph of the given function, but can be every intersection of the graph itself with a hyperplane parallel to a fixed x-axis and to the y-axis.
In the mathematical field of analysis, the Nash–Moser theorem, discovered by mathematician John Forbes Nash and named for him and Jürgen Moser, is a generalization of the inverse function theorem on Banach spaces to settings when the required solution mapping for the linearized problem is not bounded.
In mathematical analysis, Trudinger's theorem or the Trudinger inequality is a result of functional analysis on Sobolev spaces. It is named after Neil Trudinger.
In complex analysis, functional analysis and operator theory, a Bergman space is a function space of holomorphic functions in a domain D of the complex plane that are sufficiently well-behaved at the boundary that they are absolutely integrable. Specifically, for 0 < p < ∞, the Bergman space Ap(D) is the space of all holomorphic functions $f$ in D for which the p-norm is finite:
$$\|f\|_{A^p(D)} := \left( \int_D |f(x + iy)|^p \, dx \, dy \right)^{1/p} < \infty.$$
In mathematics, a real or complex-valued function f on d-dimensional Euclidean space satisfies a Hölder condition, or is Hölder continuous, when there are nonnegative real constants C, α > 0, such that
$$|f(x) - f(y)| \le C \|x - y\|^\alpha$$
for all x and y in the domain of f.
In mathematics, Friedrichs's inequality is a theorem of functional analysis, due to Kurt Friedrichs. It places a bound on the Lp norm of a function using Lp bounds on the weak derivatives of the function and the geometry of the domain, and can be used to show that certain norms on Sobolev spaces are equivalent. Friedrichs's inequality is a general case of the Poincaré–Wirtinger inequality which deals with the case k = 1.
In mathematics, Gårding's inequality is a result that gives a lower bound for the bilinear form induced by a real linear elliptic partial differential operator. The inequality is named after Lars Gårding.
In mathematics, an elliptic boundary value problem is a special kind of boundary value problem which can be thought of as the stable state of an evolution problem. For example, the Dirichlet problem for the Laplacian gives the eventual distribution of heat in a room several hours after the heating is turned on.
In mathematics, the trace operator extends the notion of the restriction of a function to the boundary of its domain to "generalized" functions in a Sobolev space. This is particularly important for the study of partial differential equations with prescribed boundary conditions, where weak solutions may not be regular enough to satisfy the boundary conditions in the classical sense of functions.
In mathematics, particularly numerical analysis, the Bramble–Hilbert lemma, named after James H. Bramble and Stephen Hilbert, bounds the error of an approximation of a function $u$ by a polynomial of order at most $m - 1$ in terms of derivatives of $u$ of order $m$. Both the error of the approximation and the derivatives of $u$ are measured by $L^p$ norms on a bounded domain in $\mathbb{R}^n$. This is similar to classical numerical analysis, where, for example, the error of linear interpolation can be bounded using the second derivative of $u$. However, the Bramble–Hilbert lemma applies in any number of dimensions, not just one dimension, and the approximation error and the derivatives of $u$ are measured by more general norms involving averages, not just the maximum norm.
In mathematics, the p-Laplacian, or the p-Laplace operator, is a quasilinear elliptic partial differential operator of 2nd order. It is a nonlinear generalization of the Laplace operator, where $p$ is allowed to range over $1 < p < \infty$. It is written as
$$\Delta_p u := \nabla \cdot \left( |\nabla u|^{p-2} \nabla u \right).$$
In mathematics, the Besov space is a complete quasinormed space which is a Banach space when 1 ≤ p, q ≤ ∞. These spaces, as well as the similarly defined Triebel–Lizorkin spaces, serve to generalize more elementary function spaces such as Sobolev spaces and are effective at measuring regularity properties of functions.
In mathematics, the direct method in the calculus of variations is a general method for constructing a proof of the existence of a minimizer for a given functional, introduced by Zaremba and David Hilbert around 1900. The method relies on methods of functional analysis and topology. As well as being used to prove the existence of a solution, direct methods may be used to compute the solution to desired accuracy.
In mathematics, there are two different notions of semi-inner-product. The first, and more common, is that of an inner product which is not required to be strictly positive. This article will deal with the second, called an L-semi-inner product or semi-inner product in the sense of Lumer, which is an inner product not required to be conjugate symmetric. It was formulated by Günter Lumer, for the purpose of extending Hilbert space type arguments to Banach spaces in functional analysis. Fundamental properties were later explored by Giles.
In mathematics, Sobolev spaces for planar domains are one of the principal techniques used in the theory of partial differential equations for solving the Dirichlet and Neumann boundary value problems for the Laplacian in a bounded domain in the plane with smooth boundary. The methods use the theory of bounded operators on Hilbert space. They can be used to deduce regularity properties of solutions and to solve the corresponding eigenvalue problems.
In mathematics, Ladyzhenskaya's inequality is any of a number of related functional inequalities named after the Soviet Russian mathematician Olga Aleksandrovna Ladyzhenskaya. The original such inequality, for functions of two real variables, was introduced by Ladyzhenskaya in 1958 to prove the existence and uniqueness of long-time solutions to the Navier–Stokes equations in two spatial dimensions. There is an analogous inequality for functions of three real variables, but the exponents are slightly different; much of the difficulty in establishing existence and uniqueness of solutions to the three-dimensional Navier–Stokes equations stems from these different exponents. Ladyzhenskaya's inequality is one member of a broad class of inequalities known as interpolation inequalities.
In mathematics, the symmetrization methods are algorithms for transforming a set $A$ into a ball $B$ with equal volume and centered at the origin; $B$ is called the symmetrized version of $A$, usually denoted $A^*$. These algorithms show up in solving the classical isoperimetric inequality problem, which asks: given all two-dimensional shapes of a given area, which of them has the minimal perimeter? The conjectured answer was the disk, and Steiner in 1838 showed this to be true using the Steiner symmetrization method. From this many other isoperimetric problems sprung, along with other symmetrization algorithms. For example, Rayleigh's conjecture is that the first eigenvalue of the Dirichlet problem is minimized for the ball. Another problem is that the Newtonian capacity of a set $A$ is minimized by $A^*$, and this was proved by Pólya and Szegő (1951) using circular symmetrization.
In the mathematical discipline of functional analysis, it is possible to generalize the notion of derivative to arbitrary topological vector spaces (TVSs) in multiple ways. But when the domain of a TVS-valued function is a subset of finite-dimensional Euclidean space, the number of generalizations of the derivative is much more limited and derivatives are more well behaved. This article presents the theory of $k$-times continuously differentiable functions on an open subset $\Omega$ of Euclidean space $\mathbb{R}^n$, which is an important special case of differentiation between arbitrary TVSs. All vector spaces will be assumed to be over the field $\mathbb{F}$, where $\mathbb{F}$ is either the real numbers $\mathbb{R}$ or the complex numbers $\mathbb{C}$.