Whitney extension theorem


In mathematics, in particular in mathematical analysis, the Whitney extension theorem is a partial converse to Taylor's theorem. Roughly speaking, the theorem asserts that if A is a closed subset of a Euclidean space, then it is possible to extend a given function defined on A in such a way as to have prescribed derivatives at the points of A. It is a result of Hassler Whitney.


Statement

A precise statement of the theorem requires careful consideration of what it means to prescribe the derivative of a function on a closed set. One difficulty, for instance, is that closed subsets of Euclidean space in general lack a differentiable structure. The starting point, then, is an examination of the statement of Taylor's theorem.

Given a real-valued Cm function f(x) on Rn, Taylor's theorem asserts that for each a, x, y ∈ Rn, there is a function Rα(x,y) approaching 0 uniformly as x, y → a such that

    f(x) = \sum_{|\alpha| \le m} \frac{D^\alpha f(y)}{\alpha!}\,(x-y)^\alpha + \sum_{|\alpha| = m} R_\alpha(x,y)\,\frac{(x-y)^\alpha}{\alpha!}        (1)

where the sum is over multi-indices α.

Let fα = Dαf for each multi-index α. Differentiating (1) with respect to x, and possibly replacing R as needed, yields

    f_\alpha(x) = \sum_{|\beta| \le m - |\alpha|} \frac{f_{\alpha+\beta}(y)}{\beta!}\,(x-y)^\beta + R_\alpha(x,y)        (2)

where Rα is o(|x − y|^(m−|α|)) uniformly as x, y → a.

Note that (2) may be regarded as purely a compatibility condition between the functions fα, which must be satisfied in order for these functions to be the coefficients of the Taylor series of the function f. It is this insight which facilitates the following statement:

Theorem. Suppose that fα are a collection of functions on a closed subset A of Rn for all multi-indices α with |α| ≤ m satisfying the compatibility condition (2) at all points x, y, and a of A. Then there exists a function F(x) of class Cm such that:

  1. F = f0 on A.
  2. DαF = fα on A for all |α| ≤ m.
  3. F is real-analytic at every point of Rn ∖ A.
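As a concrete numerical illustration (an addition here, not part of Whitney's argument), condition (2) can be checked for the jets of a familiar smooth function. The sketch below takes f = sin on R with m = 2, so fα = Dαf for α = 0, 1, 2, and verifies that the remainders Rα, normalized by |x − y|^(m−|α|), shrink as x, y → a = 0:

```python
import math

# Jets of f(x) = sin(x) on R (n = 1), with m = 2: f_alpha = D^alpha f.
f = [math.sin, math.cos, lambda x: -math.sin(x)]

def remainder(alpha, x, y, m=2):
    """R_alpha(x,y) = f_alpha(x) - sum_{beta <= m-alpha} f_{alpha+beta}(y)/beta! * (x-y)^beta."""
    taylor = sum(f[alpha + beta](y) / math.factorial(beta) * (x - y) ** beta
                 for beta in range(m - alpha + 1))
    return f[alpha](x) - taylor

# Condition (2): R_alpha = o(|x - y|^(m - alpha)) uniformly as x, y -> a.
# The normalized remainders should shrink as x, y approach a = 0.
for alpha in range(3):
    ratios = [abs(remainder(alpha, h, -h)) / (2 * h) ** (2 - alpha)
              for h in (1e-1, 1e-2, 1e-3)]
    assert ratios[0] > ratios[1] > ratios[2] > 0
    print(f"alpha={alpha}: normalized remainders {ratios}")
```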

Proofs are given in the original paper of Whitney (1934), and in Malgrange (1967), Bierstone (1980) and Hörmander (1990).

Extension in a half space

Seeley (1964) proved a sharpening of the Whitney extension theorem in the special case of a half space. A smooth function on the half space Rn,+ of points where xn ≥ 0 is a smooth function f on the interior xn > 0 for which the derivatives ∂αf extend to continuous functions on the half space. On the boundary xn = 0, f restricts to a smooth function. By Borel's lemma, f can be extended to a smooth function on the whole of Rn. Since Borel's lemma is local in nature, the same argument shows that if Ω is a (bounded or unbounded) domain in Rn with smooth boundary, then any smooth function on the closure of Ω can be extended to a smooth function on Rn.

Seeley's result for a half line gives a uniform extension map

    E : C^\infty(\mathbb{R}^{+}) \to C^\infty(\mathbb{R}),

which is linear, continuous (for the topology of uniform convergence of functions and their derivatives on compacta) and takes functions supported in [0,R] into functions supported in [−R,R].

To define E, set [1]

    E(f)(x) = \sum_{m=1}^{\infty} a_m\, f(-b_m x)\,\varphi(-b_m x), \qquad x < 0,

where φ is a smooth function of compact support on R equal to 1 near 0 and the sequences (am), (bm) satisfy:

  1. bm > 0 and bm tends to ∞;
  2. \sum a_m b_m^j = (-1)^j for all j ≥ 0, with the sum absolutely convergent.
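These conditions lend themselves to a numerical illustration with a finite truncation (a sketch only, not Seeley's full construction: the truncation order K, the test function exp, and the omission of the cutoff φ near 0 are choices made here). Taking bm = 2^m and solving the first K conditions Σ am bm^j = (−1)^j exactly over the rationals makes the reflected sum match the first K derivatives at 0:

```python
from fractions import Fraction
from math import exp

# Truncated Seeley coefficients: b_m = 2^m, and the first K moment conditions
#     sum_m a_m * b_m**j = (-1)**j,   j = 0, ..., K-1.
K = 5
b = [2 ** m for m in range(1, K + 1)]

# Exact Gaussian elimination over the rationals on the Vandermonde-type system.
V = [[Fraction(bm) ** j for bm in b] for j in range(K)]
rhs = [Fraction((-1) ** j) for j in range(K)]
for col in range(K):
    piv = next(r for r in range(col, K) if V[r][col] != 0)   # pivot row
    V[col], V[piv] = V[piv], V[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(col + 1, K):
        fac = V[r][col] / V[col][col]
        V[r] = [x - fac * y for x, y in zip(V[r], V[col])]
        rhs[r] -= fac * rhs[col]
a = [Fraction(0)] * K
for r in range(K - 1, -1, -1):                               # back substitution
    a[r] = (rhs[r] - sum(V[r][c] * a[c] for c in range(r + 1, K))) / V[r][r]

def E(f, x):
    """Extension of a smooth f on [0, inf) to x < 0 (cutoff phi omitted: |x| stays small)."""
    return f(x) if x >= 0 else sum(float(am) * f(-bm * x) for am, bm in zip(a, b))

assert sum(a) == 1                # j = 0 condition: the extension is continuous at 0
eps = 1e-4                        # value and first derivative of E(exp) match across 0
assert abs(E(exp, -eps) - exp(-eps)) < 1e-8
assert abs((E(exp, eps) - E(exp, -eps)) / (2 * eps) - 1.0) < 1e-2
```

With only K conditions the extension matches derivatives up to order K − 1 rather than all orders; the full construction needs the infinite sequences together with the cutoff φ.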

A solution to this system of equations can be obtained by taking bm = 2^m and seeking an entire function

    g(z) = \sum_{m \ge 1} a_m z^m

such that g(2^j) = (−1)^j for each j ≥ 0. That such a function can be constructed follows from the Weierstrass theorem and Mittag-Leffler theorem. [2]

It can be seen directly by setting [3]

    W(z) = \prod_{j \ge 1} \left(1 - \frac{z}{2^j}\right),

an entire function with simple zeros at 2^j. The derivatives W′(2^j) are bounded above and below. Similarly the function

    M(z) = \sum_{j \ge 1} \frac{(-1)^j}{W'(2^j)\,(z - 2^j)}

is meromorphic with simple poles and prescribed residues at 2^j.

By construction

    g(z) = W(z)\,M(z)

is an entire function with the required properties.
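This construction can be checked numerically with truncated versions of the product and sum (the truncation order N and the small evaluation offset below are illustration choices, not from the source):

```python
from math import prod

N = 20  # truncation order

def W(z):
    """Truncated Weierstrass-type product with simple zeros at 2, 4, 8, ..., 2**N."""
    return prod(1 - z / 2 ** j for j in range(1, N + 1))

def W_prime(j):
    """W'(2**j): the vanishing factor contributes -1/2**j; the others are evaluated at 2**j."""
    return (-1 / 2 ** j) * prod(1 - 2 ** j / 2 ** i for i in range(1, N + 1) if i != j)

def M(z):
    """Truncated Mittag-Leffler-type sum: simple poles at 2**j, residues (-1)**j / W'(2**j)."""
    return sum((-1) ** j / (W_prime(j) * (z - 2 ** j)) for j in range(1, N + 1))

def g(z):
    return W(z) * M(z)  # the zeros of W cancel the poles of M

# Just off each pole, the product approaches the prescribed value g(2**j) = (-1)**j.
for j in range(1, 6):
    z = 2 ** j * (1 + 1e-7)
    assert abs(g(z) - (-1) ** j) < 1e-4
```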

The definition of the extension for a half space in Rn is obtained by applying the half-line operator in the last variable xn. Similarly, using a smooth partition of unity and a local change of variables, the result for a half space implies the existence of an analogous extending map

    C^\infty(\overline{\Omega}) \to C^\infty(\mathbb{R}^n)

for any domain Ω in Rn with smooth boundary.


Notes

  1. Bierstone 1980, p. 143
  2. Ponnusamy & Silverman 2006, pp. 442–443
  3. Chazarain & Piriou 1982


References