Non-analytic smooth function

In mathematics, smooth functions (also called infinitely differentiable functions) and analytic functions are two very important types of functions. One can easily prove that any analytic function of a real argument is smooth. The converse is not true, as demonstrated with the counterexample below.

One of the most important applications of smooth functions with compact support is the construction of so-called mollifiers, which are important in theories of generalized functions, such as Laurent Schwartz's theory of distributions.

The existence of smooth but non-analytic functions represents one of the main differences between differential geometry and analytic geometry. In terms of sheaf theory, this difference can be stated as follows: the sheaf of differentiable functions on a differentiable manifold is fine, in contrast with the analytic case.

The functions below are generally used to build up partitions of unity on differentiable manifolds.

An example function

Definition of the function

Figure: The non-analytic smooth function f(x) considered in the article.

Consider the function

$$f(x) = \begin{cases} e^{-1/x} & \text{if } x > 0, \\ 0 & \text{if } x \le 0, \end{cases}$$

defined for every real number x.
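
A minimal numerical sketch (the function name f is mine, not the article's) of this piecewise definition, evaluated at a few points approaching 0 from the right, already hints at how flat the function is near the origin:

    import math

    def f(x: float) -> float:
        # exp(-1/x) for x > 0, and 0 for x <= 0
        return math.exp(-1.0 / x) if x > 0 else 0.0

    for x in [1.0, 0.5, 0.1, 0.05, 0.01]:
        print(f"f({x}) = {f(x):.3e}")
    # f(0.01) is roughly 3.7e-44: the graph hugs the x-axis just to the right of 0.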

The function is smooth

The function f has continuous derivatives of all orders at every point x of the real line. The formula for these derivatives is

$$f^{(n)}(x) = \begin{cases} \dfrac{p_n(x)}{x^{2n}}\, f(x) & \text{if } x > 0, \\ 0 & \text{if } x \le 0, \end{cases}$$

where pn(x) is a polynomial of degree n − 1 given recursively by p1(x) = 1 and

$$p_{n+1}(x) = x^2\, p_n'(x) - (2nx - 1)\, p_n(x)$$

for any positive integer n. From this formula, it is not completely clear that the derivatives are continuous at 0; this follows from the one-sided limit

$$\lim_{x \searrow 0} \frac{e^{-1/x}}{x^m} = 0$$

for any nonnegative integer m.
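
The recursion can be checked symbolically. The following short sympy sketch (an illustration of mine, not part of the article) differentiates the claimed closed form for x > 0 and compares the result with the order-(n + 1) formula built from the recursion:

    import sympy as sp

    x = sp.symbols('x', positive=True)

    def claimed(p, n):
        # the claimed n-th derivative for x > 0: p_n(x) / x^(2n) * exp(-1/x)
        return p / x**(2 * n) * sp.exp(-1 / x)

    p_n = sp.Integer(1)                       # p_1(x) = 1
    for n in range(1, 6):
        p_next = sp.expand(x**2 * sp.diff(p_n, x) - (2 * n * x - 1) * p_n)
        lhs = sp.diff(claimed(p_n, n), x)     # differentiate the order-n formula
        rhs = claimed(p_next, n + 1)          # order-(n+1) formula via the recursion
        print(n + 1, sp.simplify(lhs - rhs) == 0)   # expect True each time
        p_n = p_next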

Detailed proof of smoothness

By the power series representation of the exponential function, we have for every natural number m (including zero)

$$\frac{1}{x^m} = x \cdot \frac{1}{x^{m+1}} \le x\,(m+1)! \sum_{n=0}^{\infty} \frac{1}{n!\, x^{n}} = x\,(m+1)!\, e^{1/x}, \qquad x > 0,$$

because all the positive terms for n ≠ m + 1 are added. Therefore, dividing this inequality by e^{1/x} and taking the limit from above,

$$\lim_{x \searrow 0} \frac{e^{-1/x}}{x^m} \le (m+1)! \lim_{x \searrow 0} x = 0.$$
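
For a quick sanity check of this limit (an ad hoc script, not from the article), one can tabulate e^(-1/x) / x^m for a few exponents m as x decreases to 0:

    import math

    for m in [1, 2, 5, 10]:
        for x in [0.1, 0.05, 0.02, 0.01]:
            value = math.exp(-1.0 / x) / x**m
            print(f"m = {m:2d}, x = {x:5.2f}: e^(-1/x)/x^m = {value:.3e}")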

We now prove the formula for the nth derivative of f by mathematical induction. Using the chain rule, the reciprocal rule, and the fact that the derivative of the exponential function is again the exponential function, we see that the formula is correct for the first derivative of f for all x > 0 and that p1(x) is a polynomial of degree 0. Of course, the derivative of f is zero for x < 0. It remains to show that the right-hand derivative of f at x = 0 is zero. Using the above limit, we see that

$$f'(0) = \lim_{x \searrow 0} \frac{f(x) - f(0)}{x - 0} = \lim_{x \searrow 0} \frac{e^{-1/x}}{x} = 0.$$

The induction step from n to n + 1 is similar. For x > 0 we get for the derivative

$$f^{(n+1)}(x) = \left( \frac{p_n'(x)}{x^{2n}} - 2n\,\frac{p_n(x)}{x^{2n+1}} + \frac{p_n(x)}{x^{2n+2}} \right) e^{-1/x} = \frac{x^2 p_n'(x) - (2nx - 1)\, p_n(x)}{x^{2(n+1)}}\, e^{-1/x} = \frac{p_{n+1}(x)}{x^{2(n+1)}}\, e^{-1/x},$$

where pn+1(x) is a polynomial of degree n = (n + 1) − 1. Of course, the (n + 1)st derivative of f is zero for x < 0. For the right-hand derivative of f^(n) at x = 0 we obtain, using the above limit,

$$\lim_{x \searrow 0} \frac{f^{(n)}(x) - f^{(n)}(0)}{x - 0} = \lim_{x \searrow 0} \frac{p_n(x)}{x^{2n+1}}\, e^{-1/x} = 0.$$

The function is not analytic

As seen earlier, the function f is smooth, and all its derivatives at the origin are 0. Therefore, the Taylor series of f at the origin converges everywhere to the zero function,

$$\sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\, x^n = \sum_{n=0}^{\infty} \frac{0}{n!}\, x^n = 0, \qquad x \in \mathbb{R},$$

and so the Taylor series does not equal f(x) for x > 0. Consequently, f is not analytic at the origin.
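
As a hedged numerical illustration (script and step sizes are mine): one-sided difference quotients at the origin are consistent with f'(0) = 0, while f(1) = e^(-1) is visibly nonzero, so no Taylor partial sum at 0 (all of which vanish identically) can reproduce f on x > 0.

    import math

    def f(x: float) -> float:
        return math.exp(-1.0 / x) if x > 0 else 0.0

    for h in [1e-1, 5e-2, 2e-2, 1e-2]:
        print(f"h = {h:6.3f}: f(h)/h = {f(h) / h:.3e}")   # tends to f'(0) = 0
    print("f(1) =", f(1.0), "but every Taylor partial sum at 0 gives 0 at x = 1")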

Smooth transition functions

Figure: The smooth transition g from 0 to 1 defined here.

The function

$$g(x) = \frac{f(x)}{f(x) + f(1 - x)}, \qquad x \in \mathbb{R},$$

has a strictly positive denominator everywhere on the real line, hence g is also smooth. Furthermore, g(x) = 0 for x ≤ 0 and g(x) = 1 for x ≥ 1, hence it provides a smooth transition from the level 0 to the level 1 in the unit interval [0, 1]. To have the smooth transition in the real interval [a, b] with a < b, consider the function

$$x \mapsto g\!\left(\frac{x - a}{b - a}\right).$$

For real numbers a < b < c < d, the smooth function

$$x \mapsto g\!\left(\frac{x - a}{b - a}\right)\, g\!\left(\frac{d - x}{d - c}\right)$$

equals 1 on the closed interval [b, c] and vanishes outside the open interval (a, d), hence it can serve as a bump function.
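
The following runnable sketch (the helper names transition and bump are mine, not the article's) implements g and the two constructions just described, and spot-checks a few values:

    import math

    def f(x: float) -> float:
        return math.exp(-1.0 / x) if x > 0 else 0.0

    def g(x: float) -> float:
        # smooth transition: 0 for x <= 0, 1 for x >= 1
        return f(x) / (f(x) + f(1.0 - x))

    def transition(x: float, a: float, b: float) -> float:
        # smooth transition from 0 to 1 on [a, b]
        return g((x - a) / (b - a))

    def bump(x: float, a: float, b: float, c: float, d: float) -> float:
        # equals 1 on [b, c], vanishes outside (a, d)
        return g((x - a) / (b - a)) * g((d - x) / (d - c))

    for x in [-1.0, 0.0, 0.25, 0.5, 0.75, 1.0, 2.0]:
        print(f"g({x:5.2f}) = {g(x):.6f}")
    print("bump inside [b, c]:", bump(1.5, a=1, b=1.25, c=1.75, d=2))    # expect 1.0
    print("bump outside (a, d):", bump(3.0, a=1, b=1.25, c=1.75, d=2))   # expect 0.0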

A smooth function which is nowhere real analytic

Figure: Approximation of the smooth-everywhere but nowhere-analytic function mentioned here, using a partial sum of the defining series.

A more pathological example is an infinitely differentiable function which is not analytic at any point. It can be constructed by means of a Fourier series as follows. Define for all x ∈ ℝ

$$F(x) := \sum_{k=0}^{\infty} e^{-\sqrt{2^k}} \cos\!\left(2^k x\right).$$

Since the series ∑_k e^{−√(2^k)} (2^k)^n converges for every n ∈ ℕ, this function is easily seen to be of class C∞, by a standard inductive application of the Weierstrass M-test to demonstrate uniform convergence of each series of derivatives.

We now show that F is not analytic at any dyadic rational multiple of π, that is, at any x := π · p · 2^(−q) with p ∈ ℤ and q ∈ ℕ. Since the sum of the first q terms is analytic, we need only consider F_{>q}, the sum of the terms with k > q. For all orders of derivation n = 2^m with m ∈ ℕ, m ≥ 2 and m > q, we have

$$F_{>q}^{(n)}(x) = \sum_{\substack{k \in \mathbb{N} \\ k > q}} e^{-\sqrt{2^k}} (2^k)^n \cos\!\left(2^k x\right) = \sum_{\substack{k \in \mathbb{N} \\ k > q}} e^{-\sqrt{2^k}} (2^k)^n \;\ge\; e^{-n}\, n^{2n},$$

where we used the fact that cos(2^k x) = 1 for all k > q, and we bounded the first sum from below by the single term with 2^k = 2^(2m) = n^2. As a consequence, at any such x

$$\limsup_{n \to \infty} \left( \frac{F_{>q}^{(n)}(x)}{n!} \right)^{\!1/n} = +\infty$$

(indeed e^{−n} n^(2n) / n! ≥ e^{−n} n^(2n) / n^n = (n/e)^n), so that the radius of convergence of the Taylor series of F_{>q} at x is 0 by the Cauchy–Hadamard formula. Since the set of analyticity of a function is an open set, and since dyadic rationals are dense, we conclude that F_{>q}, and hence F, is nowhere analytic in ℝ.
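
A partial sum of the series is easy to evaluate numerically. The sketch below (the cutoff K = 30 is an arbitrary choice of mine) gives plotting-level accuracy because the coefficients decay like e^(−√(2^k)):

    import math

    def F_partial(x: float, K: int = 30) -> float:
        # partial sum of F(x) = sum_{k>=0} exp(-sqrt(2^k)) * cos(2^k * x)
        return sum(math.exp(-math.sqrt(2.0**k)) * math.cos(2.0**k * x)
                   for k in range(K + 1))

    for x in [0.0, math.pi / 4, math.pi / 2, math.pi]:
        print(f"F({x:.4f}) ~ {F_partial(x):+.6f}")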

Application to Taylor series

For every sequence α0, α1, α2, . . . of real or complex numbers, the following construction shows the existence of a smooth function F on the real line which has these numbers as derivatives at the origin. [1] In particular, every sequence of numbers can appear as the coefficients of the Taylor series of a smooth function. This result is known as Borel's lemma, after Émile Borel.

With the smooth transition function g as above, define

$$h(x) := g(2 + x)\, g(2 - x), \qquad x \in \mathbb{R}.$$

This function h is also smooth; it equals 1 on the closed interval [−1, 1] and vanishes outside the open interval (−2, 2). Using h, define for every natural number n (including zero) the smooth function

$$\psi_n(x) := x^n\, h(x), \qquad x \in \mathbb{R},$$

which agrees with the monomial x^n on [−1, 1] and vanishes outside the interval (−2, 2). Hence, the k-th derivative of ψn at the origin satisfies

$$\psi_n^{(k)}(0) = \begin{cases} n! & \text{if } k = n, \\ 0 & \text{otherwise}, \end{cases} \qquad k, n \in \mathbb{N}_0,$$

and the boundedness theorem implies that ψn and every derivative of ψn is bounded. Therefore, the constants

$$\lambda_n := \max\bigl\{ 1,\, |\alpha_n|,\, \|\psi_n\|_\infty,\, \|\psi_n^{(1)}\|_\infty,\, \ldots,\, \|\psi_n^{(n)}\|_\infty \bigr\}, \qquad n \in \mathbb{N}_0,$$

involving the supremum norm of ψn and its first n derivatives, are well-defined real numbers. Define the scaled functions

$$f_n(x) := \frac{\alpha_n}{n!\, \lambda_n^{\,n}}\, \psi_n(\lambda_n x), \qquad n \in \mathbb{N}_0,\ x \in \mathbb{R}.$$

By repeated application of the chain rule,

$$f_n^{(k)}(x) = \frac{\alpha_n}{n!\, \lambda_n^{\,n-k}}\, \psi_n^{(k)}(\lambda_n x), \qquad k, n \in \mathbb{N}_0,\ x \in \mathbb{R},$$

and, using the previous result for the k-th derivative of ψn at zero,

$$f_n^{(k)}(0) = \begin{cases} \alpha_n & \text{if } k = n, \\ 0 & \text{otherwise}, \end{cases} \qquad k, n \in \mathbb{N}_0.$$

It remains to show that the function

$$F(x) := \sum_{n=0}^{\infty} f_n(x), \qquad x \in \mathbb{R},$$

is well defined and can be differentiated term-by-term infinitely many times. [2] To this end, observe that for every k

$$\sum_{n=0}^{\infty} \bigl\| f_n^{(k)} \bigr\|_\infty \le \sum_{n=0}^{k+1} \frac{|\alpha_n|}{n!\, \lambda_n^{\,n-k}} \bigl\| \psi_n^{(k)} \bigr\|_\infty + \sum_{n=k+2}^{\infty} \frac{1}{n!},$$

where the terms with n ≥ k + 2 are estimated using λn ≥ 1, λn ≥ |αn| and λn ≥ ‖ψn^(k)‖∞ (so that each such term is at most 1/n!), and where the remaining infinite series converges by the ratio test.
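
The construction can be imitated numerically for a finite partial sum. In the hedged sketch below, the names and the simplified choice λn = max(1, |αn|) are mine; for a finite sum any λn ≥ 1 reproduces the prescribed derivatives at 0 exactly near the origin, while the article's λn (involving sup norms of ψn and its derivatives) is what makes the full infinite series converge.

    import math

    def f(x):
        # exp(-1/x) for x > 0, else 0
        return math.exp(-1.0 / x) if x > 0 else 0.0

    def g(x):
        # smooth transition: 0 for x <= 0, 1 for x >= 1
        return f(x) / (f(x) + f(1.0 - x))

    def h(x):
        # smooth plateau: 1 on [-1, 1], 0 outside (-2, 2)
        return g(2.0 + x) * g(2.0 - x)

    def psi(n, x):
        # agrees with x**n on [-1, 1], vanishes outside (-2, 2)
        return x**n * h(x)

    def F_partial(x, alphas, lams):
        # finite analogue of F(x) = sum_n alpha_n / (n! lam_n^n) * psi_n(lam_n * x)
        return sum(a / (math.factorial(n) * lam**n) * psi(n, lam * x)
                   for n, (a, lam) in enumerate(zip(alphas, lams)))

    alphas = [2.0, -3.0, 5.0, 7.0]              # prescribed derivatives at the origin
    lams = [max(1.0, abs(a)) for a in alphas]   # simplified stand-in for lambda_n

    def F(t):
        return F_partial(t, alphas, lams)

    eps = 1e-3
    print(F(0.0))                                      # ~ alpha_0 = 2
    print((F(eps) - F(-eps)) / (2 * eps))              # ~ alpha_1 = -3
    print((F(eps) - 2 * F(0.0) + F(-eps)) / eps**2)    # ~ alpha_2 = 5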

Application to higher dimensions

Figure: The function Ψ1(x) in one dimension.

For every radius r > 0,

$$\Psi_r(x) := f\!\left(r^2 - \|x\|^2\right)$$

with Euclidean norm ‖x‖ defines a smooth function on n-dimensional Euclidean space with support in the ball of radius r, but Ψr(0) > 0.
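
A vectorized sketch (numpy-based, names mine) of Ψr on R^n; the inner where guards the division so the non-positive branch never evaluates 1/0:

    import numpy as np

    def f(t):
        # exp(-1/t) for t > 0, else 0 (vectorized)
        t = np.asarray(t, dtype=float)
        safe = np.where(t > 0, t, 1.0)
        return np.where(t > 0, np.exp(-1.0 / safe), 0.0)

    def Psi(x, r=1.0):
        # Psi_r(x) = f(r**2 - ||x||**2): smooth, supported in the ball of radius r
        x = np.asarray(x, dtype=float)
        return f(r**2 - np.sum(x**2, axis=-1))

    print(Psi([0.0, 0.0]))                      # exp(-1) ~ 0.36788 at the origin
    print(Psi([0.8, 0.8]))                      # ||x|| > 1: exactly 0
    print(Psi([[0.0, 0.0], [0.5, 0.0], [2.0, 0.0]]))   # a batch of points in R^2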

Complex analysis

This pathology cannot occur with differentiable functions of a complex variable rather than of a real variable. Indeed, all holomorphic functions are analytic, so that the failure of the function f defined in this article to be analytic in spite of its being infinitely differentiable is an indication of one of the most dramatic differences between real-variable and complex-variable analysis.

Note that although the function f has derivatives of all orders over the real line, the analytic continuation of f from the positive half-line x > 0 to the complex plane, that is, the function

$$z \mapsto e^{-1/z}, \qquad z \in \mathbb{C} \setminus \{0\},$$

has an essential singularity at the origin, and hence is not even continuous, much less analytic. By the great Picard theorem, it attains every complex value (with the exception of zero) infinitely many times in every neighbourhood of the origin.
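
A small numerical illustration (ad hoc, not from the article): already on a circle of radius 10^(-3) around the origin, the modulus of e^(-1/z) ranges from numerical zero to astronomically large values, as one expects near an essential singularity. The angles are chosen so the largest value stays within double-precision range.

    import cmath
    import math

    r = 1e-3
    for deg in [0, 45, 89, 91, 120]:
        z = r * cmath.exp(1j * math.radians(deg))
        w = cmath.exp(-1.0 / z)
        print(f"arg(z) = {deg:3d} deg: |exp(-1/z)| = {abs(w):.3e}")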

Notes

  1. Exercise 12 on page 418 in Walter Rudin, Real and Complex Analysis. McGraw-Hill, New Delhi 1980, ISBN 0-07-099557-5.
  2. See e.g. Chapter V, Section 2, Theorem 2.8 and Corollary 2.9 about the differentiability of the limits of sequences of functions in Amann, Herbert; Escher, Joachim (2005), Analysis I, Basel: Birkhäuser Verlag, pp. 373–374, ISBN 3-7643-7153-6.
