In mathematics, smooth functions (also called infinitely differentiable functions) and analytic functions are two very important types of functions. One can easily prove that any analytic function of a real argument is smooth. The converse is not true, as demonstrated with the counterexample below.
One of the most important applications of smooth functions with compact support is the construction of so-called mollifiers, which are important in theories of generalized functions, such as Laurent Schwartz's theory of distributions.
The existence of smooth but non-analytic functions represents one of the main differences between differential geometry and analytic geometry. In terms of sheaf theory, this difference can be stated as follows: the sheaf of differentiable functions on a differentiable manifold is fine, in contrast with the analytic case.
The functions below are generally used to build up partitions of unity on differentiable manifolds.
Consider the function

f(x) = e^(−1/x) for x > 0, and f(x) = 0 for x ≤ 0,

defined for every real number x.
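As a quick numerical illustration (assuming the standard choice f(x) = e^(−1/x) for x > 0 and f(x) = 0 otherwise, consistent with the analytic continuation e^(−1/z) discussed later in this article), a short sketch:

```python
import math

def f(x: float) -> float:
    """The classic smooth, non-analytic function: e^(-1/x) for x > 0, else 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

# f vanishes identically on the closed left half-line...
print(f(-1.0))  # 0.0
print(f(0.0))   # 0.0

# ...and decays faster than any power of x as x -> 0+,
# which is why all derivatives at the origin vanish:
for x in (0.5, 0.1, 0.01):
    print(x, f(x), f(x) / x**5)
```

The last column shows f(x)/x^5 collapsing to zero as x → 0⁺, a numerical hint of the one-sided limit used in the smoothness proof below.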
The function f has continuous derivatives of all orders at every point x of the real line. The formula for these derivatives is

f^(n)(x) = (p_n(x) / x^(2n)) · e^(−1/x) for x > 0, and f^(n)(x) = 0 for x ≤ 0,

where p_n(x) is a polynomial of degree n − 1 given recursively by p_1(x) = 1 and

p_(n+1)(x) = x²·p_n′(x) − (2nx − 1)·p_n(x)

for any positive integer n. From this formula, it is not completely clear that the derivatives are continuous at 0; this follows from the one-sided limit

lim_(x→0⁺) e^(−1/x) / x^m = 0

for any nonnegative integer m.
Detailed proof of smoothness

By the power series representation of the exponential function, we have for every natural number m (including zero)

1/x^m = x·(1/x)^(m+1) ≤ x·(m+1)!·Σ_(n=0)^∞ (1/x)^n / n! = x·(m+1)!·e^(1/x) for x > 0,

because all the positive terms for n ≠ m + 1 are added. Therefore, dividing this inequality by e^(1/x) and taking the limit from above,

lim_(x→0⁺) e^(−1/x) / x^m ≤ (m+1)! · lim_(x→0⁺) x = 0.

We now prove the formula for the nth derivative of f by mathematical induction. Using the chain rule, the reciprocal rule, and the fact that the derivative of the exponential function is again the exponential function, we see that

f′(x) = (1/x²)·e^(−1/x) for x > 0,

so the formula is correct for the first derivative of f for all x > 0, with p_1(x) = 1 a polynomial of degree 0. Of course, the derivative of f is zero for x < 0. It remains to show that the right-hand side derivative of f at x = 0 is zero. Using the above limit with m = 1, we see that

f′(0) = lim_(x→0⁺) (f(x) − f(0)) / x = lim_(x→0⁺) e^(−1/x) / x = 0.

The induction step from n to n + 1 is similar. For x > 0 we get for the derivative

f^(n+1)(x) = ( x²·p_n′(x) − (2nx − 1)·p_n(x) ) / x^(2(n+1)) · e^(−1/x) = p_(n+1)(x) / x^(2(n+1)) · e^(−1/x),

where p_(n+1)(x) = x²·p_n′(x) − (2nx − 1)·p_n(x) is a polynomial of degree n = (n + 1) − 1. Of course, the (n + 1)st derivative of f is zero for x < 0. For the right-hand side derivative of f^(n) at x = 0 we obtain with the above limit

lim_(x→0⁺) f^(n)(x) / x = lim_(x→0⁺) p_n(x)·e^(−1/x) / x^(2n+1) = 0.
As seen earlier, the function f is smooth, and all its derivatives at the origin are 0. Therefore, the Taylor series of f at the origin converges everywhere to the zero function,

Σ_(n=0)^∞ (f^(n)(0) / n!) · x^n = Σ_(n=0)^∞ 0·x^n = 0 for all x,

and so the Taylor series does not equal f(x) for x > 0. Consequently, f is not analytic at the origin.
The function

g(x) = f(x) / ( f(x) + f(1 − x) )

has a strictly positive denominator everywhere on the real line (for every x, at least one of the arguments x and 1 − x is positive), hence g is also smooth. Furthermore, g(x) = 0 for x ≤ 0 and g(x) = 1 for x ≥ 1, hence it provides a smooth transition from the level 0 to the level 1 on the unit interval [0, 1]. To have the smooth transition on the real interval [a, b] with a < b, consider the function

x ↦ g( (x − a) / (b − a) ).
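A minimal numeric sketch of this transition function, assuming g(x) = f(x)/(f(x) + f(1 − x)) with f(x) = e^(−1/x) for x > 0 and 0 otherwise:

```python
import math

def f(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

def g(x):
    # Smooth step: 0 for x <= 0, 1 for x >= 1, strictly increasing in between
    return f(x) / (f(x) + f(1.0 - x))

def smooth_step(x, a, b):
    # Smooth transition from 0 to 1 on [a, b], assuming a < b
    return g((x - a) / (b - a))

print(g(-1.0), g(0.5), g(2.0))     # 0.0 0.5 1.0
print(smooth_step(3.0, 2.0, 4.0))  # 0.5, the midpoint of [2, 4]
```

Note that the denominator f(x) + f(1 − x) is never zero: for every x at least one of the two summands is strictly positive.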
For real numbers a < b < c < d, the smooth function

x ↦ g( (x − a) / (b − a) ) · g( (d − x) / (d − c) )
equals 1 on the closed interval [b, c] and vanishes outside the open interval (a, d), hence it can serve as a bump function.
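A sketch of such a bump function, assuming the product form g((x − a)/(b − a)) · g((d − x)/(d − c)) with the transition function g as above:

```python
import math

def f(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

def g(x):
    return f(x) / (f(x) + f(1.0 - x))

def bump(x, a, b, c, d):
    # Smooth bump: equals 1 on [b, c], vanishes outside (a, d); assumes a < b < c < d
    return g((x - a) / (b - a)) * g((d - x) / (d - c))

a, b, c, d = 0.0, 1.0, 2.0, 3.0
print(bump(-1.0, a, b, c, d))  # 0.0 (outside (a, d))
print(bump(1.5, a, b, c, d))   # 1.0 (inside [b, c])
print(bump(0.5, a, b, c, d))   # strictly between 0 and 1 (transition region)
```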
A more pathological example is an infinitely differentiable function which is not analytic at any point. It can be constructed by means of a Fourier series as follows. Define for all x ∈ ℝ

F(x) := Σ_(n=0)^∞ e^(−√(2^n)) · cos(2^n·x).

Since the series Σ_(n=0)^∞ e^(−√(2^n)) · (2^n)^k converges for all k ∈ ℕ, this function is easily seen to be of class C^∞, by a standard inductive application of the Weierstrass M-test to demonstrate uniform convergence of each series of derivatives.
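Both the series and its M-test majorants can be evaluated numerically. A sketch, assuming the series takes the form F(x) = Σ_n e^(−√(2^n)) · cos(2^n·x):

```python
import math

def F(x, terms=25):
    # Partial sum of the (assumed) series F(x) = sum_n exp(-sqrt(2^n)) * cos(2^n * x)
    return sum(math.exp(-math.sqrt(2.0**n)) * math.cos(2.0**n * x)
               for n in range(terms))

def M_bound(k, terms=60):
    # Weierstrass majorant for the k-th derivative series:
    # sum_n exp(-sqrt(2^n)) * (2^n)^k, which converges for every k
    return sum(math.exp(-math.sqrt(2.0**n)) * (2.0**n)**k for n in range(terms))

print(F(0.0))
print(M_bound(3))  # finite, so the 3rd-derivative series converges uniformly
```

The coefficients e^(−√(2^n)) shrink faster than any power of 2^n grows, which is exactly why every differentiated series is uniformly dominated, yet (as shown next) not fast enough for the Taylor series to have a positive radius of convergence anywhere.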
We now show that F is not analytic at any dyadic rational multiple of π, that is, at any x := π·p·2^(−q) with p ∈ ℤ and q ∈ ℕ. Since the sum of the first q terms is analytic, we need only consider F_(>q), the sum of the terms with n > q. For all orders of derivation k = 2^m with m ∈ ℕ, m ≥ 2 and m > q, we have

F_(>q)^(k)(x) = Σ_(n>q) e^(−√(2^n)) · (2^n)^k · cos(2^n·x) = Σ_(n>q) e^(−√(2^n)) · (2^n)^k ≥ e^(−2^(k/2)) · 2^(k²),

where we used the fact that cos(2^n·x) = 1 for all n > q, and we bounded the sum from below by its term with n = k. As a consequence, at any such x,

limsup_(k→∞) ( |F^(k)(x)| / k! )^(1/k) = +∞,

so that the radius of convergence of the Taylor series of F at x is 0 by the Cauchy–Hadamard formula. Since the set of analyticity of a function is an open set, and since the dyadic rational multiples of π are dense, we conclude that F_(>q), and hence F, is nowhere analytic in ℝ.
For every sequence α_0, α_1, α_2, … of real or complex numbers, the following construction shows the existence of a smooth function F on the real line which has these numbers as derivatives at the origin. [1] In particular, every sequence of numbers can appear as the coefficients of the Taylor series of a smooth function. This result is known as Borel's lemma, after Émile Borel.
With the smooth transition function g as above, define

h(x) := g(2 + x) · g(2 − x).
This function h is also smooth; it equals 1 on the closed interval [−1, 1] and vanishes outside the open interval (−2, 2). Using h, define for every natural number n (including zero) the smooth function

ψ_n(x) := x^n · h(x),

which agrees with the monomial x^n on [−1, 1] and vanishes outside the interval (−2, 2). Hence, the k-th derivative of ψ_n at the origin satisfies

ψ_n^(k)(0) = n! if k = n, and 0 otherwise, for all k, n ∈ ℕ,
and the boundedness theorem implies that ψ_n and every derivative of ψ_n is bounded. Therefore, the constants

λ_n := max{ 1, |α_n|, ‖ψ_n‖_∞, ‖ψ_n^(1)‖_∞, …, ‖ψ_n^(n)‖_∞ },

involving the supremum norm of ψ_n and its first n derivatives, are well-defined real numbers. Define the scaled functions

f_n(x) := (α_n / (n!·λ_n^n)) · ψ_n(λ_n·x).
By repeated application of the chain rule,

f_n^(k)(x) = (α_n / (n!·λ_n^(n−k))) · ψ_n^(k)(λ_n·x) for all k ∈ ℕ,

and, using the previous result for the k-th derivative of ψ_n at zero,

f_n^(k)(0) = α_n if k = n, and 0 otherwise, for all k, n ∈ ℕ.
It remains to show that the function

F(x) := Σ_(n=0)^∞ f_n(x)

is well defined and can be differentiated term-by-term infinitely many times. [2] To this end, observe that for every k

Σ_(n=0)^∞ ‖f_n^(k)‖_∞ ≤ Σ_(n=0)^(k+1) (|α_n| / (n!·λ_n^(n−k))) · ‖ψ_n^(k)‖_∞ + Σ_(n=k+2)^∞ 1/n!,

where for n ≥ k + 2 we used |α_n| ≤ λ_n, ‖ψ_n^(k)‖_∞ ≤ λ_n and λ_n ≥ 1 to bound each term by 1/n!, and the remaining infinite series converges by the ratio test.
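The cutoff h and the functions ψ_n in this construction can be realized concretely. A minimal sketch, assuming the realizations h(x) = g(2 + x)·g(2 − x) (with g the smooth transition function from earlier) and ψ_n(x) = x^n·h(x):

```python
import math

def f(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

def g(x):
    return f(x) / (f(x) + f(1.0 - x))

def h(x):
    # Plateau function (an assumed realization): 1 on [-1, 1], 0 outside (-2, 2)
    return g(2.0 + x) * g(2.0 - x)

def psi(n, x):
    # psi_n(x) = x^n * h(x): agrees with x^n on [-1, 1], supported in (-2, 2)
    return x**n * h(x)

print(h(0.5), h(-1.0), h(2.5))  # 1.0 1.0 0.0
print(psi(3, 0.5), 0.5**3)      # identical: psi_3 equals x^3 on [-1, 1]
```

Because ψ_n coincides with x^n on a whole neighbourhood of 0, its derivatives at the origin are exactly those of the monomial, which is what makes the Borel construction work.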
For every radius r > 0,

Ψ_r(x) := f(r² − ‖x‖²)

with Euclidean norm ‖x‖ defines a smooth function on n-dimensional Euclidean space with support in the ball of radius r, but Ψ_r(x) > 0 for ‖x‖ < r.
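A sketch of this radial bump, assuming Ψ_r(x) = f(r² − ‖x‖²) with f as above:

```python
import math

def f(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

def radial_bump(point, r):
    # Psi_r(x) = f(r^2 - ||x||^2): smooth on R^n, strictly positive inside the
    # open ball of radius r, identically zero outside it
    norm_sq = sum(c * c for c in point)
    return f(r * r - norm_sq)

print(radial_bump((0.0, 0.0, 0.0), 1.0))  # f(1) = e^(-1), positive at the centre
print(radial_bump((2.0, 0.0, 0.0), 1.0))  # 0.0, outside the ball
```

Because f itself is smooth and vanishes on the non-positive half-line, the composition is smooth in any dimension, with no special treatment needed on the boundary sphere ‖x‖ = r.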
This pathology cannot occur with differentiable functions of a complex variable rather than of a real variable. Indeed, all holomorphic functions are analytic, so that the failure of the function f defined in this article to be analytic in spite of its being infinitely differentiable is an indication of one of the most dramatic differences between real-variable and complex-variable analysis.
Note that although the function f has derivatives of all orders over the real line, the analytic continuation of f from the positive half-line x > 0 to the complex plane, that is, the function

z ↦ e^(−1/z), z ∈ ℂ ∖ {0},
has an essential singularity at the origin, and hence is not even continuous, much less analytic. By the great Picard theorem, it attains every complex value (with the exception of zero) infinitely many times in every neighbourhood of the origin.
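The wild behaviour at the essential singularity is easy to observe numerically. A sketch, assuming the continuation is e^(−1/z), evaluating its modulus as z → 0 along three directions:

```python
import cmath

def f_cont(z):
    # Analytic continuation e^(-1/z) of f to the punctured complex plane
    return cmath.exp(-1.0 / z)

# Approaching the origin along different rays gives wildly different moduli,
# as expected at an essential singularity:
print(abs(f_cont(0.01 + 0j)))   # ~ e^(-100), essentially 0 (positive real axis)
print(abs(f_cont(-0.01 + 0j)))  # ~ e^(100), astronomically large (negative real axis)
print(abs(f_cont(0.01j)))       # modulus 1 (imaginary axis)
```

No limit of |e^(−1/z)| exists as z → 0, so the continuation cannot even be made continuous at the origin, in line with the Picard behaviour described above.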
In mathematics, the tangent space of a manifold is the higher-dimensional generalization of tangent lines to curves in two-dimensional space and tangent planes to surfaces in three-dimensional space. In the context of physics, the tangent space to a manifold at a point can be viewed as the space of possible velocities for a particle moving on the manifold.
In mathematics, a partition of unity of a topological space X is a set of continuous functions from X to the unit interval [0, 1] such that for every point x ∈ X: there is a neighbourhood of x where all but a finite number of the functions are 0, and the sum of all the function values at x is 1.
Distributions, also known as Schwartz distributions or generalized functions, are objects that generalize the classical notion of functions in mathematical analysis. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative.
In physics, engineering and mathematics, the Fourier transform (FT) is an integral transform that takes a function as input and outputs another function that describes the extent to which various frequencies are present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made, the output of the operation is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches.
In mathematics, a self-adjoint operator on a complex vector space V with inner product is a linear map A that is its own adjoint. If V is finite-dimensional with a given orthonormal basis, this is equivalent to the condition that the matrix of A is a Hermitian matrix, i.e., equal to its conjugate transpose A∗. By the finite-dimensional spectral theorem, V has an orthonormal basis such that the matrix of A relative to this basis is a diagonal matrix with entries in the real numbers. This article deals with applying generalizations of this concept to operators on Hilbert spaces of arbitrary dimension.
In mathematics, the Hermite polynomials are a classical orthogonal polynomial sequence.
In mathematics, the digamma function is defined as the logarithmic derivative of the gamma function:

ψ(x) = d/dx ln Γ(x) = Γ′(x) / Γ(x).
In mathematics, the covariant derivative is a way of specifying a derivative along tangent vectors of a manifold. Alternatively, the covariant derivative is a way of introducing and working with a connection on a manifold by means of a differential operator, to be contrasted with the approach given by a principal connection on the frame bundle – see affine connection. In the special case of a manifold isometrically embedded into a higher-dimensional Euclidean space, the covariant derivative can be viewed as the orthogonal projection of the Euclidean directional derivative onto the manifold's tangent space. In this case the Euclidean derivative is broken into two parts, the extrinsic normal component and the intrinsic covariant derivative component.
The Gram–Charlier A series and the Edgeworth series are series that approximate a probability distribution in terms of its cumulants. The series are the same, but the arrangement of their terms differs. The key idea of these expansions is to write the characteristic function of the distribution whose probability density function f is to be approximated in terms of the characteristic function of a distribution with known and suitable properties, and to recover f through the inverse Fourier transform.
In mathematics, a bump function is a function on a Euclidean space which is both smooth and compactly supported. The set of all bump functions with domain ℝ^n forms a vector space, denoted C_c^∞(ℝ^n) or C_0^∞(ℝ^n). The dual space of this space endowed with a suitable topology is the space of distributions.
In mathematics, the Lerch zeta function, sometimes called the Hurwitz–Lerch zeta function, is a special function that generalizes the Hurwitz zeta function and the polylogarithm. It is named after Czech mathematician Mathias Lerch, who published a paper about the function in 1887.
In mathematical analysis, the smoothness of a function is a property measured by the number, called differentiability class, of continuous derivatives it has over its domain.
In statistics and information theory, a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions. According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class, then the distribution with the largest entropy should be chosen as the least-informative default. The motivation is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physical systems tend to move towards maximal entropy configurations over time.
In mathematics, the von Mangoldt function is an arithmetic function named after German mathematician Hans von Mangoldt. It is an example of an important arithmetic function that is neither multiplicative nor additive.
In functional analysis, a branch of mathematics, it is sometimes possible to generalize the notion of the determinant of a square matrix of finite order (representing a linear transformation from a finite-dimensional vector space to itself) to the infinite-dimensional case of a linear operator S mapping a function space V to itself. The corresponding quantity det(S) is called the functional determinant of S.
In mathematics, especially functional analysis, a Fréchet algebra, named after Maurice René Fréchet, is an associative algebra A over the real or complex numbers that at the same time is also a Fréchet space. The multiplication operation (a, b) ↦ a·b is required to be jointly continuous. If (‖·‖_n) is an increasing family of seminorms for the topology of A, the joint continuity of multiplication is equivalent to there being, for each n, a constant C_n and an integer m_n such that ‖a·b‖_n ≤ C_n·‖a‖_(m_n)·‖b‖_(m_n) for all a, b ∈ A. Fréchet algebras are also called B₀-algebras.
In mathematics, the spectral theory of ordinary differential equations is the part of spectral theory concerned with the determination of the spectrum and eigenfunction expansion associated with a linear ordinary differential equation. In his dissertation, Hermann Weyl generalized the classical Sturm–Liouville theory on a finite closed interval to second order differential operators with singularities at the endpoints of the interval, possibly semi-infinite or infinite. Unlike the classical case, the spectrum may no longer consist of just a countable set of eigenvalues, but may also contain a continuous part. In this case the eigenfunction expansion involves an integral over the continuous part with respect to a spectral measure, given by the Titchmarsh–Kodaira formula. The theory was put in its final simplified form for singular differential equations of even degree by Kodaira and others, using von Neumann's spectral theorem. It has had important applications in quantum mechanics, operator theory and harmonic analysis on semisimple Lie groups.
In mathematics, moduli of smoothness are used to quantitatively measure smoothness of functions. Moduli of smoothness generalise modulus of continuity and are used in approximation theory and numerical analysis to estimate errors of approximation by polynomials and splines.
In mathematics, calculus on Euclidean space is a generalization of calculus of functions in one or several variables to calculus of functions on Euclidean space as well as a finite-dimensional real vector space. This calculus is also known as advanced calculus, especially in the United States. It is similar to multivariable calculus but is somewhat more sophisticated in that it uses linear algebra more extensively and covers some concepts from differential geometry such as differential forms and Stokes' formula in terms of differential forms. This extensive use of linear algebra also allows a natural generalization of multivariable calculus to calculus on Banach spaces or topological vector spaces.
Tau functions are an important ingredient in the modern mathematical theory of integrable systems, and have numerous applications in a variety of other domains. They were originally introduced by Ryogo Hirota in his direct method approach to soliton equations, based on expressing them in an equivalent bilinear form.