In complex analysis, Liouville's theorem, named after Joseph Liouville (although the theorem was first proven by Cauchy in 1844 [1]), states that every bounded entire function must be constant. That is, every holomorphic function $f$ for which there exists a positive number $M$ such that $|f(z)| \le M$ for all $z \in \mathbb{C}$ is constant. Equivalently, non-constant holomorphic functions on $\mathbb{C}$ have unbounded images.
The theorem is considerably improved by Picard's little theorem, which says that every entire function whose image omits two or more complex numbers must be constant.
Liouville's theorem: Every holomorphic function $f$ for which there exists a positive number $M$ such that $|f(z)| \le M$ for all $z \in \mathbb{C}$ is constant.
More succinctly, Liouville's theorem states that every bounded entire function must be constant.
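To illustrate why boundedness on all of $\mathbb{C}$, and not merely on the real axis, is essential, consider $\sin z$: writing $z = x + iy$,

$$|\sin z|^2 = \sin^2 x + \sinh^2 y,$$

so $\sin$ is bounded on $\mathbb{R}$ yet $|\sin(iy)| = \sinh|y| \to \infty$ as $|y| \to \infty$, and the theorem is not contradicted.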
This important theorem has several proofs.
A standard analytical proof uses the fact that holomorphic functions are analytic.
If $f$ is an entire function, it can be represented by its Taylor series about $0$:

$$f(z) = \sum_{k=0}^{\infty} a_k z^k,$$

where (by Cauchy's integral formula)

$$a_k = \frac{f^{(k)}(0)}{k!} = \frac{1}{2\pi i} \oint_{C_r} \frac{f(\zeta)}{\zeta^{k+1}} \, d\zeta$$

and $C_r$ is the circle about $0$ of radius $r > 0$. Suppose $f$ is bounded: i.e. there exists a constant $M$ such that $|f(z)| \le M$ for all $z$. We can estimate directly

$$|a_k| \le \frac{1}{2\pi} \oint_{C_r} \frac{|f(\zeta)|}{|\zeta|^{k+1}} \, |d\zeta| \le \frac{1}{2\pi} \oint_{C_r} \frac{M}{r^{k+1}} \, |d\zeta| = \frac{M}{2\pi r^{k+1}} \cdot 2\pi r = \frac{M}{r^k},$$

where in the second inequality we have used the fact that $|\zeta| = r$ on the circle $C_r$. (This estimate is known as Cauchy's estimate.) But the choice of $r$ above is an arbitrary positive number, and since $f$ is analytic on the entire plane we may let $r$ tend to infinity; this gives $a_k = 0$ for all $k \ge 1$. Thus $f(z) = a_0$, and this proves the theorem.
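The Cauchy estimate lends itself to a quick numerical sanity check. The sketch below is only an illustration: it assumes NumPy is available, and the test function $f(z) = \sin z$, the radii and the sample counts are arbitrary choices rather than part of the proof. It approximates each $a_k$ by discretizing the contour integral and verifies $|a_k| \le M(r)/r^k$, where $M(r)$ is the maximum of $|f|$ on $C_r$.

# Numerical sanity check of Cauchy's estimate |a_k| <= M(r)/r^k
# for the entire function f(z) = sin(z) (an illustrative choice).
import numpy as np

def taylor_coefficient(f, k, r, n=4096):
    # a_k = (1/(2*pi*i)) * integral over |z| = r of f(z)/z^(k+1) dz.
    # With z = r*exp(i*theta) we have dz = i*z*dtheta, so the integral
    # reduces to the average of f(z)/z^k over theta (trapezoid rule).
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = r * np.exp(1j * theta)
    return np.mean(f(z) / z**k)

f = np.sin
for r in (1.0, 5.0, 20.0):
    circle = r * np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 4096))
    M_r = np.abs(f(circle)).max()              # M(r) = max of |f| on C_r
    for k in range(4):
        a_k = abs(taylor_coefficient(f, k, r))
        print(f"r={r:5.1f}  k={k}  |a_k|={a_k:.6f}  M(r)/r^k={M_r / r**k:.6f}")
        assert a_k <= M_r / r**k + 1e-9        # Cauchy's estimate holds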
Another proof uses the mean value property of harmonic functions.
Given two points, choose two balls with the given points as centers and of equal radius. If the radius is large enough, the two balls will coincide except for an arbitrarily small proportion of their volume. Since $f$ is bounded, the averages of it over the two balls are arbitrarily close, and so $f$ assumes the same value at any two points.
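The step "the averages over the two balls are arbitrarily close" can be made quantitative. Writing $u$ for the bounded harmonic function in question (for a bounded entire function one may take its real and imaginary parts), $|u| \le M$, and $B_R(x)$ for the ball of radius $R$ centered at $x$, the mean value property gives, for $|x - y| \le R$,

$$|u(x) - u(y)| = \frac{1}{|B_R|} \left| \int_{B_R(x)} u - \int_{B_R(y)} u \right| \le \frac{M \, |B_R(x) \,\triangle\, B_R(y)|}{|B_R|} \le \frac{C M |x - y|}{R},$$

where $\triangle$ denotes the symmetric difference and $C$ is a dimensional constant; the right-hand side tends to $0$ as $R \to \infty$, so $u(x) = u(y)$.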
The proof can be adapted to the case where the harmonic function is merely bounded above or below. See Harmonic function#Liouville's theorem.
There is a short proof of the fundamental theorem of algebra using Liouville's theorem. [3]
Suppose for the sake of contradiction that there is a nonconstant polynomial $p$ with no complex root. Note that $|p(z)| \to \infty$ as $|z| \to \infty$. Hence there exist a constant $c > 0$ and a sufficiently large $R > 0$ such that $|p(z)| > c$ whenever $|z| \ge R$; take the ball $B = \{z : |z| < R\}$.
Because $p$ has no roots, the function $1/p$ is entire and holomorphic inside $B$, and thus it is also continuous on its closure $\overline{B}$. By the extreme value theorem, a continuous function on a closed and bounded set attains its extreme values, implying that $|1/p(z)| \le d$ for some constant $d$ and all $z \in \overline{B}$.
Thus, the function $1/p$ is bounded on all of $\mathbb{C}$ (by $\max(d, 1/c)$), and by Liouville's theorem it is constant, which contradicts our assumption that $p$ is nonconstant.
A consequence of the theorem is that "genuinely different" entire functions cannot dominate each other, i.e. if $f$ and $g$ are entire and $|f| \le |g|$ everywhere, then $f = \alpha \cdot g$ for some complex number $\alpha$. For $g = 0$ the claim is trivial, so we assume $g$ is not identically zero. Consider the function $h = f/g$. It is enough to prove that $h$ can be extended to an entire function, in which case the result follows by Liouville's theorem. The holomorphy of $h$ is clear except at points of $g^{-1}(0)$. But since $h$ is bounded by $1$ away from the zeroes of $g$, and those zeroes are isolated, any singularities must be removable. Thus $h$ can be extended to an entire bounded function, which by Liouville's theorem is constant.
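For instance, $|e^z/2| \le |e^z|$ holds everywhere and $e^z/2$ is indeed a constant multiple of $e^z$; by contrast, $\sin z$ is not a constant multiple of $e^z$, and consistently the bound $|\sin z| \le |e^z|$ fails: for real $z = x \to -\infty$, $|e^z| = e^x \to 0$ while $|\sin x|$ keeps returning to $1$.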
Suppose that $f$ is entire and $|f(z)| \le M|z|$ for all $z$, where $M$ is some positive real number. We can apply Cauchy's integral formula; for any $z$ and any $r > 0$ we have that

$$|f'(z)| = \left| \frac{1}{2\pi i} \oint_{C_r} \frac{f(\zeta)}{(\zeta - z)^2} \, d\zeta \right| \le \frac{1}{2\pi} \oint_{C_r} \frac{|f(\zeta)|}{|\zeta - z|^2} \, |d\zeta| \le \frac{1}{2\pi} \oint_{C_r} \frac{M |\zeta|}{|\zeta - z|^2} \, |d\zeta| = \frac{M I}{2\pi},$$

where $C_r$ is the circle of radius $r$ centered at $z$ and $I$ is the value of the remaining integral. On $C_r$ we have $|\zeta - z| = r$ and $|\zeta| \le |z| + r$, so $I \le 2\pi(|z| + r)/r$, which stays bounded as $r \to \infty$; taking $r \ge |z|$ gives $|f'(z)| \le 2M$ for every $z$. This shows that $f'$ is bounded and entire, so it must be constant, by Liouville's theorem. Integrating then shows that $f$ is affine and then, by referring back to the original inequality, we have that the constant term is zero, so $f(z) = az$ for some complex number $a$.
The theorem can also be used to deduce that the domain of a non-constant elliptic function $f$ cannot be $\mathbb{C}$. Suppose it was. Then, if $a$ and $b$ are two periods of $f$ such that $a/b$ is not real, consider the parallelogram $P$ whose vertices are $0$, $a$, $b$, and $a + b$. Then the image of $f$ is equal to $f(P)$. Since $f$ is continuous and $P$ is compact, $f(P)$ is also compact and, therefore, it is bounded. So, $f$ is constant.
The fact that the domain of a non-constant elliptic function $f$ cannot be $\mathbb{C}$ is what Liouville actually proved, in 1847, using the theory of elliptic functions. [4] In fact, it was Cauchy who proved Liouville's theorem. [5] [6]
If $f$ is a non-constant entire function, then its image is dense in $\mathbb{C}$. This might seem to be a much stronger result than Liouville's theorem, but it is actually an easy corollary. If the image of $f$ is not dense, then there exist a complex number $w$ and a real number $r > 0$ such that the open disk centered at $w$ with radius $r$ contains no element of the image of $f$. Define

$$g(z) = \frac{1}{f(z) - w}.$$

Then $g$ is a bounded entire function, since for all $z$,

$$|g(z)| = \frac{1}{|f(z) - w|} < \frac{1}{r}.$$

So, $g$ is constant, and therefore $f$ is constant.
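For example, the image of the non-constant entire function $e^z$ is $\mathbb{C} \setminus \{0\}$, which is dense in $\mathbb{C}$ even though it omits the single point $0$; by Picard's little theorem mentioned above, one omitted value is the most a non-constant entire function can have.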
Any holomorphic function on a compact Riemann surface is necessarily constant. [7]
Let $f$ be holomorphic on a compact Riemann surface $X$. By compactness, there is a point $p_0 \in X$ where $|f|$ attains its maximum. Then we can find a chart $\varphi$ from a neighborhood of $p_0$ to the unit disk such that $f \circ \varphi^{-1}$ is holomorphic on the unit disk and attains its maximum modulus at the interior point $\varphi(p_0)$, so it is constant, by the maximum modulus principle. Hence $f$ is constant near $p_0$, and (for connected $X$) constant on all of $X$ by the identity theorem.
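For example, the Riemann sphere $\mathbb{C} \cup \{\infty\}$ is a compact Riemann surface, and in this case the statement also follows directly from Liouville's theorem: a function holomorphic on the whole sphere is continuous on a compact set, hence bounded, and its restriction to $\mathbb{C}$ is therefore a bounded entire function, which must be constant.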
Let $\mathbb{C} \cup \{\infty\}$ be the one-point compactification of the complex plane $\mathbb{C}$. In place of holomorphic functions defined on regions in $\mathbb{C}$, one can consider regions in $\mathbb{C} \cup \{\infty\}$. Viewed this way, the only possible singularity for an entire function $f$, defined on $\mathbb{C} \subset \mathbb{C} \cup \{\infty\}$, is the point $\infty$. If $f$ is bounded in a neighborhood of $\infty$, then $\infty$ is a removable singularity of $f$, i.e. $f$ cannot blow up or behave erratically at $\infty$. In light of the power series expansion, it is not surprising that Liouville's theorem holds.
Similarly, if an entire function $f$ has a pole of order $n$ at $\infty$, that is, it grows in magnitude comparably to $|z|^n$ in some neighborhood of $\infty$, then $f$ is a polynomial. This extended version of Liouville's theorem can be more precisely stated: if $|f(z)| \le M|z|^n$ for $|z|$ sufficiently large, then $f$ is a polynomial of degree at most $n$. This can be proved as follows. Again take the Taylor series representation of $f$,

$$f(z) = \sum_{k=0}^{\infty} a_k z^k.$$

The argument used in the proof above, based on Cauchy's estimate, shows that for all $k \ge 0$ and all sufficiently large $r$,

$$|a_k| \le \frac{M r^n}{r^k}.$$

So, if $k > n$, then

$$|a_k| \le \lim_{r \to \infty} \frac{M r^n}{r^k} = 0.$$

Therefore, $a_k = 0$ for every $k > n$, and $f$ is a polynomial of degree at most $n$.
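As a sample consequence of the same estimate (not spelled out in the text above): an entire function satisfying $|f(z)| \le M|z|^{3/2}$ for all sufficiently large $|z|$ has $a_k = 0$ for every $k \ge 2$, so it is a polynomial of degree at most $1$; and taking $n = 0$ recovers Liouville's theorem itself.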
Liouville's theorem does not extend to the generalizations of complex numbers known as double numbers and dual numbers. [8]