Koebe quarter theorem

In complex analysis, a branch of mathematics, the Koebe 1/4 theorem states the following:

Koebe Quarter Theorem. The image of an injective analytic function $f\colon \mathbf{D} \to \mathbb{C}$ from the unit disk $\mathbf{D}$ onto a subset of the complex plane contains the disk whose center is $f(0)$ and whose radius is $|f'(0)|/4$.
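Stated for normalized functions (an equivalent reformulation, given here for concreteness): if $f$ is univalent on the unit disk $\mathbf{D}$ with $f(0) = 0$ and $f'(0) = 1$, then

$$\{\, w \in \mathbb{C} : |w| < \tfrac14 \,\} \subseteq f(\mathbf{D}).$$

The general statement follows by applying this normalized form to $(f(z) - f(0))/f'(0)$.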

The theorem is named after Paul Koebe, who conjectured the result in 1907. It was proven by Ludwig Bieberbach in 1916. The example of the Koebe function shows that the constant $\tfrac14$ in the theorem cannot be improved (increased).

A related result is the Schwarz lemma, and a notion related to both is conformal radius.

Grönwall's area theorem

Suppose that

$$g(z) = z + b_1 z^{-1} + b_2 z^{-2} + \cdots$$

is univalent in $|z| > 1$. Then

$$\sum_{n=1}^{\infty} n |b_n|^2 \le 1.$$

In fact, if $r > 1$, the complement of the image of the disk $|z| > r$ is a bounded domain $X(r)$. Its area is given by

$$\int_{X(r)} dx\,dy = \frac{1}{2i}\oint_{|z|=r} \overline{w}\,dw = \pi r^2 - \pi \sum_{n\ge 1} n |b_n|^2 r^{-2n}.$$

Since the area is positive, the result follows by letting $r$ decrease to $1$. The above proof shows that equality holds if and only if the complement of the image of $g$ has zero area, i.e. Lebesgue measure zero.
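The middle step in the chain of equalities above is the Green's theorem expression of area as a boundary integral; the final step is a term-by-term computation, sketched here for convenience. Writing $z = re^{i\theta}$ on the circle $|z| = r$,

$$\frac{1}{2i}\oint_{|z|=r} \overline{g}\, g'\, dz = \frac{1}{2i}\int_0^{2\pi} \Big( re^{-i\theta} + \sum_{n\ge 1} \overline{b_n}\, r^{-n} e^{in\theta} \Big) \Big( 1 - \sum_{m\ge 1} m\, b_m\, r^{-m-1} e^{-i(m+1)\theta} \Big)\, i r e^{i\theta}\, d\theta,$$

and only the terms of total frequency zero survive the integration, giving $\pi r^2 - \pi \sum_{n\ge 1} n |b_n|^2 r^{-2n}$.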

This result was proved in 1914 by the Swedish mathematician Thomas Hakon Grönwall.

Koebe function

The Koebe function is defined by

$$f(z) = \frac{z}{(1-z)^2} = \sum_{n=1}^{\infty} n z^n.$$

Application of the theorem to this function shows that the constant $\tfrac14$ in the theorem cannot be improved, as the image domain $f(\mathbf{D})$ does not contain the point $-\tfrac14$ and so cannot contain any disk centred at $0$ with radius larger than $\tfrac14$.
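The omitted value can be seen directly (a short computation, included for illustration): since

$$\frac{z}{(1-z)^2} = \frac14\left( \left(\frac{1+z}{1-z}\right)^2 - 1 \right),$$

and $z \mapsto \frac{1+z}{1-z}$ maps $\mathbf{D}$ conformally onto the right half-plane, whose image under squaring is $\mathbb{C} \setminus (-\infty, 0]$, the Koebe function maps $\mathbf{D}$ onto $\mathbb{C} \setminus (-\infty, -\tfrac14]$. In particular $-\tfrac14$ is omitted, and no larger disk centred at $0$ fits inside the image.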

The rotated Koebe function is

$$f_\alpha(z) = \frac{z}{(1-\alpha z)^2} = \sum_{n=1}^{\infty} n \alpha^{n-1} z^n,$$

with $\alpha$ a complex number of absolute value $1$. The Koebe function and its rotations are schlicht: that is, univalent (analytic and one-to-one) and satisfying $f(0) = 0$ and $f'(0) = 1$.

Bieberbach's coefficient inequality for univalent functions

Let

$$f(z) = z + a_2 z^2 + a_3 z^3 + \cdots$$

be univalent in $|z| < 1$. Then

$$|a_2| \le 2.$$

This follows by applying Grönwall's area theorem to the odd univalent function

$$g(z) = f(z^{-2})^{-1/2} = z - \tfrac12 a_2 z^{-1} + \cdots.$$

Equality holds if and only if $f$ is a rotated Koebe function.
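The deduction from the area theorem is immediate and is recorded here for completeness: $g$ is univalent in $|z| > 1$ with expansion $g(z) = z + b_1 z^{-1} + \cdots$, where $b_1 = -\tfrac12 a_2$, so Grönwall's inequality gives

$$\left|\tfrac12 a_2\right|^2 \le \sum_{n\ge 1} n |b_n|^2 \le 1,$$

hence $|a_2| \le 2$.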

This result was proved by Ludwig Bieberbach in 1916 and provided the basis for his celebrated conjecture that $|a_n| \le n$, proved in 1985 by Louis de Branges.

Proof of quarter theorem

Applying an affine map, it can be assumed that

$$f(0) = 0, \qquad f'(0) = 1,$$

so that

$$f(z) = z + a_2 z^2 + \cdots.$$

In particular, the coefficient inequality gives $|a_2| \le 2$. If $w$ is not in $f(\mathbf{D})$, then

$$h(z) = \frac{w\, f(z)}{w - f(z)} = z + \Big( a_2 + \frac{1}{w} \Big) z^2 + \cdots$$

is univalent in $|z| < 1$.

Applying the coefficient inequality to $f$ and $h$ gives

$$\frac{1}{|w|} = \Big| \Big( a_2 + \frac{1}{w} \Big) - a_2 \Big| \le \Big| a_2 + \frac{1}{w} \Big| + |a_2| \le 4,$$

so that

$$|w| \ge \frac14.$$
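One point used silently above may be worth spelling out: $h$ is univalent because it is a Möbius transformation composed with $f$,

$$h = T \circ f, \qquad T(\zeta) = \frac{w\,\zeta}{w - \zeta},$$

and $T$ is injective wherever it is defined. Since $w \notin f(\mathbf{D})$, the denominator never vanishes on $f(\mathbf{D})$, so $h$ is an injective analytic function on $\mathbf{D}$, normalized so that $h(0) = 0$ and $h'(0) = 1$, and the coefficient inequality applies to it.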

Koebe distortion theorem

The Koebe distortion theorem gives a series of bounds for a univalent function and its derivative. It is a direct consequence of Bieberbach's inequality for the second coefficient and the Koebe quarter theorem.[1]

Let $f$ be a univalent function on $|z| < 1$ normalized so that $f(0) = 0$ and $f'(0) = 1$, and let $r = |z|$. Then

$$\frac{r}{(1+r)^2} \le |f(z)| \le \frac{r}{(1-r)^2},$$

$$\frac{1-r}{(1+r)^3} \le |f'(z)| \le \frac{1+r}{(1-r)^3},$$

$$\frac{1-r}{1+r} \le \left| \frac{z f'(z)}{f(z)} \right| \le \frac{1+r}{1-r},$$

with equality if and only if $f$ is a Koebe function

$$f(z) = \frac{z}{(1 - e^{i\theta} z)^2}.$$
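For a quick check of sharpness, one can evaluate the Koebe function on the real axis (a direct computation): for $f(z) = z/(1-z)^2$ and $0 < r < 1$,

$$|f(r)| = \frac{r}{(1-r)^2}, \qquad |f(-r)| = \frac{r}{(1+r)^2}, \qquad |f'(r)| = \frac{1+r}{(1-r)^3}, \qquad |f'(-r)| = \frac{1-r}{(1+r)^3},$$

so the upper and lower bounds in the first two inequalities are attained.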

Notes

  1. Pommerenke 1975, pp. 21–22.

References

Pommerenke, Christian (1975). Univalent Functions. Göttingen: Vandenhoeck & Ruprecht.