In mathematics, the Banach fixed-point theorem (also known as the contraction mapping theorem or contractive mapping theorem or Banach–Caccioppoli theorem) is an important tool in the theory of metric spaces; it guarantees the existence and uniqueness of fixed points of certain self-maps of metric spaces and provides a constructive method to find those fixed points. It can be understood as an abstract formulation of Picard's method of successive approximations. [1] The theorem is named after Stefan Banach (1892–1945), who first stated it in 1922. [2] [3]
Definition. Let (X, d) be a metric space. Then a map T : X → X is called a contraction mapping on X if there exists q ∈ [0, 1) such that
d(T(x), T(y)) ≤ q d(x, y)
for all x, y ∈ X.
Banach fixed-point theorem. Let (X, d) be a non-empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed point x* in X (i.e. T(x*) = x*). Furthermore, x* can be found as follows: start with an arbitrary element x₀ ∈ X and define a sequence (xₙ) by xₙ = T(xₙ₋₁) for n ≥ 1. Then limₙ→∞ xₙ = x*.
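The iterative construction in the theorem can be sketched in a few lines of Python (the helper name `banach_iterate`, the tolerance, and the example map cos are illustrative choices made here, not part of the theorem):

```python
import math

# A minimal sketch of the iteration in the theorem: repeatedly apply T,
# starting from an arbitrary x0, until successive iterates are close.
# The helper name, tolerance, and example map are illustrative choices.

def banach_iterate(T, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = T(x_n) until d(x_{n+1}, x_n) < tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter steps")

# Example: T = cos is a contraction on [0, 1] (|T'(x)| = |sin x| <= sin 1 < 1
# there, and cos maps [0, 1] into itself), so the iterates converge to the
# unique fixed point of cos (~0.739085).
fixed = banach_iterate(math.cos, 1.0)
```

Stopping when successive iterates are within `tol` is justified by the a posteriori error estimate in Remark 1 below: closeness of consecutive iterates controls the distance to the fixed point.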
Remark 1. The following inequalities are equivalent and describe the speed of convergence:
d(x*, xₙ) ≤ (qⁿ/(1 − q)) d(x₁, x₀),
d(x*, xₙ₊₁) ≤ (q/(1 − q)) d(xₙ₊₁, xₙ),
d(x*, xₙ₊₁) ≤ q d(x*, xₙ).
Any such value of q is called a Lipschitz constant for T, and the smallest one is sometimes called "the best Lipschitz constant" of T.
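The a priori estimate can be checked numerically. The sketch below uses the map cos on [0, 1] with q = sin(1) as its Lipschitz constant; the map and constants are examples chosen here for illustration, not taken from the article:

```python
import math

# Numerical illustration of the a priori estimate
#   d(x*, x_n) <= q**n / (1 - q) * d(x1, x0)
# for T = cos on [0, 1], where q = sin(1) is a valid Lipschitz
# constant because |cos'(x)| = |sin x| <= sin(1) on that interval.

T = math.cos
q = math.sin(1.0)          # Lipschitz constant of cos on [0, 1]
x0 = 1.0
x1 = T(x0)

# Approximate the fixed point x* by iterating far past the range we test.
x_star = x0
for _ in range(200):
    x_star = T(x_star)

x = x0
for n in range(20):
    a_priori = q ** n / (1 - q) * abs(x1 - x0)
    assert abs(x_star - x) <= a_priori + 1e-12  # bound holds at every step
    x = T(x)
```

The bound decays like qⁿ, while the actual error here shrinks somewhat faster, since the local contraction rate near the fixed point is smaller than sin(1).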
Remark 2. The requirement d(T(x), T(y)) < d(x, y) for all distinct x, y is in general not enough to ensure the existence of a fixed point, as is shown by the map T : [1, ∞) → [1, ∞), T(x) = x + 1/x, which lacks a fixed point. However, if X is compact, then this weaker assumption does imply the existence and uniqueness of a fixed point, which can easily be found as a minimizer of x ↦ d(x, T(x)): indeed, a minimizer exists by compactness, and it has to be a fixed point of T. It then easily follows that the fixed point is the limit of any sequence of iterations of T.
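One standard map witnessing this remark is T(x) = x + 1/x on [1, ∞): it moves distinct points strictly closer together, yet T(x) − x = 1/x > 0 for every x, so no fixed point exists. A short numerical sketch:

```python
# T(x) = x + 1/x on [1, ∞) moves distinct points strictly closer:
# |T(x) - T(y)| = |x - y| * (1 - 1/(x*y)) < |x - y| for distinct x, y >= 1.
# Yet T(x) - x = 1/x > 0, so T has no fixed point.

def T(x):
    return x + 1.0 / x

x = 1.0
for _ in range(10):
    x = T(x)
# the iterates grow without bound (roughly like sqrt(2n)) instead of converging
```

The failure is consistent with the theorem: the infimum of 1 − 1/(x·y) over the domain is 0, so no single q < 1 works as a contraction constant.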
Remark 3. When using the theorem in practice, the most difficult part is typically to define X properly so that T(X) ⊆ X.
Proof. Let x₀ ∈ X be arbitrary and define a sequence (xₙ) by setting xₙ = T(xₙ₋₁). We first note that for all n ∈ ℕ we have the inequality
d(xₙ₊₁, xₙ) ≤ qⁿ d(x₁, x₀).
This follows by induction on n, using the fact that T is a contraction mapping. Then we can show that (xₙ) is a Cauchy sequence. In particular, let m, n ∈ ℕ be such that m > n:
d(xₘ, xₙ) ≤ d(xₘ, xₘ₋₁) + d(xₘ₋₁, xₘ₋₂) + ⋯ + d(xₙ₊₁, xₙ)
≤ (qᵐ⁻¹ + qᵐ⁻² + ⋯ + qⁿ) d(x₁, x₀)
≤ qⁿ (1 + q + q² + ⋯) d(x₁, x₀)
= qⁿ d(x₁, x₀) / (1 − q).
Let ε > 0 be arbitrary. Since q ∈ [0, 1), we can find a large N ∈ ℕ so that
q^N < ε(1 − q)/d(x₁, x₀).
(If x₁ = x₀, then x₀ is already a fixed point, so we may assume d(x₁, x₀) > 0.)
Therefore, by choosing m and n greater than N we may write:
d(xₘ, xₙ) ≤ qⁿ d(x₁, x₀)/(1 − q) < (ε(1 − q)/d(x₁, x₀)) · (d(x₁, x₀)/(1 − q)) = ε.
This proves that the sequence (xₙ) is Cauchy. By completeness of (X, d), the sequence has a limit x* ∈ X. Furthermore, x* must be a fixed point of T:
x* = limₙ→∞ xₙ = limₙ→∞ T(xₙ₋₁) = T(limₙ→∞ xₙ₋₁) = T(x*).
As a contraction mapping, T is continuous, so bringing the limit inside T was justified. Lastly, T cannot have more than one fixed point in (X, d), since any pair of distinct fixed points p₁ and p₂ would contradict the contraction of T:
d(p₁, p₂) = d(T(p₁), T(p₂)) ≤ q d(p₁, p₂) < d(p₁, p₂).
Several converses of the Banach contraction principle exist. The following is due to Czesław Bessaga, from 1959:
Let f : X → X be a map of an abstract set such that each iterate fⁿ has a unique fixed point. Let q ∈ (0, 1); then there exists a complete metric on X such that f is contractive, and q is the contraction constant.
Indeed, very weak assumptions suffice to obtain this kind of converse. For example, if f : X → X is a map on a T₁ topological space with a unique fixed point a, such that for each x ∈ X we have fⁿ(x) → a, then there already exists a metric on X with respect to which f satisfies the conditions of the Banach contraction principle with contraction constant 1/2. [8] In this case the metric is in fact an ultrametric.
There are a number of generalizations (some of which are immediate corollaries). [9]
Let T : X → X be a map on a complete non-empty metric space. Then, for example, some generalizations of the Banach fixed-point theorem are:
- Assume that some iterate Tⁿ of T is a contraction. Then T has a unique fixed point.
- Assume that for each n, there exists a cₙ such that d(Tⁿ(x), Tⁿ(y)) ≤ cₙ d(x, y) for all x and y, and that ∑ₙ cₙ < ∞. Then T has a unique fixed point.
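One common generalization, assuming only that some iterate Tⁿ of T is a contraction, can be illustrated with a small example constructed here (an affine map with a nilpotent linear part; the specific matrix and vector are illustrative choices):

```python
# Illustrative example (constructed here, not from the article) of the
# generalization where T itself is not a contraction but some iterate is.
# T(v) = A·v + b with A = [[0, 2], [0, 0]] and b = (1, 2): T doubles some
# distances, so it is not a contraction, but A·A = 0, so T∘T is the
# constant map v -> A·b + b, a contraction with constant q = 0.

def T(v):
    x, y = v
    return (2.0 * y + 1.0, 2.0)

# T nevertheless has a unique fixed point, solving v = A·v + b:
# y = 2 and x = 2*2 + 1 = 5.
v = (0.0, 0.0)
for _ in range(3):
    v = T(v)
# v is now exactly the fixed point (5.0, 2.0)
```

Because T² is constant here, the iterates reach the fixed point exactly after two steps; in general one only gets convergence of the iterates of Tⁿ, from which convergence for T follows.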
In applications, the existence and uniqueness of a fixed point often can be shown directly with the standard Banach fixed-point theorem, by a suitable choice of the metric that makes the map T a contraction. Indeed, the above result by Bessaga strongly suggests looking for such a metric. See also the article on fixed point theorems in infinite-dimensional spaces for generalizations.
In a non-empty compact metric space, any function T satisfying d(T(x), T(y)) < d(x, y) for all distinct x, y has a unique fixed point. The proof is simpler than that of the Banach theorem, because the function x ↦ d(x, T(x)) is continuous, and therefore assumes a minimum, which is easily shown to be zero.
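A small check of this compact-space variant (the map sin on [0, π/2] and the crude grid search are illustrative choices made here):

```python
import math

# T(x) = sin(x) on the compact interval [0, pi/2] satisfies
# |T(x) - T(y)| < |x - y| for distinct x, y, but is NOT a q-contraction
# for any q < 1, since its derivative cos(x) approaches 1 near 0.
# Its unique fixed point, 0, is the minimizer of
#   g(x) = d(x, T(x)) = x - sin(x).

def g(x):
    return x - math.sin(x)   # nonnegative and increasing on [0, pi/2]

grid = [i * (math.pi / 2) / 10000 for i in range(10001)]
x_min = min(grid, key=g)     # crude grid search for the minimizer
```

Here the minimum value g(0) = 0 confirms that the minimizer is a fixed point, exactly as the argument above predicts; note, however, that the plain Banach theorem does not apply, since no single contraction constant q < 1 works near 0.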
A different class of generalizations arises from suitable generalizations of the notion of metric space, e.g. by weakening the defining axioms for the notion of metric. [10] Some of these have applications, e.g., in the theory of programming semantics in theoretical computer science. [11]
An application of the Banach fixed-point theorem and fixed-point iteration can be used to quickly obtain an approximation of π with high accuracy. Consider the function f(x) = sin(x) + x. It can be verified that π is a fixed point of f, and that f maps the interval [3π/4, π] to itself. Moreover, f′(x) = 1 + cos(x), and it can be verified that
0 ≤ f′(x) ≤ 1 − √2/2 < 1
on this interval. Therefore, by an application of the mean value theorem, f has a Lipschitz constant less than 1 (namely 1 − √2/2). Applying the Banach fixed-point theorem shows that the fixed point π is the unique fixed point on the interval, allowing for fixed-point iteration to be used.
For example, the value 3 may be chosen to start the fixed-point iteration, as 3π/4 ≤ 3 ≤ π. The Banach fixed-point theorem may be used to conclude that
limₙ→∞ fⁿ(3) = π.
Applying f to 3 only three times already yields an expansion of π accurate to 33 digits:
f(f(f(3))) = 3.141592653589793238462643383279502…
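The iteration can be run directly (the function f(x) = sin(x) + x, the interval [3π/4, π], and the starting value 3 are the standard choices for this example):

```python
import math

# Running the fixed-point iteration for f(x) = sin(x) + x from x0 = 3.
# Near pi the convergence is in fact cubic, since f'(pi) = f''(pi) = 0:
# each step roughly cubes the error, which is why three applications
# suffice for dozens of digits.

def f(x):
    return math.sin(x) + x

x = 3.0
for _ in range(3):
    x = f(x)
# x now agrees with pi to the limits of double precision
```

Double-precision floats cannot display 33 digits, of course; to observe the full accuracy claimed above, the same iteration would have to be run in arbitrary-precision arithmetic (e.g. Python's `decimal` module with a custom sine).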
This article incorporates material from Banach fixed point theorem on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.