Fixed-point iteration

In numerical analysis, fixed-point iteration is a method of computing fixed points of a function.

More specifically, given a function f defined on the real numbers with real values and given a point x0 in the domain of f, the fixed-point iteration is

xn+1 = f(xn),  n = 0, 1, 2, ...

which gives rise to the sequence x0, x1, x2, ... of iterated function applications x0, f(x0), f(f(x0)), ... which is hoped to converge to a point xfix. If f is continuous, then one can prove that the obtained xfix is a fixed point of f, i.e., f(xfix) = xfix.

More generally, the function f can be defined on any metric space with values in that same space.
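
As a concrete illustration, here is a minimal Python sketch of the iteration; the helper name fixed_point_iteration, the tolerance, and the iteration cap are illustrative choices rather than anything prescribed by the method. It is run on the sine example pictured below.

```python
import math

def fixed_point_iteration(f, x0, tol=1e-6, max_iter=100_000):
    """Iterate x_{n+1} = f(x_n) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter steps")

# The example pictured below: x_{n+1} = sin(x_n) with x_0 = 2 converges
# to 0, but very slowly, since |sin'(0)| = 1 (sin is not a contraction).
print(fixed_point_iteration(math.sin, 2.0))
```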

Examples

The fixed-point iteration xn+1 = sin xn with initial value x0 = 2 converges to 0. This example does not satisfy the assumptions of the Banach fixed-point theorem and so its speed of convergence is very slow.

Attracting fixed points

The fixed-point iteration xn+1 = cos xn with initial value x1 = −1.

An attracting fixed point of a function f is a fixed point xfix of f with a neighborhood U of "close enough" points around xfix such that for any value of x in U, the fixed-point iteration sequence is contained in U and converges to xfix. The basin of attraction of xfix is the largest such neighborhood U.[1]

The natural cosine function ("natural" means in radians, not degrees or other units) has exactly one fixed point, and that fixed point is attracting. In this case, "close enough" is not a stringent criterion at all: to demonstrate this, start with any real number and repeatedly press the cos key on a calculator (checking first that the calculator is in "radians" mode). The displayed value eventually converges to the Dottie number (about 0.739085133), which is a fixed point. That is where the graph of the cosine function intersects the line y = x.[2]
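
The calculator experiment translates directly into a few lines of Python; the starting value and the iteration count here are arbitrary:

```python
import math

x = 1.0  # any real starting value works
for _ in range(100):  # "press the cos key" 100 times (radians mode)
    x = math.cos(x)
print(x)  # ~0.7390851332151607, the Dottie number
```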

Not all fixed points are attracting. For example, 0 is a fixed point of the function f(x) = 2x, but iteration of this function for any value other than zero rapidly diverges. We say that the fixed point of f(x) = 2x is repelling.

An attracting fixed point is said to be a stable fixed point if it is also Lyapunov stable.

A fixed point is said to be a neutrally stable fixed point if it is Lyapunov stable but not attracting. The center of a linear homogeneous differential equation of the second order is an example of a neutrally stable fixed point.

Multiple attracting points can be collected in an attracting fixed set.

Banach fixed-point theorem

The Banach fixed-point theorem gives a sufficient condition for the existence of attracting fixed points. A contraction mapping function f defined on a complete metric space has precisely one fixed point, and the fixed-point iteration is attracted towards that fixed point for any initial guess x0 in the domain of the function. Common special cases are that (1) f is defined on the real line with real values and is Lipschitz continuous with Lipschitz constant L < 1, and (2) the function f is continuously differentiable in an open neighbourhood of a fixed point xfix, and |f′(xfix)| < 1.
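
As a sketch of special case (1) restricted to an interval: cos maps [0, 1] into [cos 1, 1], a subset of [0, 1], and |cos′(x)| = |sin x| ≤ sin 1 ≈ 0.84 < 1 there, so cos is a contraction on the complete metric space [0, 1] and Banach's theorem applies. The starting points below are arbitrary:

```python
import math

# Banach's guarantee: a single fixed point attracts every initial guess
# in the space, here the interval [0, 1] on which cos is a contraction.
for x0 in (0.0, 0.3, 1.0):
    x = x0
    for _ in range(200):
        x = math.cos(x)
    print(x0, "->", x)  # every run prints the same value, the Dottie number
```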

Although there are other fixed-point theorems, this one in particular is very useful because not all fixed points are attracting. When constructing a fixed-point iteration, it is very important to make sure it converges to the fixed point. We can usually use the Banach fixed-point theorem to show that the fixed point is attracting.

Attractors

Attracting fixed points are a special case of the wider mathematical concept of attractors. A fixed-point iteration defines a discrete dynamical system on one variable. Bifurcation theory studies dynamical systems and classifies various behaviors such as attracting fixed points, periodic orbits, or strange attractors. An example system is the logistic map.

Iterative methods

In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the n-th approximation is derived from the previous ones. Convergent fixed-point iterations are mathematically rigorous formalizations of iterative methods.

Iterative method examples

  • Newton's method is a root-finding algorithm for finding roots of a given differentiable function f. The iteration is xn+1 = xn − f(xn)/f′(xn). If we write g(x) = x − f(x)/f′(x), we may rewrite the Newton iteration as the fixed-point iteration xn+1 = g(xn). If this iteration converges to a fixed point xfix of g, then xfix = g(xfix) = xfix − f(xfix)/f′(xfix), so f(xfix)/f′(xfix) = 0 and therefore f(xfix) = 0; that is, xfix is a root of f (see the sketch after this list). Under the assumptions of the Banach fixed-point theorem, the Newton iteration, framed as a fixed-point method, demonstrates at least linear convergence. More detailed analysis shows quadratic convergence, i.e., |xn+1 − xfix| ≤ C|xn − xfix|², under certain circumstances.
  • Halley's method is similar to Newton's method when it works correctly, but its error is cubic in the previous error, |xn+1 − xfix| ≤ C|xn − xfix|³ (cubic convergence). In general, it is possible to design methods that converge with order k for any k. As a general rule, the higher k is, the less stable the method is and the more computationally expensive each step becomes. For these reasons, higher-order methods are typically not used.
  • Runge–Kutta methods and numerical ordinary differential equation solvers in general can be viewed as fixed-point iterations. Indeed, the core idea when analyzing the A-stability of ODE solvers is to start with the special case y′ = ay, where a is a complex number, and to check whether the ODE solver converges to the fixed point y = 0 whenever the real part of a is negative.[a]
  • The Picard–Lindelöf theorem, which shows that ordinary differential equations have solutions, is essentially an application of the Banach fixed-point theorem to a special sequence of functions which forms a fixed-point iteration, constructing the solution to the equation. Solving an ODE in this way is called Picard iteration, Picard's method, or the Picard iterative process.
  • The iteration capability in Excel can be used to find solutions to the Colebrook equation to an accuracy of 15 significant figures.[3][4]
  • Some of the "successive approximation" schemes used in dynamic programming to solve Bellman's functional equation are based on fixed-point iterations in the space of the return function.[5][6]
  • The cobweb model of price theory corresponds to the fixed-point iteration of the composition of the supply function and the demand function.[7]
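
Below is a minimal sketch of the first item in the list above, Newton's method recast as the fixed-point iteration xn+1 = g(xn); the target function f(x) = x² − 2 (whose positive root is √2) and the starting guess are illustrative choices:

```python
def g(x):
    """Newton map g(x) = x - f(x)/f'(x) for f(x) = x**2 - 2."""
    return x - (x * x - 2.0) / (2.0 * x)

x = 1.0  # initial guess
for n in range(6):
    x = g(x)
    print(n, x)  # correct digits of sqrt(2) roughly double each step
```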

Convergence acceleration

The speed of convergence of the iteration sequence can be increased by using a convergence acceleration method such as Anderson acceleration and Aitken's delta-squared process. The application of Aitken's method to fixed-point iteration is known as Steffensen's method, and it can be shown that Steffensen's method yields a rate of convergence that is at least quadratic.
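
A sketch of Steffensen's method in this spirit, accelerating the cosine iteration from the examples above (the map, starting value, and step count are illustrative; the zero-denominator guard handles the case where the iterates have already converged to machine precision):

```python
import math

def steffensen_step(f, x):
    """One Steffensen step: two plain iterates, then Aitken extrapolation."""
    x1 = f(x)
    x2 = f(x1)
    denom = x2 - 2.0 * x1 + x
    if denom == 0.0:  # iterates already agree to machine precision
        return x2
    return x2 - (x2 - x1) ** 2 / denom  # Aitken's delta-squared formula

x = 1.0
for _ in range(5):
    x = steffensen_step(math.cos, x)
print(x)  # Dottie number, in far fewer evaluations than plain iteration
```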

Chaos game

Sierpinski triangle created using IFS, selecting all members at each iteration.

The term chaos game refers to a method of generating the fixed point of any iterated function system (IFS). Starting with any point x0, successive iterations are formed as xk+1 = fr(xk), where fr is a member of the given IFS randomly selected for each iteration. Hence the chaos game is a randomized fixed-point iteration. The chaos game allows plotting the general shape of a fractal such as the Sierpinski triangle by repeating the iterative process a large number of times. More mathematically, the iterations converge to the fixed point of the IFS. Whenever x0 belongs to the attractor of the IFS, all iterations xk stay inside the attractor and, with probability 1, form a dense set in the latter.
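
A sketch of the chaos game for the Sierpinski triangle IFS, whose three maps each halve the distance to one corner of a triangle (the corner coordinates and point count are illustrative):

```python
import random

CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]  # triangle vertices

def chaos_game(n):
    """Randomized fixed-point iteration: apply a randomly chosen IFS map."""
    x, y = 0.3, 0.3  # arbitrary starting point
    points = []
    for _ in range(n):
        cx, cy = random.choice(CORNERS)  # pick one of the three maps
        x, y = (x + cx) / 2.0, (y + cy) / 2.0  # halve distance to corner
        points.append((x, y))
    return points

pts = chaos_game(100_000)  # scatter-plotting pts reveals the triangle
```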

Notes

  a. One may also consider certain iterations A-stable if the iterates stay bounded for a long time, which is beyond the scope of this article.

References

  1. Rassias, Themistocles M.; Pardalos, Panos M. (17 September 2014). Mathematics Without Boundaries: Surveys in Pure Mathematics. Springer. ISBN 978-1-4939-1106-6.
  2. Weisstein, Eric W. "Dottie Number". Wolfram MathWorld. Wolfram Research, Inc. Retrieved 23 July 2016.
  3. Kumar, M. A. (2010). Solve Implicit Equations (Colebrook) Within Worksheet. Createspace. ISBN 1-4528-1619-0.
  4. Brkic, Dejan (2017). "Solution of the Implicit Colebrook Equation for Flow Friction Using Excel". Spreadsheets in Education (eJSiE): Vol. 10, Iss. 2, Article 2. Available at: https://sie.scholasticahq.com/article/4663-solution-of-the-implicit-colebrook-equation-for-flow-friction-using-excel
  5. Bellman, R. (1957). Dynamic Programming. Princeton University Press.
  6. Sniedovich, M. (2010). Dynamic Programming: Foundations and Principles. Taylor & Francis.
  7. Onozaki, Tamotsu (2018). "Chapter 2. One-Dimensional Nonlinear Cobweb Model". Nonlinearity, Bounded Rationality, and Heterogeneity: Some Aspects of Market Economies as Complex Systems. Springer. ISBN 978-4-431-54971-0.
