Fixed-point iteration

In numerical analysis, fixed-point iteration is a method of computing fixed points of a function.

More specifically, given a function $f$ defined on the real numbers with real values and given a point $x_0$ in the domain of $f$, the fixed-point iteration is

$$x_{n+1} = f(x_n), \qquad n = 0, 1, 2, \dots$$

which gives rise to the sequence $x_0, f(x_0), f(f(x_0)), \dots$ of iterated function applications, which is hoped to converge to a point $x_{\text{fix}}$. If $f$ is continuous, then one can prove that the obtained $x_{\text{fix}}$ is a fixed point of $f$, i.e.,

$$f(x_{\text{fix}}) = x_{\text{fix}}.$$

More generally, the function $f$ can be defined on any metric space with values in that same space.
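
The procedure translates directly into code. Below is a minimal Python sketch; the function name, tolerance, and iteration cap are illustrative choices, not part of any standard library:

    def fixed_point_iteration(f, x0, tol=1e-10, max_iter=1000):
        """Iterate x_{n+1} = f(x_n) until successive iterates agree to within tol."""
        x = x0
        for _ in range(max_iter):
            x_next = f(x)
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        raise RuntimeError("fixed-point iteration did not converge")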

Examples

Figure: The fixed-point iteration $x_{n+1} = \sin x_n$ with initial value $x_0 = 2$ converges to 0. This example does not satisfy the assumptions of the Banach fixed-point theorem and so its speed of convergence is very slow.
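
The slow convergence is easy to observe numerically; a small sketch:

    import math

    x = 2.0
    for _ in range(1000):
        x = math.sin(x)
    print(x)  # roughly 0.055: still far from 0 after 1000 iterations,
              # since the error decays only like sqrt(3/n)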

Attracting fixed points

Figure: The fixed-point iteration $x_{n+1} = \cos x_n$ with initial value $x_1 = -1$.

An attracting fixed point of a function $f$ is a fixed point $x_{\text{fix}}$ of $f$ such that for any value of $x$ in the domain that is close enough to $x_{\text{fix}}$, the fixed-point iteration sequence

$$x, f(x), f(f(x)), \dots$$

converges to $x_{\text{fix}}$.

The natural cosine function ("natural" means in radians, not degrees or other units) has exactly one fixed point, and that fixed point is attracting. In this case, "close enough" is not a stringent criterion at all: to demonstrate this, start with any real number and repeatedly press the cos key on a calculator (checking first that the calculator is in "radians" mode). It eventually converges to the Dottie number (about 0.739085133), which is a fixed point. That is where the graph of the cosine function intersects the line $y = x$.[1]
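
The calculator experiment translates to a few lines of Python:

    import math

    x = 123.456  # any real starting value will do
    for _ in range(100):
        x = math.cos(x)
    print(x)  # 0.7390851332151607, the Dottie number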

Not all fixed points are attracting. For example, 0 is a fixed point of the function $f(x) = 2x$, but iteration of this function for any value other than zero rapidly diverges. We say that the fixed point of $f(x) = 2x$ is repelling.

An attracting fixed point is said to be a stable fixed point if it is also Lyapunov stable.

A fixed point is said to be a neutrally stable fixed point if it is Lyapunov stable but not attracting. The center of a linear homogeneous differential equation of the second order is an example of a neutrally stable fixed point.

Multiple attracting points can be collected in an attracting fixed set.

Banach fixed-point theorem

The Banach fixed-point theorem gives a sufficient condition for the existence of attracting fixed points. A contraction mapping function $f$ defined on a complete metric space has precisely one fixed point, and the fixed-point iteration is attracted towards that fixed point for any initial guess $x_0$ in the domain of the function. Common special cases are that (1) $f$ is defined on the real line with real values and is Lipschitz continuous with Lipschitz constant $L < 1$, and (2) the function $f$ is continuously differentiable in an open neighbourhood of a fixed point $x_{\text{fix}}$, and $|f'(x_{\text{fix}})| < 1$.

Although there are other fixed-point theorems, this one in particular is very useful because not all fixed points are attracting. When constructing a fixed-point iteration, it is very important to make sure it converges to the fixed point. We can usually use the Banach fixed-point theorem to show that the fixed point is attracting.
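
For instance, special case (2) can be verified numerically for $f = \cos$ at its fixed point; a small sketch reusing the Dottie number quoted above:

    import math

    x_fix = 0.7390851332151607  # the Dottie number, fixed point of cos
    # f(x) = cos(x) has derivative f'(x) = -sin(x)
    print(abs(-math.sin(x_fix)))  # about 0.674 < 1, so the fixed point is attracting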

Attractors

Attracting fixed points are a special case of a wider mathematical concept of attractors. Fixed-point iterations are a discrete dynamical system on one variable. Bifurcation theory studies dynamical systems and classifies various behaviors such as attracting fixed points, periodic orbits, or strange attractors. An example system is the logistic map.

Iterative methods

In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the n-th approximation is derived from the previous ones. Convergent fixed-point iterations are mathematically rigorous formalizations of iterative methods.

Iterative method examples

  • Newton's method is a root-finding algorithm for finding roots of a given differentiable function $f$. The iteration is $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$. If we write $g(x) = x - \frac{f(x)}{f'(x)}$, we may rewrite the Newton iteration as the fixed-point iteration $x_{n+1} = g(x_n)$. If this iteration converges to a fixed point $x$ of $g$, then $x = x - f(x)/f'(x)$, so $f(x)/f'(x) = 0$ and therefore $f(x) = 0$, that is, $x$ is a root of $f$. Under the assumptions of the Banach fixed-point theorem, the Newton iteration, framed as a fixed-point method, demonstrates linear convergence. However, a more detailed analysis shows quadratic convergence, i.e., $|x_n - x_{\text{fix}}| = O(|x_{n-1} - x_{\text{fix}}|^2)$, under certain circumstances (a short code sketch follows this list).
  • Halley's method is similar to Newton's method when it works correctly, but its error is $|x_n - x_{\text{fix}}| = O(|x_{n-1} - x_{\text{fix}}|^3)$ (cubic convergence). In general, it is possible to design methods that converge with speed $O(|x_{n-1} - x_{\text{fix}}|^k)$ for any $k \in \mathbb{N}$. As a general rule, the higher the $k$, the less stable the method is, and the more computationally expensive it gets. For these reasons, higher-order methods are typically not used.
  • Runge–Kutta methods and numerical ordinary differential equation solvers in general can be viewed as fixed-point iterations. Indeed, the core idea when analyzing the A-stability of ODE solvers is to start with the special case $y' = ay$, where $a$ is a complex number, and to check whether the ODE solver converges to the fixed point $y = 0$ whenever the real part of $a$ is negative. [note 1]
  • The Picard–Lindelöf theorem, which shows that ordinary differential equations have solutions, is essentially an application of the Banach fixed-point theorem to a special sequence of functions which forms a fixed-point iteration, constructing the solution to the equation. Solving an ODE in this way is called Picard iteration, Picard's method, or the Picard iterative process.
  • The iteration capability in Excel can be used to find solutions to the Colebrook equation to an accuracy of 15 significant figures. [2] [3]
  • Some of the "successive approximation" schemes used in dynamic programming to solve Bellman's functional equation are based on fixed-point iterations in the space of the return function. [4] [5]
  • The cobweb model of price theory corresponds to the fixed-point iteration of the composition of the supply function and the demand function. [6]
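
As referenced in the Newton's method item above, here is a minimal sketch of the Newton iteration recast as a fixed-point iteration, for the illustrative choice $f(x) = x^2 - 2$ (whose positive root is $\sqrt{2}$):

    def g(x):
        # Fixed-point form of Newton's method for f(x) = x**2 - 2:
        # g(x) = x - f(x)/f'(x) = x - (x**2 - 2)/(2*x)
        return x - (x**2 - 2) / (2 * x)

    x = 1.0
    for _ in range(6):
        x = g(x)
    print(x)  # 1.4142135623730951; the number of correct digits roughly doubles each step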

Convergence acceleration

The speed of convergence of the iteration sequence can be increased by using a convergence acceleration method such as Anderson acceleration and Aitken's delta-squared process. The application of Aitken's method to fixed-point iteration is known as Steffensen's method, and it can be shown that Steffensen's method yields a rate of convergence that is at least quadratic.
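
A minimal sketch of Steffensen's method, i.e. Aitken's delta-squared formula applied to the slowly converging cosine iteration (the helper name is an illustrative choice):

    import math

    def steffensen_step(g, x):
        # Aitken's delta-squared formula applied to x, g(x), g(g(x)).
        gx = g(x)
        ggx = g(gx)
        denom = ggx - 2 * gx + x
        if denom == 0:  # already converged to machine precision
            return x
        return x - (gx - x) ** 2 / denom

    x = 1.0
    for _ in range(5):
        x = steffensen_step(math.cos, x)
    print(x)  # 0.7390851332151607 after 5 steps, versus roughly 90 plain cos iterations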

Chaos game

Figure: Sierpinski triangle created using IFS, selecting all members at each iteration.

The term chaos game refers to a method of generating the fixed point of any iterated function system (IFS). Starting with any point x0, successive iterations are formed as xk+1 = fr(xk), where fr is a member of the given IFS randomly selected for each iteration. Hence the chaos game is a randomized fixed-point iteration. The chaos game allows plotting the general shape of a fractal such as the Sierpinski triangle by repeating the iterative process a large number of times. More mathematically, the iterations converge to the fixed point of the IFS. Whenever x0 belongs to the attractor of the IFS, all iterations xk stay inside the attractor and, with probability 1, form a dense set in the latter.
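
A minimal sketch of the chaos game for the Sierpinski triangle (the vertex coordinates are an illustrative choice):

    import random

    # Sierpinski IFS: each of the three maps contracts the plane halfway
    # towards one vertex of a triangle.
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

    x, y = 0.3, 0.3  # arbitrary starting point
    points = []
    for _ in range(10_000):
        vx, vy = random.choice(vertices)   # randomly select a member f_r of the IFS
        x, y = (x + vx) / 2, (y + vy) / 2  # apply it
        points.append((x, y))
    # plotting `points` (e.g. with matplotlib) traces out the Sierpinski triangle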

Notes

  1. One may also consider certain iterations A-stable if the iterates stay bounded for a long time, which is beyond the scope of this article.

References

  1. Weisstein, Eric W. "Dottie Number". Wolfram MathWorld. Wolfram Research, Inc. Retrieved 23 July 2016.
  2. Kumar, M. A. (2010). Solve Implicit Equations (Colebrook) Within Worksheet. Createspace. ISBN 1-4528-1619-0.
  3. Brkic, Dejan (2017). "Solution of the Implicit Colebrook Equation for Flow Friction Using Excel". Spreadsheets in Education (eJSiE): Vol. 10, Iss. 2, Article 2. Available at: https://sie.scholasticahq.com/article/4663-solution-of-the-implicit-colebrook-equation-for-flow-friction-using-excel
  4. Bellman, R. (1957). Dynamic Programming. Princeton University Press.
  5. Sniedovich, M. (2010). Dynamic Programming: Foundations and Principles. Taylor & Francis.
  6. Onozaki, Tamotsu (2018). "Chapter 2. One-Dimensional Nonlinear Cobweb Model". Nonlinearity, Bounded Rationality, and Heterogeneity: Some Aspects of Market Economies as Complex Systems. Springer. ISBN 978-4-431-54971-0.
