Von Neumann stability analysis


In numerical analysis, von Neumann stability analysis (also known as Fourier stability analysis) is a procedure used to check the stability of finite difference schemes as applied to linear partial differential equations. [1] The analysis is based on the Fourier decomposition of numerical error and was developed at Los Alamos National Laboratory after having been briefly described in a 1947 article by the British researchers Crank and Nicolson. [2] The analysis is commonly applied to explicit time integration schemes, in which the function that defines the governing equation is evaluated at the current time, although it applies equally to implicit schemes. Later, the method was given a more rigorous treatment in an article [3] co-authored by John von Neumann.


Numerical stability

The stability of numerical schemes is closely associated with numerical error. A finite difference scheme is stable if the errors made at one time step of the calculation are not magnified as the computations are continued. A neutrally stable scheme is one in which errors remain constant as the computations are carried forward. If the errors decay and eventually damp out, the numerical scheme is said to be stable; if, on the contrary, the errors grow with time, the numerical scheme is said to be unstable. The stability of numerical schemes can be investigated by performing von Neumann stability analysis. For time-dependent problems, stability guarantees that the numerical method produces a bounded solution whenever the solution of the exact differential equation is bounded. Stability, in general, can be difficult to investigate, especially when the equation under consideration is nonlinear.
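As a minimal sketch of these three regimes (in Python, with a scalar amplification factor `G` standing in for a full difference scheme; the values are illustrative, not from the source), an error amplitude iterated step by step decays, stays constant, or grows depending on whether |G| is below, equal to, or above one:

```python
# Minimal sketch: an error amplitude evolving as e_{n+1} = G * e_n for a
# scalar amplification factor G (a stand-in for a full difference scheme).
for G, label in [(0.9, "stable"), (1.0, "neutrally stable"), (1.1, "unstable")]:
    e = 1.0
    for _ in range(50):
        e *= G
    print(f"|G| = {G}: error after 50 steps = {e:.3e} ({label})")
```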

In certain cases, von Neumann stability is necessary and sufficient for stability in the sense of Lax–Richtmyer (as used in the Lax equivalence theorem): the PDE and the finite difference scheme are linear; the PDE is constant-coefficient with periodic boundary conditions and has only two independent variables; and the scheme uses no more than two time levels. [4] Von Neumann stability is necessary in a much wider variety of cases. Because of its relative simplicity, it is often used in place of a more detailed stability analysis to provide a good guess at the restrictions (if any) on the step sizes used in the scheme.

Illustration of the method

The von Neumann method is based on the decomposition of the errors into Fourier series. To illustrate the procedure, consider the one-dimensional heat equation

$$\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2},$$

defined on the spatial interval $L$, with the notation

$$u_j^n = u(j\,\Delta x,\, n\,\Delta t),$$

where $j$ indexes the spatial grid points and $n$ indexes the time steps.

We can discretize the heat equation [5] as

$$u_j^{n+1} = u_j^n + r\left(u_{j+1}^n - 2u_j^n + u_{j-1}^n\right) \qquad (1)$$

where

$$r = \frac{\alpha\,\Delta t}{\Delta x^2}.$$

Then the solution of the discrete equation approximates the analytical solution of the PDE on the grid.
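As a concrete reference, here is a short Python sketch of the FTCS update (1); the diffusivity, grid size, initial condition, and step count are illustrative choices of this sketch, not part of the original derivation:

```python
import numpy as np

# FTCS update for the 1-D heat equation u_t = alpha * u_xx, equation (1):
# u[j, n+1] = u[j, n] + r * (u[j+1, n] - 2*u[j, n] + u[j-1, n]).
alpha, L, J = 1.0, 1.0, 50             # diffusivity, domain length, interior points (illustrative)
dx = L / (J + 1)
r = 0.4                                # r = alpha*dt/dx**2; must satisfy the bound derived below
dt = r * dx**2 / alpha

x = np.linspace(0.0, L, J + 2)
u = np.sin(np.pi * x / L)              # illustrative initial condition, u = 0 at both ends

for n in range(200):
    u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(f"r = {r:.2f}, max|u| after 200 steps = {np.abs(u).max():.4f}")
```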

Define the round-off error $\epsilon_j^n$ as

$$\epsilon_j^n = N_j^n - u_j^n,$$

where $u_j^n$ is the solution of the discretized equation ( 1 ) that would be computed in the absence of round-off error, and $N_j^n$ is the numerical solution obtained in finite precision arithmetic. Since the exact solution $u_j^n$ must satisfy the discretized equation exactly, the error $\epsilon_j^n$ must also satisfy the discretized equation. [6] Here we assumed that $N_j^n$ satisfies the equation, too (this is only true to machine precision). Thus

$$\epsilon_j^{n+1} = \epsilon_j^n + r\left(\epsilon_{j+1}^n - 2\epsilon_j^n + \epsilon_{j-1}^n\right) \qquad (2)$$

is a recurrence relation for the error. Equations ( 1 ) and ( 2 ) show that both the error and the numerical solution have the same growth or decay behavior with respect to time. For linear differential equations with periodic boundary conditions, the spatial variation of the error may be expanded in a finite Fourier series with respect to $x$, in the interval $L$, as

$$\epsilon(x,t) = \sum_{m=1}^{M} E_m(t)\, e^{i k_m x} \qquad (3)$$

where the wavenumber $k_m = \frac{\pi m}{L}$ with $m = 1, 2, \ldots, M$ and $M = L/\Delta x$. The time dependence of the error is included by assuming that the amplitude of the error, $E_m$, is a function of time. Often the assumption is made that the error grows or decays exponentially with time, but this is not necessary for the stability analysis.
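The expansion (3) can be checked numerically: for a periodic grid function, a discrete Fourier transform recovers the mode amplitudes $E_m$. The sketch below uses NumPy's FFT as a stand-in for the finite Fourier series (the grid size and sample error field are this illustration's assumptions):

```python
import numpy as np

# Decompose a periodic "error" field into discrete Fourier modes and
# reconstruct it, illustrating the expansion in equation (3).
M = 64
eps = np.random.default_rng(0).standard_normal(M)    # sample error field on a periodic grid

E = np.fft.fft(eps) / M                              # mode amplitudes E_m
modes = np.exp(2j * np.pi * np.outer(np.arange(M), np.arange(M)) / M)
reconstructed = (modes @ E).real                     # sum of E_m * exp(i k_m x)

print("max reconstruction error:", np.abs(reconstructed - eps).max())
```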

If the boundary condition is not periodic, then we may use the finite Fourier integral with respect to $x$:

$$\epsilon(x,t) = \int_{0}^{\pi/\Delta x} E(t,k)\, e^{i k x}\, dk. \qquad (4)$$

Since the difference equation for the error is linear (the behavior of each term of the series is the same as that of the series itself), it is enough to consider the growth of the error of a typical term:

$$\epsilon_m(x,t) = E_m(t)\, e^{i k_m x} \qquad (5a)$$

if a Fourier series is used, or

$$\epsilon(x,t) = E(t)\, e^{i k x} \qquad (5b)$$

if a Fourier integral is used.

As the Fourier series can be considered to be a special case of the Fourier integral, we will continue the development using the expressions for the Fourier integral.
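Before proceeding, the claim that a single term suffices can be verified directly: substituting a single mode into the recurrence ( 2 ) multiplies every grid value by the same constant, so the mode shape is preserved and only its amplitude changes. A short numerical check in Python (the values of $r$ and the mode number are illustrative, and $\theta = k\,\Delta x$ is chosen so the mode is periodic on the grid):

```python
import numpy as np

# Substituting a single Fourier mode eps_j = exp(i*theta*j), theta = k*dx,
# into the error recurrence (2) scales every grid value by the same constant.
r, J, m = 0.4, 32, 5                 # illustrative values
theta = 2.0 * np.pi * m / J          # mode periodic on the J-point grid
j = np.arange(J)
eps = np.exp(1j * theta * j)         # single mode on a periodic grid

eps_next = eps + r * (np.roll(eps, -1) - 2.0 * eps + np.roll(eps, 1))
ratio = eps_next / eps               # the per-step amplification of this mode

print("constant across the grid:", np.allclose(ratio, ratio[0]))
print("amplification factor:", ratio[0])
print("equals 1 - 4r sin^2(theta/2):",
      np.isclose(ratio[0], 1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2))
```

The last line anticipates the closed form derived in equation ( 7 ) below.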

The stability characteristics can be studied using just this form for the error, with no loss of generality. To find out how the error varies in steps of time, substitute equation ( 5b ) into equation ( 2 ), after noting that

$$\epsilon_j^n = E(t)\, e^{i k x}, \qquad
\epsilon_j^{n+1} = E(t+\Delta t)\, e^{i k x}, \qquad
\epsilon_{j\pm 1}^n = E(t)\, e^{i k (x \pm \Delta x)},$$

to yield (after simplification)

$$\frac{E(t+\Delta t)}{E(t)} = 1 + \frac{\alpha\,\Delta t}{\Delta x^2}\left(e^{i k \Delta x} + e^{-i k \Delta x} - 2\right). \qquad (6)$$

Introducing $\theta = k\,\Delta x$ and using the identities

$$\cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2}, \qquad \sin^2\frac{\theta}{2} = \frac{1-\cos\theta}{2},$$

equation ( 6 ) may be written as

$$\frac{E(t+\Delta t)}{E(t)} = 1 - 4 r \sin^2(\theta/2). \qquad (7)$$

Define the amplification factor

$$G \equiv \frac{E(t+\Delta t)}{E(t)}. \qquad (8)$$

The necessary and sufficient condition for the error to remain bounded is that $|G| \le 1$. Thus, from equations ( 7 ) and ( 8 ), the condition for stability is given by

$$\left|1 - 4 r \sin^2(\theta/2)\right| \le 1. \qquad (9)$$

Note that the term $4 r \sin^2(\theta/2)$ is always positive. Thus, to satisfy equation ( 9 ):

$$4 r \sin^2(\theta/2) \le 2. \qquad (10)$$

For the above condition to hold for all $m$ (and therefore all $\sin^2(\theta/2)$), it must hold at the maximum of the sinusoidal term. The highest value $\sin^2(\theta/2)$ can take is 1, and if condition ( 10 ) is satisfied for that particular choice, it is satisfied for all modes. Thus we have

$$r = \frac{\alpha\,\Delta t}{\Delta x^2} \le \frac{1}{2}. \qquad (11)$$

Equation ( 11 ) gives the stability requirement for the FTCS scheme as applied to the one-dimensional heat equation. It says that for a given $\Delta x$, the allowed value of $\Delta t$ must be small enough to satisfy equation ( 11 ).
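The bound can be checked empirically. The sketch below (with illustrative grid parameters of its own) evolves random initial noise under the FTCS scheme for $r$ just below and just above $1/2$, showing bounded versus explosive growth:

```python
import numpy as np

# Empirical check of the FTCS stability bound r <= 1/2, equation (11):
# evolve random initial noise and compare the amplitude growth for r on
# either side of the threshold (illustrative grid parameters).
J, steps = 100, 2000
rng = np.random.default_rng(1)

for r in (0.49, 0.51):
    u = rng.standard_normal(J + 2)
    u[0] = u[-1] = 0.0                # fixed boundary values
    for _ in range(steps):
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    print(f"r = {r}: max|u| after {steps} steps = {np.abs(u).max():.3e}")
```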

A similar analysis shows that an FTCS scheme for linear advection is unconditionally unstable.
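This can be seen by applying the same analysis to FTCS with a centered space difference for $u_t + c\,u_x = 0$, which gives the amplification factor $G = 1 - i\,C\sin\theta$ with Courant number $C = c\,\Delta t/\Delta x$, so that $|G|^2 = 1 + C^2\sin^2\theta \ge 1$, with strict inequality for almost all modes whenever $C \ne 0$. A short numerical check (the scheme and parameter values are this illustration's, not the source's):

```python
import numpy as np

# FTCS for linear advection u_t + c u_x = 0 with a centered space difference:
# u[j, n+1] = u[j, n] - (C/2) * (u[j+1, n] - u[j-1, n]),  C = c*dt/dx.
# Von Neumann analysis gives G = 1 - 1j*C*sin(theta), so |G| >= 1.
theta = np.linspace(0.0, np.pi, 1001)
for C in (0.1, 0.5, 1.0):
    G = 1.0 - 1j * C * np.sin(theta)
    print(f"C = {C}: max|G| = {np.abs(G).max():.4f}  (> 1 for every C != 0)")
```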


References

  1. Isaacson, E.; Keller, H. B., Analysis of Numerical Methods
  2. Crank, J.; Nicolson, P. (1947), "A Practical Method for Numerical Evaluation of Solutions of Partial Differential Equations of Heat Conduction Type", Proc. Camb. Phil. Soc., 43: 50–67, doi:10.1007/BF02127704
  3. Charney, J. G.; Fjørtoft, R.; von Neumann, J. (1950), "Numerical Integration of the Barotropic Vorticity Equation", Tellus, 2: 237–254, doi:10.3402/tellusa.v2i4.8607
  4. Smith, G. D. (1985), Numerical Solution of Partial Differential Equations: Finite Difference Methods, 3rd ed., pp. 67–68
  5. In this case, using the FTCS discretization scheme.
  6. Anderson, J. D. Jr. (1994), Computational Fluid Dynamics: The Basics with Applications, McGraw-Hill