In numerical analysis, von Neumann stability analysis (also known as Fourier stability analysis) is a procedure used to check the stability of finite difference schemes as applied to linear partial differential equations. [1] The analysis is based on the Fourier decomposition of numerical error and was developed at Los Alamos National Laboratory after having been briefly described in a 1947 article by British researchers John Crank and Phyllis Nicolson. [2] Later, the method was given a more rigorous treatment in an article [3] co-authored by John von Neumann. The worked example below applies the analysis to the FTCS (forward-time centred-space) scheme, an example of explicit time integration in which the function that defines the governing equation is evaluated at the current time.
The stability of numerical schemes is closely associated with numerical error. A finite difference scheme is stable if the errors made at one time step of the calculation are not magnified as the computations are continued. A neutrally stable scheme is one in which errors remain constant as the computations are carried forward. If the errors decay and eventually damp out, the numerical scheme is said to be stable; if, on the contrary, the errors grow with time, the numerical scheme is said to be unstable. The stability of numerical schemes can be investigated by performing von Neumann stability analysis. For time-dependent problems, stability guarantees that the numerical method produces a bounded solution whenever the solution of the exact differential equation is bounded. Stability, in general, can be difficult to investigate, especially when the equation under consideration is nonlinear.
In certain cases, von Neumann stability is necessary and sufficient for stability in the sense of Lax–Richtmyer (as used in the Lax equivalence theorem): the PDE and the finite difference scheme are linear; the PDE has constant coefficients, periodic boundary conditions, and only two independent variables; and the scheme uses no more than two time levels. [4] Von Neumann stability is necessary in a much wider variety of cases. Because of its relative simplicity, it is often used in place of a more detailed stability analysis to provide a good guess at the restrictions (if any) on the step sizes used in the scheme.
The von Neumann method is based on the decomposition of the errors into Fourier series. To illustrate the procedure, consider the one-dimensional heat equation

$$\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}$$

defined on the spatial interval $L$, with the notation $u_j^n = u(x_j, t_n)$, where $x_j = j\,\Delta x$ are the specific $x$ values and $t_n = n\,\Delta t$ are the sequence of $t$ values.
We can discretize the heat equation [5] as

$$u_j^{n+1} = u_j^n + r \left( u_{j+1}^n - 2 u_j^n + u_{j-1}^n \right) \qquad (1)$$

where

$$r = \frac{\alpha\,\Delta t}{\Delta x^2}.$$
Then the solution of the discrete equation approximates the analytical solution of the PDE on the grid.
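As a concrete illustration, here is a minimal Python (NumPy) sketch of the update (1). The grid parameters, the initial condition, and the fixed (Dirichlet) end points are arbitrary choices made for this example, not part of the scheme itself.

```python
import numpy as np

def ftcs_step(u, r):
    """Apply one FTCS update, equation (1):
    u_j^{n+1} = u_j^n + r*(u_{j+1}^n - 2*u_j^n + u_{j-1}^n)."""
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u_new  # end points held fixed: Dirichlet boundaries (an assumption)

# Illustrative parameters (not prescribed by the scheme).
alpha, L = 1.0, 1.0
dx, dt = 0.01, 4e-5                 # gives r = alpha*dt/dx**2 = 0.4
r = alpha * dt / dx**2
x = np.arange(0.0, L + dx, dx)
u = np.sin(np.pi * x)               # initial condition u(x, 0)

for _ in range(1000):
    u = ftcs_step(u, r)
print(f"r = {r:.2f}, max |u| after 1000 steps = {np.abs(u).max():.4f}")
```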
Define the round-off error $\epsilon_j^n$ as

$$\epsilon_j^n = N_j^n - u_j^n,$$

where $u_j^n$ is the solution of the discretized equation (1) that would be computed in the absence of round-off error, and $N_j^n$ is the numerical solution obtained in finite precision arithmetic. Since the exact solution $u_j^n$ must satisfy the discretized equation exactly, the error $\epsilon_j^n$ must also satisfy the discretized equation. [6] Here we assumed that $N_j^n$ satisfies the equation, too (this is only true in machine precision). Thus
$$\epsilon_j^{n+1} = \epsilon_j^n + r \left( \epsilon_{j+1}^n - 2 \epsilon_j^n + \epsilon_{j-1}^n \right) \qquad (2)$$
is a recurrence relation for the error. Equations (1) and (2) show that both the error and the numerical solution have the same growth or decay behavior with respect to time. For linear differential equations with periodic boundary conditions, the spatial variation of the error may be expanded in a finite Fourier series with respect to $x$, in the interval $L$, as
$$\epsilon(x,t) = \sum_{m=1}^{M} E_m(t)\, e^{i k_m x} \qquad (3)$$

where the wavenumber $k_m = \frac{\pi m}{L}$ with $m = 1, 2, \dots, M$ and $M = L/\Delta x$. The time dependence of the error is included by assuming that the amplitude of the error $E_m$ is a function of time. Often the assumption is made that the error grows or decays exponentially with time, but this is not necessary for the stability analysis.
If the boundary condition is not periodic, then we may use the finite Fourier integral with respect to $x$:

$$\epsilon(x,t) = \int_{-\pi/\Delta x}^{\pi/\Delta x} E(t)\, e^{i k x}\, dk \qquad (4)$$
Since the difference equation for the error is linear (the behavior of each term of the series is the same as that of the series itself), it is enough to consider the growth of the error of a typical term:
$$\epsilon_m(x,t) = E_m(t)\, e^{i k_m x} \qquad (5a)$$
if a Fourier series is used or
$$\epsilon(x,t) = E(t)\, e^{i k x} \qquad (5b)$$
if a Fourier integral is used.
As the Fourier series can be considered to be a special case of the Fourier integral, we will continue the development using the expressions for the Fourier integral.
The stability characteristics can be studied using just this form for the error with no loss in generality. To find out how the error varies in steps of time, substitute equation (5b) into equation (2), after noting that

$$\begin{aligned}
\epsilon_j^n &= E(t)\, e^{i k x}, \\
\epsilon_j^{n+1} &= E(t + \Delta t)\, e^{i k x}, \\
\epsilon_{j+1}^n &= E(t)\, e^{i k (x + \Delta x)}, \\
\epsilon_{j-1}^n &= E(t)\, e^{i k (x - \Delta x)},
\end{aligned}$$

to yield (after simplification)

$$\frac{E(t + \Delta t)}{E(t)} = 1 + r \left( e^{i k \Delta x} + e^{-i k \Delta x} - 2 \right). \qquad (6)$$
Introducing $\theta = k\,\Delta x$ and using the identities

$$\cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2}, \qquad \sin^2\frac{\theta}{2} = \frac{1 - \cos\theta}{2},$$

equation (6) may be written as

$$\frac{E(t + \Delta t)}{E(t)} = 1 - 4 r \sin^2\frac{\theta}{2}. \qquad (7)$$
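The algebra leading from (6) to (7) can also be checked symbolically; the following is a small SymPy sketch used purely as a verification of the identity, not as part of the analysis:

```python
import sympy as sp

r, theta = sp.symbols('r theta', real=True)
G6 = 1 + r * (sp.exp(sp.I * theta) + sp.exp(-sp.I * theta) - 2)  # equation (6)
G7 = 1 - 4 * r * sp.sin(theta / 2) ** 2                          # equation (7)
print(sp.simplify(G6 - G7))  # prints 0, confirming the two forms agree
```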
Define the amplification factor

$$G \equiv \frac{E(t + \Delta t)}{E(t)}. \qquad (8)$$
The necessary and sufficient condition for the error to remain bounded is that $|G| \le 1$. Thus, from equations (7) and (8), the condition for stability is given by

$$\left| 1 - 4 r \sin^2\frac{\theta}{2} \right| \le 1. \qquad (9)$$
Note that the term $4 r \sin^2(\theta/2)$ is always positive. Thus, to satisfy equation (9):

$$4 r \sin^2\frac{\theta}{2} \le 2. \qquad (10)$$
For the above condition to hold for all $m$ (and therefore for all $\theta$), it suffices to consider the highest value the sinusoidal term can take, which is 1; if equation (10) is satisfied for that particular choice, then it is satisfied at all grid points, and thus we have

$$r = \frac{\alpha\,\Delta t}{\Delta x^2} \le \frac{1}{2}. \qquad (11)$$
Equation (11) gives the stability requirement for the FTCS scheme as applied to the one-dimensional heat equation. It says that for a given $\Delta x$, the allowed value of $\Delta t$ must be small enough to satisfy equation (10).
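As a numerical check, one can sweep the amplification factor $G(\theta) = 1 - 4r\sin^2(\theta/2)$ of equations (7)–(8) over $\theta = k\,\Delta x \in [0, \pi]$; this is a sketch, and the two values of $r$ are arbitrary examples chosen on either side of the threshold (11):

```python
import numpy as np

def amplification_factor(r, theta):
    """G = E(t + dt)/E(t) for the FTCS heat scheme, equation (7)."""
    return 1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2

theta = np.linspace(0.0, np.pi, 1001)   # theta = k*dx over the resolvable modes
for r in (0.4, 0.6):                    # illustrative values of alpha*dt/dx**2
    max_G = np.abs(amplification_factor(r, theta)).max()
    print(f"r = {r}: max |G| = {max_G:.2f} -> "
          f"{'stable' if max_G <= 1.0 else 'unstable'}")
```

For $r = 0.4$ the factor stays within $[-1, 1]$ for every mode, while for $r = 0.6$ the highest-frequency mode ($\theta$ near $\pi$) is amplified by a factor of 1.4 per step, which is exactly the blow-up the analysis predicts.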
Similar analysis shows that an FTCS scheme for the linear advection equation is unconditionally unstable.
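For FTCS applied to the linear advection equation $u_t + a\,u_x = 0$, the same substitution gives the complex amplification factor $G = 1 - iC\sin\theta$ with Courant number $C = a\,\Delta t/\Delta x$, so $|G|^2 = 1 + C^2\sin^2\theta \ge 1$ for every mode, whatever the step sizes. A short sketch confirming this numerically (the value of $C$ is an arbitrary example):

```python
import numpy as np

C = 0.5                                    # illustrative Courant number a*dt/dx
theta = np.linspace(0.0, np.pi, 1001)      # theta = k*dx
G = 1.0 - 1j * C * np.sin(theta)           # FTCS advection amplification factor
print(f"max |G| = {np.abs(G).max():.3f}")  # > 1: unconditionally unstable
```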