Discrete Poisson equation

In mathematics, the discrete Poisson equation is the finite difference analog of the Poisson equation. In it, the discrete Laplace operator takes the place of the Laplace operator. The discrete Poisson equation is frequently used in numerical analysis as a stand-in for the continuous Poisson equation, although it is also studied in its own right as a topic in discrete mathematics.

On a two-dimensional rectangular grid

Using the finite difference numerical method to discretize the 2-dimensional Poisson equation (assuming a uniform spatial discretization, $\Delta x = \Delta y$) on an m × n grid gives the following formula: [1]

$$ (\nabla^2 u)_{ij} = \frac{1}{\Delta x^2} \left( u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4 u_{ij} \right) = g_{ij} $$

where $2 \le i \le m-1$ and $2 \le j \le n-1$. The preferred arrangement of the solution vector is to use natural ordering which, prior to removing boundary elements, would look like:

$$ \mathbf{u} = \begin{bmatrix} u_{11}, u_{21}, \ldots, u_{m1}, u_{12}, u_{22}, \ldots, u_{m2}, \ldots, u_{mn} \end{bmatrix}^\mathsf{T} $$

This will result in an mn × mn linear system:

$$ A \mathbf{u} = \mathbf{b} $$

where

$$ A = \begin{bmatrix} D & -I & & & \\ -I & D & -I & & \\ & \ddots & \ddots & \ddots & \\ & & -I & D & -I \\ & & & -I & D \end{bmatrix}, $$

$I$ is the m × m identity matrix, and $D$, also m × m, is given by: [2]

$$ D = \begin{bmatrix} 4 & -1 & & & \\ -1 & 4 & -1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 4 & -1 \\ & & & -1 & 4 \end{bmatrix}, $$

and $\mathbf{b}$ is defined by

$$ \mathbf{b} = -\Delta x^2 \begin{bmatrix} g_{11}, g_{21}, \ldots, g_{m1}, g_{12}, g_{22}, \ldots, g_{m2}, \ldots, g_{mn} \end{bmatrix}^\mathsf{T}. $$

For each $u_{ij}$ equation, the columns of $D$ correspond to a block of $m$ components in $\mathbf{u}$:

$$ \begin{bmatrix} u_{1j} & u_{2j} & \cdots & u_{(i-1)j} & u_{ij} & u_{(i+1)j} & \cdots & u_{mj} \end{bmatrix}^\mathsf{T}, $$

while the columns of $-I$ to the left and right of $D$ correspond to other blocks of components within $\mathbf{u}$:

$$ \begin{bmatrix} u_{1(j-1)} & u_{2(j-1)} & \cdots & u_{(i-1)(j-1)} & u_{i(j-1)} & u_{(i+1)(j-1)} & \cdots & u_{m(j-1)} \end{bmatrix}^\mathsf{T} $$

and

$$ \begin{bmatrix} u_{1(j+1)} & u_{2(j+1)} & \cdots & u_{(i-1)(j+1)} & u_{i(j+1)} & u_{(i+1)(j+1)} & \cdots & u_{m(j+1)} \end{bmatrix}^\mathsf{T}, $$

respectively.

From the above, it can be inferred that there are $n$ block columns of $m$ elements each in $A$. It is important to note that prescribed values of $u$ (usually lying on the boundary) have their corresponding elements removed from $\mathbf{u}$ and $\mathbf{b}$. For the common case that all the nodes on the boundary are set, we have $2 \le i \le m-1$ and $2 \le j \le n-1$, and the system has the dimensions (m − 2)(n − 2) × (m − 2)(n − 2), where $D$ and $I$ have dimensions (m − 2) × (m − 2). A sketch of the assembly follows.
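The block structure above can be assembled directly with Kronecker products. The following is a minimal NumPy/SciPy sketch for the common all-Dirichlet case, assuming zero boundary values; the helper name poisson_system is illustrative, not taken from the references.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

def poisson_system(m, n):
    """Assemble the (m-2)(n-2) x (m-2)(n-2) matrix A for an m x n grid
    with all boundary nodes prescribed (zero Dirichlet data assumed)."""
    p, q = m - 2, n - 2                                # interior nodes per direction
    D = diags([-1, 4, -1], [-1, 0, 1], shape=(p, p))   # tridiagonal diagonal block
    T = diags([-1, -1], [-1, 1], shape=(q, q))         # couples adjacent blocks
    # kron(I, D) places D on the block diagonal; kron(T, I) places -I on
    # the block off-diagonals, giving the block tridiagonal matrix A.
    return (kron(identity(q), D) + kron(T, identity(p))).tocsr()

# Example: m = n = 6, so dx = 1/5 on the unit square, with g = 1 everywhere.
m = n = 6
dx = 1.0 / (m - 1)
A = poisson_system(m, n)
g = np.ones((m - 2) * (n - 2))
u = spsolve(A, -dx**2 * g)        # b = -dx^2 g when the boundary values are zero
```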

Example

For a 3×3 ($m = 5$ and $n = 5$) grid with all the boundary nodes prescribed, the system would look like:

$$ \begin{bmatrix} U \end{bmatrix} = \begin{bmatrix} u_{22}, u_{32}, u_{42}, u_{23}, u_{33}, u_{43}, u_{24}, u_{34}, u_{44} \end{bmatrix}^\mathsf{T} $$

with

$$ A = \begin{bmatrix}
 4 & -1 &  0 & -1 &  0 &  0 &  0 &  0 &  0 \\
-1 &  4 & -1 &  0 & -1 &  0 &  0 &  0 &  0 \\
 0 & -1 &  4 &  0 &  0 & -1 &  0 &  0 &  0 \\
-1 &  0 &  0 &  4 & -1 &  0 & -1 &  0 &  0 \\
 0 & -1 &  0 & -1 &  4 & -1 &  0 & -1 &  0 \\
 0 &  0 & -1 &  0 & -1 &  4 &  0 &  0 & -1 \\
 0 &  0 &  0 & -1 &  0 &  0 &  4 & -1 &  0 \\
 0 &  0 &  0 &  0 & -1 &  0 & -1 &  4 & -1 \\
 0 &  0 &  0 &  0 &  0 & -1 &  0 & -1 &  4
\end{bmatrix} $$

and

$$ \mathbf{b} = \begin{bmatrix}
-\Delta x^2 g_{22} + u_{12} + u_{21} \\
-\Delta x^2 g_{32} + u_{31} \\
-\Delta x^2 g_{42} + u_{52} + u_{41} \\
-\Delta x^2 g_{23} + u_{13} \\
-\Delta x^2 g_{33} \\
-\Delta x^2 g_{43} + u_{53} \\
-\Delta x^2 g_{24} + u_{14} + u_{25} \\
-\Delta x^2 g_{34} + u_{35} \\
-\Delta x^2 g_{44} + u_{54} + u_{45}
\end{bmatrix} $$

As can be seen, the boundary $u$'s are brought to the right-hand side of the equation. [3] The entire system is 9 × 9 while $D$ and $-I$ are 3 × 3 and given by:

$$ D = \begin{bmatrix} 4 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 4 \end{bmatrix} $$

and

$$ -I = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix}. $$

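As a quick structural check, the 9 × 9 matrix of this example can be rebuilt from its 3 × 3 blocks; the snippet below is a standalone sketch using the same Kronecker-product construction as in the previous section.

```python
import numpy as np
from scipy.sparse import diags, identity, kron

# Rebuild the 9 x 9 example matrix from its 3 x 3 blocks D and -I.
D = diags([-1, 4, -1], [-1, 0, 1], shape=(3, 3))
T = diags([-1, -1], [-1, 1], shape=(3, 3))   # selects the neighboring blocks
A = (kron(identity(3), D) + kron(T, identity(3))).toarray()

print(A.astype(int))                         # matches the matrix written above
assert (A == A.T).all()                      # A is symmetric
```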
Methods of solution

Because $A$ is block tridiagonal and sparse, many methods of solution have been developed to optimally solve this linear system for $\mathbf{u}$. Among the methods are a generalized Thomas algorithm, with a resulting computational complexity of $O(n^{2.5})$; cyclic reduction; successive overrelaxation, with a complexity of $O(n^{1.5})$; and the fast Fourier transform, with $O(n \log n)$. An optimal $O(n)$ solution can also be computed using multigrid methods. [4]
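The FFT approach exploits the fact that the type-I discrete sine transform diagonalizes $A$ when all boundary nodes are prescribed. Below is a minimal sketch, assuming zero Dirichlet data; the function name fft_poisson is illustrative.

```python
import numpy as np
from scipy.fft import dstn, idstn

def fft_poisson(g, dx):
    """Solve A u = -dx^2 g on the p x q interior of a grid with zero
    boundary values, in O(N log N) time for N = p*q unknowns."""
    p, q = g.shape
    # Eigenvalues of A: 4 sin^2(j pi / (2(p+1))) + 4 sin^2(k pi / (2(q+1))).
    sj = 4 * np.sin(np.arange(1, p + 1) * np.pi / (2 * (p + 1))) ** 2
    sk = 4 * np.sin(np.arange(1, q + 1) * np.pi / (2 * (q + 1))) ** 2
    lam = sj[:, None] + sk[None, :]
    b_hat = dstn(-dx**2 * g, type=1)     # transform b into the eigenbasis
    return idstn(b_hat / lam, type=1)    # divide by eigenvalues, transform back
```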

Figure: Convergence of iterative numerical methods for a Poisson system with 16384 elements; infinity norms of residuals plotted against iteration count and computer time.

Applications

In computational fluid dynamics, for the solution of an incompressible flow problem, the incompressibility condition acts as a constraint for the pressure. No explicit form is available for the pressure in this case, due to the strong coupling of the velocity and pressure fields. Under this condition, taking the divergence of all terms in the momentum equation yields the pressure Poisson equation.

For an incompressible flow this constraint is given by:

$$ \frac{\partial v_x}{\partial x} + \frac{\partial v_y}{\partial y} + \frac{\partial v_z}{\partial z} = 0 $$

where $v_x$ is the velocity in the $x$ direction, $v_y$ is the velocity in the $y$ direction, and $v_z$ is the velocity in the $z$ direction. Taking the divergence of the momentum equation and using the incompressibility constraint, the pressure Poisson equation is formed, given by:

$$ \nabla^2 p = f(\nu, V) $$

where $\nu$ is the kinematic viscosity of the fluid and $V$ is the velocity vector. [5]
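In a fractional-step scheme, the right-hand side of the pressure Poisson equation is built from the divergence of a tentative velocity field, and the resulting discrete system can be solved with any of the methods above. The sketch below uses plain Jacobi sweeps and assumes zero pressure on the border for simplicity; it is an illustration, not the scheme of reference [5].

```python
import numpy as np

def solve_pressure(rhs, dx, iters=2000):
    """Jacobi iterations for the discrete pressure Poisson equation on a
    uniform grid, with zero (Dirichlet) pressure on the border."""
    p = np.zeros((rhs.shape[0] + 2, rhs.shape[1] + 2))
    for _ in range(iters):
        # Five-point stencil: p = (sum of neighbors - dx^2 * rhs) / 4.
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                p[1:-1, 2:] + p[1:-1, :-2] -
                                dx**2 * rhs)
    return p
```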

The discrete Poisson equation also arises in the theory of Markov chains. It appears as the relative value function for the dynamic programming equation in a Markov decision process, and as the control variate for variance reduction in simulation. [6] [7] [8]
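For a finite irreducible chain with transition matrix $P$ and cost function $f$, this Poisson equation reads $(I - P)h = f - \eta \mathbf{1}$, where $\eta = \pi f$ is the stationary average cost and $h$ is the relative value function, unique up to an additive constant. The following is a minimal sketch with made-up two-state data; appending the normalization $\pi h = 0$ is one common way to pin down that constant.

```python
import numpy as np

# Illustrative two-state chain; P and f are made-up example data.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
f = np.array([1.0, 3.0])

# Stationary distribution pi: left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
eta = pi @ f                                  # long-run average cost

# Poisson equation (I - P) h = f - eta, with pi @ h = 0 appended to
# remove the constant null space; solve in the least-squares sense.
A = np.vstack([np.eye(2) - P, pi])
b = np.append(f - eta, 0.0)
h, *_ = np.linalg.lstsq(A, b, rcond=None)     # relative value function
```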

Footnotes

  1. Hoffman, Joe (2001), "Chapter 9. Elliptic partial differential equations", Numerical Methods for Engineers and Scientists (2nd ed.), McGraw-Hill, ISBN 0-8247-0443-6.
  2. Golub, Gene H., and C. F. Van Loan, Matrix Computations, 3rd ed., The Johns Hopkins University Press, Baltimore, 1996, pp. 177–180.
  3. Cheney, Ward, and David Kincaid, Numerical Mathematics and Computing, 2nd ed., Brooks/Cole Publishing Company, Pacific Grove, 1985, pp. 443–448.
  4. CS267: Notes for Lectures 15 and 16, Mar 5 and 7, 1996, https://people.eecs.berkeley.edu/~demmel/cs267/lecture24/lecture24.html
  5. Fletcher, Clive A. J., Computational Techniques for Fluid Dynamics: Vol. I, 2nd ed., Springer-Verlag, Berlin, 1991, pp. 334–339.
  6. Meyn, S. P., and R. L. Tweedie, Markov Chains and Stochastic Stability, 2nd ed., Cambridge University Press, 2009.
  7. Meyn, S. P., Control Techniques for Complex Networks, Cambridge University Press, 2007.
  8. Asmussen, Søren, and Peter W. Glynn, Stochastic Simulation: Algorithms and Analysis, Stochastic Modelling and Applied Probability, Vol. 57, Springer, 2007.
