Compact stencil

A 2D compact stencil using all 8 adjacent nodes, plus the center node (in red).

In mathematics, especially in the area of numerical analysis devoted to numerical partial differential equations, a compact stencil is a stencil that uses only the center node and its immediately adjacent nodes in its discretization method. In two dimensions this means at most nine nodes; for a structured grid in 1, 2, or 3 dimensions, a compact stencil uses at most 3, 9, or 27 nodes, respectively. Compact stencils may be contrasted with non-compact stencils. Compact stencils are implemented in many partial differential equation solvers, including several in computational fluid dynamics (CFD), finite element analysis (FEA), and other solvers for PDEs. [1] [2]
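As an illustration (not part of the original article), the following Python sketch enumerates the node offsets of a compact stencil: each grid index varies over {-1, 0, +1} around the center, which reproduces the 3, 9, and 27 node counts quoted above for 1, 2, and 3 dimensions. The helper name compact_stencil_offsets is chosen here for illustration only.

    # Minimal sketch: enumerate the node offsets of a d-dimensional compact stencil.
    # Every index varies over {-1, 0, +1}, so the stencil has 3**d nodes.
    from itertools import product

    def compact_stencil_offsets(d):
        """Return all offsets (tuples of -1/0/+1) of a d-dimensional compact stencil."""
        return list(product((-1, 0, 1), repeat=d))

    for d in (1, 2, 3):
        print(d, len(compact_stencil_offsets(d)))  # prints 3, 9, 27 nodes respectively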


Two Point Stencil Example

The two point stencil for the first derivative of a function $f$ at the point $x_0$ is given by:

$$ f'(x_0) \approx \frac{f(x_0 + h) - f(x_0 - h)}{2h} . $$


This is obtained from the Taylor series expansion of the function about $x_0$:


$$ f(x_0 + h) = f(x_0) + h f'(x_0) + \frac{h^2}{2!} f''(x_0) + \frac{h^3}{3!} f'''(x_0) + \cdots . $$


Replacing $h$ with $-h$, we have:

$$ f(x_0 - h) = f(x_0) - h f'(x_0) + \frac{h^2}{2!} f''(x_0) - \frac{h^3}{3!} f'''(x_0) + \cdots . $$


Subtracting the second equation from the first cancels the terms in even powers of $h$:

$$ f(x_0 + h) - f(x_0 - h) = 2 h f'(x_0) + \frac{2 h^3}{3!} f'''(x_0) + \cdots $$

$$ f'(x_0) = \frac{f(x_0 + h) - f(x_0 - h)}{2h} - \frac{h^2}{3!} f'''(x_0) - \cdots $$

$$ f'(x_0) \approx \frac{f(x_0 + h) - f(x_0 - h)}{2h} + O(h^2) . $$
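As a quick numerical check (an illustrative sketch, not part of the original article), the following Python snippet applies this two point stencil to f(x) = sin(x), whose exact derivative is cos(x). Halving h reduces the error by roughly a factor of four, consistent with the O(h^2) truncation term derived above.

    # Two point (central difference) first derivative, checked on f(x) = sin(x).
    import math

    def central_first_derivative(f, x0, h):
        return (f(x0 + h) - f(x0 - h)) / (2.0 * h)

    x0 = 1.0
    for h in (0.1, 0.05, 0.025):
        approx = central_first_derivative(math.sin, x0, h)
        print(h, abs(approx - math.cos(x0)))  # error shrinks roughly as h**2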


Three Point Stencil Example

The three point stencil for the second derivative of a function $f$ at the point $x_0$ is given by:

$$ f''(x_0) \approx \frac{f(x_0 + h) - 2 f(x_0) + f(x_0 - h)}{h^2} . $$


This is obtained from the same Taylor series expansion of the function about $x_0$:

$$ f(x_0 + h) = f(x_0) + h f'(x_0) + \frac{h^2}{2!} f''(x_0) + \frac{h^3}{3!} f'''(x_0) + \cdots . $$


Replacing $h$ with $-h$, we have:

$$ f(x_0 - h) = f(x_0) - h f'(x_0) + \frac{h^2}{2!} f''(x_0) - \frac{h^3}{3!} f'''(x_0) + \cdots . $$


Adding the two equations cancels the terms in odd powers of $h$:

$$ f(x_0 + h) + f(x_0 - h) = 2 f(x_0) + h^2 f''(x_0) + \frac{2 h^4}{4!} f''''(x_0) + \cdots $$

$$ f''(x_0) \approx \frac{f(x_0 + h) - 2 f(x_0) + f(x_0 - h)}{h^2} + O(h^2) . $$
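Analogously (again an illustrative sketch, not from the original article), the three point stencil can be checked on f(x) = sin(x), whose exact second derivative is -sin(x); the error again shrinks roughly as h^2.

    # Three point second derivative stencil, checked on f(x) = sin(x).
    import math

    def central_second_derivative(f, x0, h):
        return (f(x0 + h) - 2.0 * f(x0) + f(x0 - h)) / (h * h)

    x0 = 1.0
    for h in (0.1, 0.05, 0.025):
        approx = central_second_derivative(math.sin, x0, h)
        print(h, abs(approx + math.sin(x0)))  # exact value is -sin(x0)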


See also

Stencil (numerical analysis)

In mathematics, especially the areas of numerical analysis concentrating on the numerical solution of partial differential equations, a stencil is a geometric arrangement of a nodal group that relates to the point of interest by using a numerical approximation routine. Stencils are the basis for many algorithms that numerically solve partial differential equations (PDEs). Two examples of stencils are the five-point stencil and the Crank–Nicolson method stencil.

Non-compact stencil

In numerical mathematics, a non-compact stencil is a type of discretization method in which any node surrounding the node of interest may be used in the calculation. The computational cost of a non-compact stencil grows as more layers of nodes are included. Non-compact stencils may be contrasted with compact stencils.

Five-point stencil

In numerical analysis, given a square grid in one or two dimensions, the five-point stencil of a point in the grid is a stencil made up of the point itself together with its four "neighbors". It is used to write finite difference approximations to derivatives at grid points, and it is an example of numerical differentiation.


References

  1. W. F. Spotz. High-Order Compact Finite Difference Schemes for Computational Mechanics. PhD thesis, University of Texas at Austin, Austin, TX, 1995.
  2. Communications in Numerical Methods in Engineering. John Wiley & Sons, 2008.