Finite difference coefficient

In mathematics, finite differences can be used to approximate a derivative to an arbitrary order of accuracy. A finite difference can be central, forward, or backward.

Central finite difference

This table contains the coefficients of the central differences, for several orders of accuracy and with uniform grid spacing: [1]

Derivative 1
  Accuracy 2 (offsets −1 … 1): −1/2, 0, 1/2
  Accuracy 4 (offsets −2 … 2): 1/12, −2/3, 0, 2/3, −1/12
  Accuracy 6 (offsets −3 … 3): −1/60, 3/20, −3/4, 0, 3/4, −3/20, 1/60
  Accuracy 8 (offsets −4 … 4): 1/280, −4/105, 1/5, −4/5, 0, 4/5, −1/5, 4/105, −1/280

Derivative 2
  Accuracy 2 (offsets −1 … 1): 1, −2, 1
  Accuracy 4 (offsets −2 … 2): −1/12, 4/3, −5/2, 4/3, −1/12
  Accuracy 6 (offsets −3 … 3): 1/90, −3/20, 3/2, −49/18, 3/2, −3/20, 1/90
  Accuracy 8 (offsets −4 … 4): −1/560, 8/315, −1/5, 8/5, −205/72, 8/5, −1/5, 8/315, −1/560

Derivative 3
  Accuracy 2 (offsets −2 … 2): −1/2, 1, 0, −1, 1/2
  Accuracy 4 (offsets −3 … 3): 1/8, −1, 13/8, 0, −13/8, 1, −1/8
  Accuracy 6 (offsets −4 … 4): −7/240, 3/10, −169/120, 61/30, 0, −61/30, 169/120, −3/10, 7/240

Derivative 4
  Accuracy 2 (offsets −2 … 2): 1, −4, 6, −4, 1
  Accuracy 4 (offsets −3 … 3): −1/6, 2, −13/2, 28/3, −13/2, 2, −1/6
  Accuracy 6 (offsets −4 … 4): 7/240, −2/5, 169/60, −122/15, 91/8, −122/15, 169/60, −2/5, 7/240

Derivative 5
  Accuracy 2 (offsets −3 … 3): −1/2, 2, −5/2, 0, 5/2, −2, 1/2
  Accuracy 4 (offsets −4 … 4): 1/6, −3/2, 13/3, −29/6, 0, 29/6, −13/3, 3/2, −1/6
  Accuracy 6 (offsets −5 … 5): −13/288, 19/36, −87/32, 13/2, −323/48, 0, 323/48, −13/2, 87/32, −19/36, 13/288

Derivative 6
  Accuracy 2 (offsets −3 … 3): 1, −6, 15, −20, 15, −6, 1
  Accuracy 4 (offsets −4 … 4): −1/4, 3, −13, 29, −75/2, 29, −13, 3, −1/4
  Accuracy 6 (offsets −5 … 5): 13/240, −19/24, 87/16, −39/2, 323/8, −1023/20, 323/8, −39/2, 87/16, −19/24, 13/240

For example, the third derivative with a second-order accuracy is

$$f'''(x_0) \approx \frac{-\tfrac{1}{2}\,f_{-2} + f_{-1} - f_{1} + \tfrac{1}{2}\,f_{2}}{h^3} + O\!\left(h^2\right),$$

where $h$ represents a uniform grid spacing between each finite difference interval, and $f_n = f(x_0 + n h)$.
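As a numerical illustration, the following minimal sketch (plain NumPy; the helper name is ours, not from the cited sources) applies this stencil to $f(x) = \sin x$, whose third derivative is $-\cos x$, and shows the error shrinking by roughly a factor of four each time $h$ is halved:

```python
import numpy as np

def third_derivative_central(f, x0, h):
    # Stencil (-1/2, 1, 0, -1, 1/2) on offsets -2..2; error is O(h^2).
    return (-0.5 * f(x0 - 2 * h) + f(x0 - h)
            - f(x0 + h) + 0.5 * f(x0 + 2 * h)) / h**3

x0 = 1.0
exact = -np.cos(x0)  # (sin x)''' = -cos x
for h in (0.1, 0.05, 0.025):
    err = abs(third_derivative_central(np.sin, x0, h) - exact)
    print(f"h = {h:<6} error = {err:.3e}")
```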

For the $m$-th derivative with accuracy $n$, there are $2p + 1 = 2\left\lfloor \tfrac{m+1}{2} \right\rfloor - 1 + n$ central coefficients $a_{-p}, a_{-p+1}, \dots, a_{p-1}, a_{p}$. These are given by the solution of the linear equation system

$$\begin{pmatrix}
(-p)^{0} & \cdots & (p-1)^{0} & p^{0} \\
(-p)^{1} & \cdots & (p-1)^{1} & p^{1} \\
\vdots & & \vdots & \vdots \\
(-p)^{2p} & \cdots & (p-1)^{2p} & p^{2p}
\end{pmatrix}
\begin{pmatrix} a_{-p} \\ a_{-p+1} \\ \vdots \\ a_{p} \end{pmatrix}
=
\begin{pmatrix} 0 \\ \vdots \\ m! \\ \vdots \\ 0 \end{pmatrix},$$

where the only non-zero value on the right-hand side is the $m!$ in the $(m+1)$-th row.
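This system can be solved directly; the following is a minimal sketch (plain NumPy; central_coefficients is a hypothetical helper name, not a library routine) that builds the Vandermonde matrix on the offsets $-p, \dots, p$ and solves against the right-hand side described above:

```python
import math
import numpy as np

def central_coefficients(m, n):
    """Central coefficients of the m-th derivative with accuracy n (n even),
    on the offsets -p..p, from the Vandermonde system above."""
    num = 2 * ((m + 1) // 2) - 1 + n           # number of coefficients, 2p + 1
    p = num // 2
    offsets = np.arange(-p, p + 1)
    A = np.vander(offsets, increasing=True).T  # A[i, j] = offsets[j] ** i
    b = np.zeros(num)
    b[m] = math.factorial(m)                   # m! in the (m+1)-th row
    return offsets, np.linalg.solve(A, b)

offsets, coeffs = central_coefficients(m=2, n=4)
print(offsets)  # [-2 -1  0  1  2]
print(coeffs)   # approximately [-1/12, 4/3, -5/2, 4/3, -1/12]
```

Using exact rational arithmetic instead of floating point (e.g. Python's fractions module) would reproduce the table entries exactly.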

An open-source implementation for calculating finite difference coefficients of arbitrary derivatives and accuracy orders in one dimension is available. [2]
Given that the left-hand side matrix is a transposed Vandermonde matrix, a rearrangement reveals that the coefficients are effectively computed by fitting a polynomial of degree $2p$ to a window of $2p+1$ points and differentiating it $m$ times. Consequently, the coefficients can also be computed as the $m$-th order derivative of a fully determined Savitzky–Golay filter with polynomial degree $2p$ and a window size of $2p+1$. For this, open-source implementations are also available. [3] There are two possible definitions, which differ in the ordering of the coefficients: a filter for filtering via discrete convolution, or one applied via a matrix-vector product. The coefficients given in the table above correspond to the latter definition.
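For example, SciPy's scipy.signal.savgol_coeffs (a companion of the savgol_filter routine cited in [3]) can reproduce rows of the central table when the fit is fully determined; in the sketch below, use='dot' selects the matrix-vector ordering just described, while the default use='conv' would return the coefficients in reversed, convolution-ready order:

```python
from scipy.signal import savgol_coeffs

# Second derivative with accuracy 4: window of 5 points, polynomial
# degree 4 (fully determined fit). use='dot' orders the coefficients
# left to right for a dot product with the window, matching the table.
c = savgol_coeffs(5, 4, deriv=2, use='dot')
print(c)  # approximately [-1/12, 4/3, -5/2, 4/3, -1/12]
```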

The theory of Lagrange polynomials provides explicit formulas for the finite difference coefficients. [4] For the first derivative with accuracy $2p$, the central coefficients are

$$a_k = \frac{(-1)^{k+1}\,(p!)^2}{k\,(p-k)!\,(p+k)!}, \qquad k = 1, \dots, p,$$

with $a_0 = 0$ and $a_{-k} = -a_k$, while for the second derivative

$$a_k = \frac{2\,(-1)^{k+1}\,(p!)^2}{k^2\,(p-k)!\,(p+k)!}, \qquad k = 1, \dots, p, \qquad a_0 = -2\,H_p^{(2)}, \qquad a_{-k} = a_k,$$

where $H_p^{(2)} = \sum_{k=1}^{p} \tfrac{1}{k^2}$ is a generalized harmonic number. Analogous closed forms, also involving generalized harmonic numbers, exist for the third through sixth derivatives. [4]
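As an illustration, the closed form for the first derivative can be evaluated with exact rational arithmetic; this sketch (first_derivative_row is our own helper name) reproduces the accuracy-8 row of the central table:

```python
from fractions import Fraction
from math import factorial

def first_derivative_row(p):
    """Exact central coefficients a_{-p}..a_p of the first derivative
    with accuracy 2p, from the Lagrange closed form above."""
    def a(k):
        return Fraction((-1) ** (k + 1) * factorial(p) ** 2,
                        k * factorial(p - k) * factorial(p + k))
    right = [a(k) for k in range(1, p + 1)]
    left = [-c for c in reversed(right)]   # antisymmetric: a_{-k} = -a_k
    return left + [Fraction(0)] + right

print([str(c) for c in first_derivative_row(4)])
# ['1/280', '-4/105', '1/5', '-4/5', '0', '4/5', '-1/5', '4/105', '-1/280']
```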

Forward finite difference

This table contains the coefficients of the forward differences, for several orders of accuracy and with uniform grid spacing: [1]

Derivative 1
  Accuracy 1 (offsets 0 … 1): −1, 1
  Accuracy 2 (offsets 0 … 2): −3/2, 2, −1/2
  Accuracy 3 (offsets 0 … 3): −11/6, 3, −3/2, 1/3
  Accuracy 4 (offsets 0 … 4): −25/12, 4, −3, 4/3, −1/4
  Accuracy 5 (offsets 0 … 5): −137/60, 5, −5, 10/3, −5/4, 1/5
  Accuracy 6 (offsets 0 … 6): −49/20, 6, −15/2, 20/3, −15/4, 6/5, −1/6

Derivative 2
  Accuracy 1 (offsets 0 … 2): 1, −2, 1
  Accuracy 2 (offsets 0 … 3): 2, −5, 4, −1
  Accuracy 3 (offsets 0 … 4): 35/12, −26/3, 19/2, −14/3, 11/12
  Accuracy 4 (offsets 0 … 5): 15/4, −77/6, 107/6, −13, 61/12, −5/6
  Accuracy 5 (offsets 0 … 6): 203/45, −87/5, 117/4, −254/9, 33/2, −27/5, 137/180
  Accuracy 6 (offsets 0 … 7): 469/90, −223/10, 879/20, −949/18, 41, −201/10, 1019/180, −7/10

Derivative 3
  Accuracy 1 (offsets 0 … 3): −1, 3, −3, 1
  Accuracy 2 (offsets 0 … 4): −5/2, 9, −12, 7, −3/2
  Accuracy 3 (offsets 0 … 5): −17/4, 71/4, −59/2, 49/2, −41/4, 7/4
  Accuracy 4 (offsets 0 … 6): −49/8, 29, −461/8, 62, −307/8, 13, −15/8
  Accuracy 5 (offsets 0 … 7): −967/120, 638/15, −3929/40, 389/3, −2545/24, 268/5, −1849/120, 29/15
  Accuracy 6 (offsets 0 … 8): −801/80, 349/6, −18353/120, 2391/10, −1457/6, 4891/30, −561/8, 527/30, −469/240

Derivative 4
  Accuracy 1 (offsets 0 … 4): 1, −4, 6, −4, 1
  Accuracy 2 (offsets 0 … 5): 3, −14, 26, −24, 11, −2
  Accuracy 3 (offsets 0 … 6): 35/6, −31, 137/2, −242/3, 107/2, −19, 17/6
  Accuracy 4 (offsets 0 … 7): 28/3, −111/2, 142, −1219/6, 176, −185/2, 82/3, −7/2
  Accuracy 5 (offsets 0 … 8): 1069/80, −1316/15, 15289/60, −2144/5, 10993/24, −4772/15, 2803/20, −536/15, 967/240

For example, the first derivative with a third-order accuracy and the second derivative with a second-order accuracy are

$$f'(x_0) \approx \frac{-\tfrac{11}{6}\,f_{0} + 3 f_{1} - \tfrac{3}{2}\,f_{2} + \tfrac{1}{3}\,f_{3}}{h} + O\!\left(h^3\right),$$

$$f''(x_0) \approx \frac{2 f_{0} - 5 f_{1} + 4 f_{2} - f_{3}}{h^2} + O\!\left(h^2\right),$$

while the corresponding backward approximations are given by

$$f'(x_0) \approx \frac{\tfrac{11}{6}\,f_{0} - 3 f_{-1} + \tfrac{3}{2}\,f_{-2} - \tfrac{1}{3}\,f_{-3}}{h} + O\!\left(h^3\right),$$

$$f''(x_0) \approx \frac{2 f_{0} - 5 f_{-1} + 4 f_{-2} - f_{-3}}{h^2} + O\!\left(h^2\right).$$
Backward finite difference

To get the coefficients of the backward approximations from those of the forward ones, reverse the order of each row and give all odd derivatives listed in the table in the previous section the opposite sign, whereas for even derivatives the signs stay the same. The following table illustrates this: [5]

Derivative 1
  Accuracy 1 (offsets −1 … 0): −1, 1
  Accuracy 2 (offsets −2 … 0): 1/2, −2, 3/2
  Accuracy 3 (offsets −3 … 0): −1/3, 3/2, −3, 11/6

Derivative 2
  Accuracy 1 (offsets −2 … 0): 1, −2, 1
  Accuracy 2 (offsets −3 … 0): −1, 4, −5, 2

Derivative 3
  Accuracy 1 (offsets −3 … 0): −1, 3, −3, 1
  Accuracy 2 (offsets −4 … 0): 3/2, −7, 12, −9, 5/2

Derivative 4
  Accuracy 1 (offsets −4 … 0): 1, −4, 6, −4, 1
  Accuracy 2 (offsets −5 … 0): −2, 11, −24, 26, −14, 3
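In code, this rule amounts to reversing a forward row and flipping its sign when the derivative order is odd; a minimal sketch (helper name ours):

```python
def backward_from_forward(coeffs, deriv_order):
    """Backward coefficients on offsets -(len-1)..0, given the forward
    coefficients on offsets 0..len-1 for the same derivative and accuracy."""
    sign = -1 if deriv_order % 2 else 1
    return [sign * c for c in reversed(coeffs)]

# Forward third derivative, accuracy 2 (offsets 0..4):
fwd = [-5/2, 9, -12, 7, -3/2]
print(backward_from_forward(fwd, 3))  # [3/2, -7, 12, -9, 5/2]
```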

Arbitrary stencil points

For a given arbitrary stencil of points $s_1, \dots, s_N$ of length $N$ with the derivative order $d < N$, the finite difference coefficients $a_1, \dots, a_N$ can be obtained by solving the linear equation system [6]

$$\begin{pmatrix}
s_{1}^{0} & \cdots & s_{N}^{0} \\
s_{1}^{1} & \cdots & s_{N}^{1} \\
\vdots & & \vdots \\
s_{1}^{N-1} & \cdots & s_{N}^{N-1}
\end{pmatrix}
\begin{pmatrix} a_{1} \\ a_{2} \\ \vdots \\ a_{N} \end{pmatrix}
=
d!\begin{pmatrix} \delta_{0,d} \\ \delta_{1,d} \\ \vdots \\ \delta_{N-1,d} \end{pmatrix},$$

where $\delta_{i,j}$ is the Kronecker delta, equal to one if $i = j$, and zero otherwise.

For example, for the stencil $s = (-3, -2, -1, 0, 1)$ and order of differentiation $d = 4$:

$$\begin{pmatrix}
1 & 1 & 1 & 1 & 1 \\
-3 & -2 & -1 & 0 & 1 \\
9 & 4 & 1 & 0 & 1 \\
-27 & -8 & -1 & 0 & 1 \\
81 & 16 & 1 & 0 & 1
\end{pmatrix}
\begin{pmatrix} a_{1} \\ a_{2} \\ a_{3} \\ a_{4} \\ a_{5} \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 24 \end{pmatrix},
\qquad
\text{giving } (a_1, \dots, a_5) = (1, -4, 6, -4, 1).$$

The order of accuracy of the approximation takes the usual form $O\!\left(h^{N-d}\right)$.
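The same system is straightforward to solve numerically; the sketch below (fd_coefficients is a hypothetical helper name, plain NumPy) reproduces the example above:

```python
import math
import numpy as np

def fd_coefficients(stencil, d):
    """Coefficients of the d-th derivative on the given stencil points
    (requires d < len(stencil))."""
    s = np.asarray(stencil, dtype=float)
    N = len(s)
    A = np.vander(s, increasing=True).T  # A[m, n] = s_n ** m
    b = np.zeros(N)
    b[d] = math.factorial(d)             # d! * delta(m, d)
    return np.linalg.solve(A, b)

print(fd_coefficients([-3, -2, -1, 0, 1], d=4))
# [ 1. -4.  6. -4.  1.]
```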


References

  1. Fornberg, Bengt (1988). "Generation of Finite Difference Formulas on Arbitrarily Spaced Grids". Mathematics of Computation. 51 (184): 699–706. doi:10.1090/S0025-5718-1988-0935077-0. ISSN 0025-5718.
  2. "A Python package for finite difference numerical derivatives in arbitrary number of dimensions". GitHub. 14 October 2021.
  3. "scipy.signal.savgol_filter". SciPy online documentation. 2008–2024.
  4. "Finite differences coefficients". StackExchange. 5 June 2023.
  5. Taylor, Cameron (12 December 2019). "Finite Difference Coefficients Calculator". MIT.
  6. "Finite Difference Coefficients Calculator".