In mathematics, a derivative can be approximated to an arbitrary order of accuracy by means of finite differences. A finite difference can be central, forward, or backward.
This table contains the coefficients of the central differences, for several orders of accuracy and with uniform grid spacing: [1]
Derivative | Accuracy | −5 | −4 | −3 | −2 | −1 | 0 | 1 | 2 | 3 | 4 | 5 |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 2 |  |  |  |  | −1/2 | 0 | 1/2 |  |  |  |  |
1 | 4 |  |  |  | 1/12 | −2/3 | 0 | 2/3 | −1/12 |  |  |  |
1 | 6 |  |  | −1/60 | 3/20 | −3/4 | 0 | 3/4 | −3/20 | 1/60 |  |  |
1 | 8 |  | 1/280 | −4/105 | 1/5 | −4/5 | 0 | 4/5 | −1/5 | 4/105 | −1/280 |  |
2 | 2 |  |  |  |  | 1 | −2 | 1 |  |  |  |  |
2 | 4 |  |  |  | −1/12 | 4/3 | −5/2 | 4/3 | −1/12 |  |  |  |
2 | 6 |  |  | 1/90 | −3/20 | 3/2 | −49/18 | 3/2 | −3/20 | 1/90 |  |  |
2 | 8 |  | −1/560 | 8/315 | −1/5 | 8/5 | −205/72 | 8/5 | −1/5 | 8/315 | −1/560 |  |
3 | 2 |  |  |  | −1/2 | 1 | 0 | −1 | 1/2 |  |  |  |
3 | 4 |  |  | 1/8 | −1 | 13/8 | 0 | −13/8 | 1 | −1/8 |  |  |
3 | 6 |  | −7/240 | 3/10 | −169/120 | 61/30 | 0 | −61/30 | 169/120 | −3/10 | 7/240 |  |
4 | 2 |  |  |  | 1 | −4 | 6 | −4 | 1 |  |  |  |
4 | 4 |  |  | −1/6 | 2 | −13/2 | 28/3 | −13/2 | 2 | −1/6 |  |  |
4 | 6 |  | 7/240 | −2/5 | 169/60 | −122/15 | 91/8 | −122/15 | 169/60 | −2/5 | 7/240 |  |
5 | 2 |  |  | −1/2 | 2 | −5/2 | 0 | 5/2 | −2 | 1/2 |  |  |
5 | 4 |  | 1/6 | −3/2 | 13/3 | −29/6 | 0 | 29/6 | −13/3 | 3/2 | −1/6 |  |
5 | 6 | −13/288 | 19/36 | −87/32 | 13/2 | −323/48 | 0 | 323/48 | −13/2 | 87/32 | −19/36 | 13/288 |
6 | 2 |  |  | 1 | −6 | 15 | −20 | 15 | −6 | 1 |  |  |
6 | 4 |  | −1/4 | 3 | −13 | 29 | −75/2 | 29 | −13 | 3 | −1/4 |  |
6 | 6 | 13/240 | −19/24 | 87/16 | −39/2 | 323/8 | −1023/20 | 323/8 | −39/2 | 87/16 | −19/24 | 13/240 |
For example, the third derivative with a second-order accuracy is

$$f'''(x_0) \approx \frac{-\frac{1}{2}f_{-2} + f_{-1} - f_{+1} + \frac{1}{2}f_{+2}}{h^3} + O\left(h^2\right),$$

where $h$ represents a uniform grid spacing between each finite difference interval, and $f_n = f(x_0 + nh)$.
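This stencil can be checked numerically. A minimal sketch with $f = \sin$, for which $f''' = -\cos$:

```python
import math

# Central second-order stencil for f'''(x0), applied to f = sin,
# whose exact third derivative is -cos(x0).
f, x0, h = math.sin, 0.3, 0.05
d3 = (-0.5*f(x0 - 2*h) + f(x0 - h) - f(x0 + h) + 0.5*f(x0 + 2*h)) / h**3
print(d3, -math.cos(x0))   # the two values agree to O(h**2)
```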
For the $n$-th derivative with accuracy $p$, there are $2\left\lfloor\frac{n+1}{2}\right\rfloor - 1 + p$ central coefficients $a_{-P}, \dots, a_{P}$, where $P = \frac{p}{2} + \left\lfloor\frac{n+1}{2}\right\rfloor - 1$. These are given by the solution of the linear equation system

$$\begin{bmatrix}
1 & 1 & \cdots & 1 \\
-P & -P+1 & \cdots & P \\
\vdots & \vdots & & \vdots \\
(-P)^{2P} & (-P+1)^{2P} & \cdots & P^{2P}
\end{bmatrix}
\begin{bmatrix} a_{-P} \\ a_{-P+1} \\ \vdots \\ a_{P} \end{bmatrix}
=
\begin{bmatrix} 0 \\ \vdots \\ n! \\ \vdots \\ 0 \end{bmatrix},$$

where the only non-zero value on the right-hand side is the $n!$ in the $(n+1)$-th row.
An open source implementation for calculating finite difference coefficients of arbitrary derivatives and orders of accuracy in one dimension is available. [2]
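The linear system above can also be solved directly. Here is a minimal NumPy sketch (this is not the implementation from [2]; the function name `central_coefficients` is our own):

```python
import math
import numpy as np

def central_coefficients(n: int, p: int) -> np.ndarray:
    """Central coefficients for the n-th derivative with accuracy p (p even),
    on a uniform grid with spacing 1 (divide by h**n for spacing h)."""
    P = p // 2 + (n + 1) // 2 - 1              # stencil half-width
    k = np.arange(-P, P + 1)                   # stencil offsets -P, ..., P
    m = np.arange(2 * P + 1)                   # moment orders 0, ..., 2P
    A = k[np.newaxis, :] ** m[:, np.newaxis]   # transposed Vandermonde matrix
    b = np.zeros(2 * P + 1)
    b[n] = math.factorial(n)                   # the single non-zero entry, row n+1
    return np.linalg.solve(A, b)

print(central_coefficients(3, 2))   # [-0.5  1.  0. -1.  0.5], as in the table
```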
Given that the left-hand side matrix is a transposed Vandermonde matrix, a rearrangement reveals that the coefficients are effectively computed by fitting a polynomial of degree $2P$ to a window of $2P+1$ points and differentiating it $n$ times. Consequently, the coefficients can also be computed as the $n$-th order derivative of a fully determined Savitzky–Golay filter with polynomial degree $2P$ and a window size of $2P+1$. For this, open source implementations are also available. [3] There are two possible definitions, which differ in the ordering of the coefficients: a filter for filtering via discrete convolution, or via a matrix-vector product. The coefficients given in the table above correspond to the latter definition.
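As a sketch of that correspondence using SciPy's `savgol_coeffs`: a fully determined filter (polynomial degree one less than the window size), differentiated, should reproduce a row of the central table; the expected output in the comment is our reading of the table above:

```python
from scipy.signal import savgol_coeffs

# Fully determined Savitzky-Golay filter: window 5, polynomial degree 4,
# differentiated three times. use='dot' orders the coefficients for a
# matrix-vector product, the convention of the table above.
c = savgol_coeffs(5, 4, deriv=3, use='dot')
print(c)   # expected (up to floating point): [-0.5, 1, 0, -1, 0.5]
```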
The theory of Lagrange polynomials provides explicit formulas for the finite difference coefficients of the first six derivatives. [4] For a central stencil of half-width $p$ (that is, $2p+1$ points and order of accuracy $2p$), the formulas for the first two derivatives read:

Derivative | $a_k$ for $k \neq 0$ | $a_0$ |
---|---|---|
1 | $\dfrac{(-1)^{k+1}\,(p!)^2}{k\,(p+k)!\,(p-k)!}$ | $0$ |
2 | $\dfrac{2\,(-1)^{k+1}\,(p!)^2}{k^2\,(p+k)!\,(p-k)!}$ | $-2H_p^{(2)}$ |

where $H_p^{(m)} = \sum_{j=1}^{p} j^{-m}$ are generalized harmonic numbers; [4] gives analogous closed forms for the third through sixth derivatives.
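A quick check of the second-derivative formula against the central table (a minimal sketch; the helper name `a_second` is ours):

```python
from fractions import Fraction
from math import factorial

def a_second(k: int, p: int) -> Fraction:
    """Explicit central coefficient a_k for the 2nd derivative on 2p+1 points."""
    if k == 0:
        return -2 * sum(Fraction(1, j * j) for j in range(1, p + 1))  # -2 * H_p^(2)
    sign = -1 if k % 2 == 0 else 1                                    # (-1)**(k+1)
    return Fraction(2 * sign * factorial(p) ** 2,
                    k * k * factorial(p + k) * factorial(p - k))

# p = 3 reproduces the sixth-order row of the central table:
print([a_second(k, 3) for k in range(-3, 4)])
# [1/90, -3/20, 3/2, -49/18, 3/2, -3/20, 1/90]
```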
This table contains the coefficients of the forward differences, for several orders of accuracy and with uniform grid spacing: [1]
Derivative | Accuracy | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
---|---|---|---|---|---|---|---|---|---|---|
1 | 1 | −1 | 1 |  |  |  |  |  |  |  |
1 | 2 | −3/2 | 2 | −1/2 |  |  |  |  |  |  |
1 | 3 | −11/6 | 3 | −3/2 | 1/3 |  |  |  |  |  |
1 | 4 | −25/12 | 4 | −3 | 4/3 | −1/4 |  |  |  |  |
1 | 5 | −137/60 | 5 | −5 | 10/3 | −5/4 | 1/5 |  |  |  |
1 | 6 | −49/20 | 6 | −15/2 | 20/3 | −15/4 | 6/5 | −1/6 |  |  |
2 | 1 | 1 | −2 | 1 |  |  |  |  |  |  |
2 | 2 | 2 | −5 | 4 | −1 |  |  |  |  |  |
2 | 3 | 35/12 | −26/3 | 19/2 | −14/3 | 11/12 |  |  |  |  |
2 | 4 | 15/4 | −77/6 | 107/6 | −13 | 61/12 | −5/6 |  |  |  |
2 | 5 | 203/45 | −87/5 | 117/4 | −254/9 | 33/2 | −27/5 | 137/180 |  |  |
2 | 6 | 469/90 | −223/10 | 879/20 | −949/18 | 41 | −201/10 | 1019/180 | −7/10 |  |
3 | 1 | −1 | 3 | −3 | 1 |  |  |  |  |  |
3 | 2 | −5/2 | 9 | −12 | 7 | −3/2 |  |  |  |  |
3 | 3 | −17/4 | 71/4 | −59/2 | 49/2 | −41/4 | 7/4 |  |  |  |
3 | 4 | −49/8 | 29 | −461/8 | 62 | −307/8 | 13 | −15/8 |  |  |
3 | 5 | −967/120 | 638/15 | −3929/40 | 389/3 | −2545/24 | 268/5 | −1849/120 | 29/15 |  |
3 | 6 | −801/80 | 349/6 | −18353/120 | 2391/10 | −1457/6 | 4891/30 | −561/8 | 527/30 | −469/240 |
4 | 1 | 1 | −4 | 6 | −4 | 1 |  |  |  |  |
4 | 2 | 3 | −14 | 26 | −24 | 11 | −2 |  |  |  |
4 | 3 | 35/6 | −31 | 137/2 | −242/3 | 107/2 | −19 | 17/6 |  |  |
4 | 4 | 28/3 | −111/2 | 142 | −1219/6 | 176 | −185/2 | 82/3 | −7/2 |  |
4 | 5 | 1069/80 | −1316/15 | 15289/60 | −2144/5 | 10993/24 | −4772/15 | 2803/20 | −536/15 | 967/240 |
For example, the first derivative with a third-order accuracy and the second derivative with a second-order accuracy are

$$f'(x_0) \approx \frac{-\frac{11}{6}f_0 + 3f_{+1} - \frac{3}{2}f_{+2} + \frac{1}{3}f_{+3}}{h} + O\left(h^3\right),$$

$$f''(x_0) \approx \frac{2f_0 - 5f_{+1} + 4f_{+2} - f_{+3}}{h^2} + O\left(h^2\right),$$

while the corresponding backward approximations are given by

$$f'(x_0) \approx \frac{\frac{11}{6}f_0 - 3f_{-1} + \frac{3}{2}f_{-2} - \frac{1}{3}f_{-3}}{h} + O\left(h^3\right),$$

$$f''(x_0) \approx \frac{2f_0 - 5f_{-1} + 4f_{-2} - f_{-3}}{h^2} + O\left(h^2\right).$$
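A minimal numeric check of the third-order forward formula, taking $f = \exp$ so the exact derivative is known:

```python
import math

# Third-order forward difference for f'(x0); with f = exp we have f' = f,
# so the exact value is simply f(x0).
f, x0, h = math.exp, 0.0, 0.1
approx = (-11/6*f(x0) + 3*f(x0 + h) - 3/2*f(x0 + 2*h) + 1/3*f(x0 + 3*h)) / h
print(approx, abs(approx - f(x0)))   # the error shrinks like h**3
```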
To get the coefficients of the backward approximations from those of the forward ones, mirror the stencil (use offsets 0, −1, −2, … instead of 0, 1, 2, …) and give all odd derivatives listed in the table in the previous section the opposite sign, whereas for even derivatives the signs stay the same. The following table illustrates this: [5]
Derivative | Accuracy | −8 | −7 | −6 | −5 | −4 | −3 | −2 | −1 | 0 |
---|---|---|---|---|---|---|---|---|---|---|
1 | 1 |  |  |  |  |  |  |  | −1 | 1 |
1 | 2 |  |  |  |  |  |  | 1/2 | −2 | 3/2 |
1 | 3 |  |  |  |  |  | −1/3 | 3/2 | −3 | 11/6 |
2 | 1 |  |  |  |  |  |  | 1 | −2 | 1 |
2 | 2 |  |  |  |  |  | −1 | 4 | −5 | 2 |
3 | 1 |  |  |  |  |  | −1 | 3 | −3 | 1 |
3 | 2 |  |  |  |  | 3/2 | −7 | 12 | −9 | 5/2 |
4 | 1 |  |  |  |  | 1 | −4 | 6 | −4 | 1 |
4 | 2 |  |  |  | −2 | 11 | −24 | 26 | −14 | 3 |
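The rule above, expressed as a one-line transformation (a sketch; the helper name `backward_from_forward` is hypothetical):

```python
def backward_from_forward(coeffs, d):
    """Backward coefficients from forward ones: mirror the stencil and
    flip the sign when the derivative order d is odd."""
    sign = -1 if d % 2 else 1
    return [sign * c for c in reversed(coeffs)]

print(backward_from_forward([-3/2, 2, -1/2], d=1))   # [0.5, -2.0, 1.5], as in the table
```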
For arbitrary stencil points $s_1, \dots, s_N$ and any derivative of order $d < N$ (up to one less than the number of stencil points), the finite difference coefficients $a_1, \dots, a_N$ can be obtained by solving the linear equations [6]

$$\begin{bmatrix}
s_1^0 & \cdots & s_N^0 \\
\vdots & \ddots & \vdots \\
s_1^{N-1} & \cdots & s_N^{N-1}
\end{bmatrix}
\begin{bmatrix} a_1 \\ \vdots \\ a_N \end{bmatrix}
= d!\,
\begin{bmatrix} \delta_{0,d} \\ \vdots \\ \delta_{i,d} \\ \vdots \\ \delta_{N-1,d} \end{bmatrix},$$

where $\delta_{i,j}$ is the Kronecker delta, equal to one if $i = j$, and zero otherwise.

For example, for $s = [-3, -2, -1, 0, 1]$ and order of differentiation $d = 4$, the system reads

$$\begin{bmatrix}
1 & 1 & 1 & 1 & 1 \\
-3 & -2 & -1 & 0 & 1 \\
9 & 4 & 1 & 0 & 1 \\
-27 & -8 & -1 & 0 & 1 \\
81 & 16 & 1 & 0 & 1
\end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \end{bmatrix}
= 4!\,
\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix},$$

with solution $a = [1, -4, 6, -4, 1]$.

The order of accuracy of the approximation takes the usual form $O\left(h^{N-d}\right)$ [citation needed].
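A minimal NumPy sketch of this system, assuming unit grid spacing (the function name `fd_coefficients` is ours):

```python
import math
import numpy as np

def fd_coefficients(stencil, d):
    """Coefficients of the d-th derivative (d < len(stencil)) on arbitrary
    stencil points, for unit spacing; divide by h**d for spacing h."""
    s = np.asarray(stencil, dtype=float)
    N = len(s)
    A = s[np.newaxis, :] ** np.arange(N)[:, np.newaxis]   # A[i, j] = s_j ** i
    b = np.zeros(N)
    b[d] = math.factorial(d)                              # d! * delta_{i,d}
    return np.linalg.solve(A, b)

print(fd_coefficients([-3, -2, -1, 0, 1], d=4))   # [ 1. -4.  6. -4.  1.]
```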