In numerical analysis, numerical differentiation algorithms estimate the derivative of a mathematical function or function subroutine using values of the function and perhaps other knowledge about the function.
The simplest method is to use finite difference approximations.
A simple two-point estimation is to compute the slope of a nearby secant line through the points (x, f(x)) and (x + h, f(x + h)). [1] Choosing a small number h, h represents a small change in x, and it can be either positive or negative. The slope of this line is
\[ \frac{f(x + h) - f(x)}{h}. \]
This expression is Newton's difference quotient (also known as a first-order divided difference).
The slope of this secant line differs from the slope of the tangent line by an amount that is approximately proportional to h. As h approaches zero, the slope of the secant line approaches the slope of the tangent line. Therefore, the true derivative of f at x is the limit of the value of the difference quotient as the secant lines get closer and closer to being a tangent line:
\[ f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}. \]
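For example, with $f(x) = x^2$ at $x = 2$ and $h = 0.1$, the difference quotient gives
\[ \frac{f(2.1) - f(2)}{0.1} = \frac{4.41 - 4}{0.1} = 4.1, \]
while the true derivative is $f'(2) = 4$; the error $0.1 = h \cdot f''(2)/2$ is proportional to h, as expected.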
Since immediately substituting 0 for h results in the indeterminate form $\frac{0}{0}$, calculating the derivative directly can be unintuitive.
Equivalently, the slope could be estimated by employing positions x − h and x.
Another two-point formula is to compute the slope of a nearby secant line through the points (x − h, f(x − h)) and (x + h, f(x + h)). The slope of this line is
\[ \frac{f(x + h) - f(x - h)}{2h}. \]
This formula is known as the symmetric difference quotient. In this case the first-order errors cancel, so the slope of these secant lines differs from the slope of the tangent line by an amount that is approximately proportional to $h^2$. Hence for small values of h this is a more accurate approximation to the tangent line than the one-sided estimation. However, although the slope is being computed at x, the value of the function at x is not involved.
The estimation error is given by
\[ R = \frac{-f^{(3)}(c)}{6} h^2, \]
where $c$ is some point between $x - h$ and $x + h$. This error does not include the rounding error due to numbers being represented and calculations being performed in limited precision.
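The gain in accuracy can be checked numerically. The following C sketch (the test function sin and the point x = 1 are arbitrary illustrative choices) compares both quotients against the exact derivative cos(1):

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 1.0, exact = cos(x);
    for (int k = 1; k <= 5; k++) {
        double h = pow(10.0, -k);
        double fwd = (sin(x + h) - sin(x)) / h;           /* one-sided, O(h) error */
        double ctr = (sin(x + h) - sin(x - h)) / (2*h);   /* symmetric, O(h^2) error */
        printf("h=1e-%d  forward err=%.2e  central err=%.2e\n",
               k, fabs(fwd - exact), fabs(ctr - exact));
    }
    return 0;
}

Each tenfold reduction in h shrinks the one-sided error by about a factor of 10 and the symmetric error by about a factor of 100.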
The symmetric difference quotient is employed as the method of approximating the derivative in a number of calculators, including TI-82, TI-83, TI-84, TI-85, all of which use this method with h = 0.001. [2] [3]
An important consideration in practice when the function is calculated using floating-point arithmetic of finite precision is the choice of step size, h. If chosen too small, the subtraction will yield a large rounding error. In fact, all the finite-difference formulae are ill-conditioned [4] and due to cancellation will produce a value of zero if h is small enough. [5] If chosen too large, the slope of the secant line will be calculated more accurately, but the estimate of the slope of the tangent by using the secant could be worse. [6]
For basic central differences, the optimal step is the cube-root of machine epsilon. [7] For the numerical derivative formula evaluated at x and x + h, a choice for h that is small without producing a large rounding error is $\sqrt{\varepsilon}\,x$ (though not when x = 0), where the machine epsilon ε is typically of the order of 2.2×10⁻¹⁶ for double precision. [8] A formula for h that balances the rounding error against the secant error for optimum accuracy is [9]
\[ h = 2\sqrt{\varepsilon \left| \frac{f(x)}{f''(x)} \right|} \]
(though not when $f''(x) = 0$), and to employ it will require knowledge of the function.
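As a sketch, the two rules of thumb can be computed directly (scale by |x| when x is not near zero):

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    double eps = DBL_EPSILON;  /* ~2.2e-16 in IEEE double precision */
    /* Rule-of-thumb step sizes; multiply by |x| when x is not near zero. */
    printf("forward difference: h ~ sqrt(eps) = %.2e\n", sqrt(eps));
    printf("central difference: h ~ cbrt(eps) = %.2e\n", cbrt(eps));
    return 0;
}

In double precision these evaluate to roughly 1.5×10⁻⁸ and 6×10⁻⁶ respectively.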
For computer calculations the problems are exacerbated because, although x necessarily holds a representable floating-point number in some precision (32 or 64-bit, etc.), x + h almost certainly will not be exactly representable in that precision. This means that x + h will be changed (by rounding or truncation) to a nearby machine-representable number, with the consequence that (x + h) − x will not equal h; the two function evaluations will not be exactly h apart. In this regard, since most decimal fractions are recurring sequences in binary (just as 1/3 is in decimal), a seemingly round step such as h = 0.1 will not be a round number in binary; it is $0.000110011001100\ldots_2$. A possible approach is as follows:
h := sqrt(eps) * x;             // step scaled to the magnitude of x
xph := x + h;                   // rounded to a nearby representable number
dx := xph - x;                  // the step actually taken
slope := (F(xph) - F(x)) / dx;  // difference quotient using the true step
However, with computers, compiler optimization facilities may fail to attend to the details of actual computer arithmetic and instead apply the axioms of mathematics to deduce that dx and h are the same. With C and similar languages, a directive that xph is a volatile variable will prevent this.
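A minimal self-contained C version of the sketch above might look as follows; the test function sin and the point x = 1 are placeholders, and the volatile qualifier keeps the optimizer from collapsing dx back to h:

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 1.0;
    double h = sqrt(DBL_EPSILON) * x;   /* step scaled to x */
    volatile double xph = x + h;        /* force the rounded sum to be stored */
    double dx = xph - x;                /* the step actually representable */
    double slope = (sin(xph) - sin(x)) / dx;
    printf("slope = %.10f (exact %.10f)\n", slope, cos(x));
    return 0;
}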
Higher-order methods for approximating the derivative, as well as methods for higher derivatives, exist.
Given below is the five-point method for the first derivative (five-point stencil in one dimension): [10]
\[ f'(x) = \frac{-f(x + 2h) + 8f(x + h) - 8f(x - h) + f(x - 2h)}{12h} + \frac{h^4}{30} f^{(5)}(c), \]
where $c \in [x - 2h, x + 2h]$.
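A direct C sketch of this stencil (the helper name five_point and the test function are illustrative choices):

#include <math.h>
#include <stdio.h>

/* Five-point stencil approximation of f'(x); truncation error is O(h^4). */
static double five_point(double (*f)(double), double x, double h) {
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h);
}

int main(void) {
    double x = 1.0, h = 1e-3;
    printf("five-point: %.12f (exact: %.12f)\n", five_point(sin, x, h), cos(x));
    return 0;
}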
For other stencil configurations and derivative orders, the Finite Difference Coefficients Calculator is a tool that can be used to generate derivative approximation methods for any stencil with any derivative order (provided a solution exists).
Using Newton's difference quotient, the following can be shown [11] (for n > 0):
\[ f^{(n)}(x) = \lim_{h \to 0} \frac{1}{h^n} \sum_{k=0}^{n} (-1)^{k+n} \binom{n}{k} f(x + kh). \]
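For instance, taking n = 2 gives the familiar second-difference approximation
\[ f''(x) \approx \frac{f(x + 2h) - 2f(x + h) + f(x)}{h^2}. \]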
The classical finite-difference approximations for numerical differentiation are ill-conditioned. However, if $f$ is a holomorphic function, real-valued on the real line, which can be evaluated at points in the complex plane near $x$, then there are stable methods. For example, [5] the first derivative can be calculated by the complex-step derivative formula: [12] [13] [14]
\[ f'(x) \approx \frac{\operatorname{Im}\!\big(f(x + ih)\big)}{h}. \]
The recommended step size to obtain accurate derivatives for a range of conditions is $h = 10^{-200}$. [6] This formula can be obtained by Taylor series expansion:
\[ f(x + ih) = f(x) + ih f'(x) - \tfrac{1}{2!} h^2 f''(x) - \tfrac{i}{3!} h^3 f'''(x) + \cdots, \]
so that
\[ \frac{\operatorname{Im}\!\big(f(x + ih)\big)}{h} = f'(x) - \tfrac{1}{3!} h^2 f'''(x) + \cdots. \]
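Since C99 supports complex arithmetic, the formula is short in practice; in this sketch csin stands in for an analytic function of interest:

#include <complex.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 1.0, h = 1e-200;           /* tiny step: no subtractive cancellation */
    double complex z = csin(x + h * I);   /* evaluate f at x + ih */
    double deriv = cimag(z) / h;          /* Im(f(x + ih)) / h */
    printf("complex step: %.17f\nexact:        %.17f\n", deriv, cos(x));
    return 0;
}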
The complex-step derivative formula is only valid for calculating first-order derivatives. A generalization of the above for calculating derivatives of any order employs multicomplex numbers, resulting in multicomplex derivatives: [15] [16] [17]
\[ f^{(n)}(x) \approx \frac{\mathcal{Im}^{(2^n - 1)}\!\left[ f\!\left( x + \sum_{k=1}^{n} i^{(k)} h \right) \right]}{h^n}, \]
where the $i^{(k)}$ denote the multicomplex imaginary units; $\big(i^{(k)}\big)^2 = -1$. The operator $\mathcal{Im}^{(j)}$ extracts the $j$th component of a multicomplex number of level $n$, e.g., $\mathcal{Im}^{(0)}$ extracts the real component and $\mathcal{Im}^{(2^n - 1)}$ extracts the last, “most imaginary” component. The method can be applied to mixed derivatives, e.g. for a second-order derivative
\[ \frac{\partial^2 f}{\partial x \, \partial y} \approx \frac{\mathcal{Im}^{(3)}\!\left[ f\!\left( x + i^{(1)} h,\; y + i^{(2)} h \right) \right]}{h^2}. \]
A C++ implementation of multicomplex arithmetic is available. [18]
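As an illustrative sketch (independent of the implementation referenced above), the bicomplex case n = 2 needs only a four-component struct and a multiplication rule; the cubic test function keeps the arithmetic to additions and multiplications:

#include <stdio.h>

/* Bicomplex number a + b*i1 + c*i2 + d*i1*i2, with i1^2 = i2^2 = -1. */
typedef struct { double a, b, c, d; } bicomplex;

static bicomplex bc_mul(bicomplex p, bicomplex q) {
    bicomplex r;
    r.a = p.a*q.a - p.b*q.b - p.c*q.c + p.d*q.d;
    r.b = p.a*q.b + p.b*q.a - p.c*q.d - p.d*q.c;
    r.c = p.a*q.c + p.c*q.a - p.b*q.d - p.d*q.b;
    r.d = p.a*q.d + p.d*q.a + p.b*q.c + p.c*q.b;
    return r;
}

/* Test function f(z) = z^3, so f''(x) = 6x. */
static bicomplex f(bicomplex z) { return bc_mul(z, bc_mul(z, z)); }

int main(void) {
    double x = 2.0, h = 1e-100;
    bicomplex z = { x, h, h, 0.0 };   /* x + i1*h + i2*h */
    bicomplex w = f(z);
    /* The "most imaginary" component divided by h^2 estimates f''(x). */
    printf("bicomplex step: %.12f (exact: %.12f)\n", w.d / (h*h), 6.0*x);
    return 0;
}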
In general, derivatives of any order can be calculated using Cauchy's integral formula: [19]
\[ f^{(n)}(a) = \frac{n!}{2\pi i} \oint_{\gamma} \frac{f(z)}{(z - a)^{n+1}} \, dz, \]
where the integration is done numerically.
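Parameterizing the contour as a circle $z = a + re^{i\theta}$ turns the integral into a periodic one, for which the trapezoidal rule converges very quickly. A C sketch (the radius, node count, and test function exp are illustrative choices):

#include <complex.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Third derivative of exp at a = 0 (exact value: 1) via Cauchy's formula,
       using the trapezoidal rule with m nodes on a circle of radius r. */
    int n = 3, m = 64;
    double r = 1.0, a = 0.0;
    double fact = 6.0;                          /* n! for n = 3 */
    const double PI = acos(-1.0);
    double complex sum = 0.0;
    for (int k = 0; k < m; k++) {
        double theta = 2.0 * PI * k / m;
        double complex z = a + r * cexp(I * theta);
        sum += cexp(z) * cexp(-I * n * theta);  /* f(z) * e^{-i n theta} */
    }
    double complex deriv = fact * sum / (m * pow(r, n));
    printf("Cauchy estimate: %.12f (exact: 1)\n", creal(deriv));
    return 0;
}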
Using complex variables for numerical differentiation was started by Lyness and Moler in 1967. [20] Their algorithm is applicable to higher-order derivatives.
A method based on numerical inversion of a complex Laplace transform was developed by Abate and Dubner. [21] An algorithm that can be used without requiring knowledge about the method or the character of the function was developed by Fornberg. [4]
Differential quadrature is the approximation of derivatives by using weighted sums of function values. [22] [23] Differential quadrature is of practical interest because it allows one to compute derivatives from noisy data. The name is in analogy with quadrature, meaning numerical integration, where weighted sums are used in methods such as Simpson's rule or the trapezoidal rule. There are various methods for determining the weight coefficients, for example, the Savitzky–Golay filter. Differential quadrature is used to solve partial differential equations. There are further methods for computing derivatives from noisy data. [24]
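As a small illustration of the weighted-sum idea, the following C sketch applies a five-point Savitzky–Golay first-derivative filter (quadratic fit on equally spaced samples, with the standard weights (−2, −1, 0, 1, 2)/10); the sample values include small artificial noise:

#include <stdio.h>

/* Five-point Savitzky-Golay first-derivative filter:
   estimates f' at the center sample of noisy, equally spaced data. */
static double sg_derivative(const double y[5], double h) {
    return (-2*y[0] - y[1] + y[3] + 2*y[4]) / (10.0 * h);
}

int main(void) {
    /* Noisy samples of f(x) = x^2 at x = 0.8, 0.9, 1.0, 1.1, 1.2. */
    double h = 0.1;
    double y[5] = { 0.641, 0.809, 1.002, 1.209, 1.441 };
    printf("estimated f'(1.0) = %.4f (exact: 2.0)\n", sg_derivative(y, h));
    return 0;
}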