Differential of a function

In calculus, the differential represents the principal part of the change in a function $y = f(x)$ with respect to changes in the independent variable. The differential $dy$ is defined by

$$dy = f'(x)\,dx,$$

where $f'(x)$ is the derivative of $f$ with respect to $x$, and $dx$ is an additional real variable (so that $dy$ is a function of $x$ and $dx$). The notation is such that the equation

$$dy = \frac{dy}{dx}\,dx$$

holds, where the derivative is represented in the Leibniz notation $dy/dx$, and this is consistent with regarding the derivative as the quotient of the differentials. One also writes

$$df(x) = f'(x)\,dx.$$

The precise meaning of the variables $dy$ and $dx$ depends on the context of the application and the required level of mathematical rigor. The domain of these variables may take on a particular geometrical significance if the differential is regarded as a particular differential form, or analytical significance if the differential is regarded as a linear approximation to the increment of a function. Traditionally, the variables $dx$ and $dy$ are considered to be very small (infinitesimal), and this interpretation is made rigorous in non-standard analysis.

History and usage

The differential was first introduced via an intuitive or heuristic definition by Isaac Newton and furthered by Gottfried Leibniz, who thought of the differential $dy$ as an infinitely small (or infinitesimal) change in the value $y$ of the function, corresponding to an infinitely small change $dx$ in the function's argument $x$. For that reason, the instantaneous rate of change of $y$ with respect to $x$, which is the value of the derivative of the function, is denoted by the fraction

$$\frac{dy}{dx}$$

in what is called the Leibniz notation for derivatives. The quotient $dy/dx$ is not infinitely small; rather it is a real number.

The use of infinitesimals in this form was widely criticized, for instance by the famous pamphlet The Analyst by Bishop Berkeley. Augustin-Louis Cauchy (1823) defined the differential without appeal to the atomism of Leibniz's infinitesimals. [1] [2] Instead, Cauchy, following d'Alembert, inverted the logical order of Leibniz and his successors: the derivative itself became the fundamental object, defined as a limit of difference quotients, and the differentials were then defined in terms of it. That is, one was free to define the differential by an expression

$$dy = f'(x)\,dx,$$

in which $dy$ and $dx$ are simply new variables taking finite real values, [3] not fixed infinitesimals as they had been for Leibniz. [4]

According to Boyer (1959, p. 275), Cauchy's approach was a significant logical improvement over the infinitesimal approach of Leibniz because, instead of invoking the metaphysical notion of infinitesimals, the quantities $dy$ and $dx$ could now be manipulated in exactly the same manner as any other real quantities in a meaningful way. Cauchy's overall conceptual approach to differentials remains the standard one in modern analytical treatments, [5] although the final word on rigor, a fully modern notion of the limit, was ultimately due to Karl Weierstrass. [6]

In physical treatments, such as those applied to the theory of thermodynamics, the infinitesimal view still prevails. Courant & John (1999, p. 184) reconcile the physical use of infinitesimal differentials with the mathematical impossibility of them as follows. The differentials represent finite non-zero values that are smaller than the degree of accuracy required for the particular purpose for which they are intended. Thus "physical infinitesimals" need not appeal to a corresponding mathematical infinitesimal in order to have a precise sense.

Following twentieth-century developments in mathematical analysis and differential geometry, it became clear that the notion of the differential of a function could be extended in a variety of ways. In real analysis, it is more desirable to deal directly with the differential as the principal part of the increment of a function. This leads directly to the notion that the differential of a function at a point is a linear functional of an increment $\Delta x$. This approach allows the differential (as a linear map) to be developed for a variety of more sophisticated spaces, ultimately giving rise to such notions as the Fréchet or Gateaux derivative. Likewise, in differential geometry, the differential of a function at a point is a linear function of a tangent vector (an "infinitely small displacement"), which exhibits it as a kind of one-form: the exterior derivative of the function. In non-standard calculus, differentials are regarded as infinitesimals, which can themselves be put on a rigorous footing (see differential (infinitesimal)).

Definition

[Figure: The differential of a function $f(x)$ at a point $x_0$.]

The differential is defined in modern treatments of differential calculus as follows. [7] The differential of a function $f(x)$ of a single real variable $x$ is the function $df$ of two independent real variables $x$ and $\Delta x$ given by

$$df(x, \Delta x) = f'(x)\,\Delta x.$$

One or both of the arguments may be suppressed, i.e., one may see $df(x)$ or simply $df$. If $y = f(x)$, the differential may also be written as $dy$. Since $dx(x, \Delta x) = \Delta x$, it is conventional to write $dx = \Delta x$, so that the following equality holds:

$$df(x) = f'(x)\,dx.$$

This notion of differential is broadly applicable when a linear approximation to a function is sought, in which the value of the increment $\Delta x$ is small enough. More precisely, if $f$ is a differentiable function at $x$, then the difference in $y$-values

$$\Delta y := f(x + \Delta x) - f(x)$$

satisfies

$$\Delta y = f'(x)\,\Delta x + \varepsilon = df(x) + \varepsilon,$$

where the error $\varepsilon$ in the approximation satisfies $\varepsilon / \Delta x \to 0$ as $\Delta x \to 0$. In other words, one has the approximate identity

$$\Delta y \approx dy,$$

in which the error can be made as small as desired relative to $\Delta x$ by constraining $\Delta x$ to be sufficiently small; that is to say,

$$\frac{\Delta y - dy}{\Delta x} \to 0$$

as $\Delta x \to 0$. For this reason, the differential of a function is known as the principal (linear) part in the increment of a function: the differential is a linear function of the increment $\Delta x$, and although the error $\varepsilon$ may be nonlinear, it tends to zero rapidly as $\Delta x$ tends to zero.
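To make the principal-part property concrete, the following is a minimal numerical sketch in Python; it is not part of the original article, and the choice of $f(x) = \sin x$ and the step sizes are arbitrary. It tabulates the actual increment $\Delta y$, the differential $dy = f'(x)\,\Delta x$, and their difference:

```python
import math

def f(x):
    return math.sin(x)

def df(x, dx):
    # Differential of f: df(x, dx) = f'(x) * dx, with f'(x) = cos(x).
    return math.cos(x) * dx

x = 1.0
for dx in (0.1, 0.01, 0.001):
    delta_y = f(x + dx) - f(x)   # actual increment
    dy = df(x, dx)               # principal (linear) part
    err = delta_y - dy           # higher-order remainder
    print(f"dx={dx:<6} dy={dy:+.6f} delta_y={delta_y:+.6f} err/dx={err/dx:+.2e}")
```

The printed ratio $\mathrm{err}/dx$ shrinks in proportion to $dx$, which is exactly the statement that $(\Delta y - dy)/\Delta x \to 0$.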

Differentials in several variables

Operator \ Function      f(x)                    H(x, y)
Differential             1: df(x) = f′(x) dx     2: d_x H = (∂H/∂x) dx
                                                 3: dH = (∂H/∂x) dx + (∂H/∂y) dy
Partial derivative       df/dx = f′(x)           ∂H/∂x
Total derivative         df/dx = f′(x)           dH/dx = ∂H/∂x + (∂H/∂y)(dy/dx)

Following Goursat (1904, I, §15), for functions of more than one independent variable,

$$y = f(x_1, \ldots, x_n),$$

the partial differential of $y$ with respect to any one of the variables $x_1$ is the principal part of the change in $y$ resulting from a change $dx_1$ in that one variable. The partial differential is therefore

$$\frac{\partial y}{\partial x_1}\,dx_1,$$

involving the partial derivative of $y$ with respect to $x_1$. The sum of the partial differentials with respect to all of the independent variables is the total differential

$$dy = \frac{\partial y}{\partial x_1}\,dx_1 + \cdots + \frac{\partial y}{\partial x_n}\,dx_n,$$

which is the principal part of the change in $y$ resulting from changes in the independent variables $x_i$.

More precisely, in the context of multivariable calculus, following Courant (1937b), if $f$ is a differentiable function, then by the definition of differentiability, the increment

$$\Delta y = f(x_1 + \Delta x_1, \ldots, x_n + \Delta x_n) - f(x_1, \ldots, x_n) = \frac{\partial y}{\partial x_1}\,\Delta x_1 + \cdots + \frac{\partial y}{\partial x_n}\,\Delta x_n + \varepsilon_1\,\Delta x_1 + \cdots + \varepsilon_n\,\Delta x_n,$$

where the error terms $\varepsilon_i$ tend to zero as the increments $\Delta x_i$ jointly tend to zero. The total differential is then rigorously defined as

$$dy = \frac{\partial y}{\partial x_1}\,\Delta x_1 + \cdots + \frac{\partial y}{\partial x_n}\,\Delta x_n.$$

Since, with this definition, $dx_i(\Delta x_1, \ldots, \Delta x_n) = \Delta x_i$, one has

$$dy = \frac{\partial y}{\partial x_1}\,dx_1 + \cdots + \frac{\partial y}{\partial x_n}\,dx_n.$$

As in the case of one variable, the approximate identity

$$dy \approx \Delta y$$

holds, in which the total error can be made as small as desired relative to $\sqrt{\Delta x_1^2 + \cdots + \Delta x_n^2}$ by confining attention to sufficiently small increments.
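As an illustrative sketch (not from the original text; the function $f(x, y) = x^2 e^y$ and the increments are arbitrary choices), the total differential can be compared numerically against the actual increment in Python:

```python
import math

def f(x, y):
    return x**2 * math.exp(y)

def dz(x, y, dx, dy):
    # Total differential: dz = (∂f/∂x) dx + (∂f/∂y) dy
    #                        = 2x e^y dx + x^2 e^y dy.
    return 2 * x * math.exp(y) * dx + x**2 * math.exp(y) * dy

x, y = 1.0, 0.5
dx, dy = 1e-3, -2e-3
print(dz(x, y, dx, dy))             # principal part of the change
print(f(x + dx, y + dy) - f(x, y))  # actual increment: agrees to higher order
```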

Application of the total differential to error estimation

In measurement, the total differential is used in estimating the error $\Delta f$ of a function $f$ based on the errors $\Delta x, \Delta y, \ldots$ of the parameters $x, y, \ldots$. Assuming that the interval is short enough for the change to be approximately linear,

$$\Delta f(x) \approx f'(x) \cdot \Delta x,$$

and that all variables are independent, then for all variables,

$$\Delta f = f_x\,\Delta x + f_y\,\Delta y + \cdots$$

This is because the derivative $f_x$ with respect to the particular parameter $x$ gives the sensitivity of the function $f$ to a change in $x$, in particular the error $\Delta x$. As they are assumed to be independent, the analysis describes the worst-case scenario. The absolute values of the component errors are used, because after simple computation, the derivative may have a negative sign. From this principle the error rules of summation, multiplication etc. are derived, e.g.:

Let $f(a, b) = a \times b$;
$\Delta f = f_a\,\Delta a + f_b\,\Delta b$; evaluating the derivatives:
$\Delta f = b\,\Delta a + a\,\Delta b$; dividing by $f$, which is $a \times b$:
$\Delta f / f = \Delta a / a + \Delta b / b$

That is to say, in multiplication, the total relative error is the sum of the relative errors of the parameters.
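A quick numerical check in Python (the values of $a$, $b$ and their errors below are illustrative only, not from the original article):

```python
a, b = 3.0, 5.0
da, db = 0.03, 0.02             # absolute errors in the measured a and b

f = a * b
df = abs(b * da) + abs(a * db)  # worst-case error from the total differential
print(df / f)                   # 0.014
print(da / a + db / b)          # 0.014: the relative errors simply add
```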

To illustrate how this depends on the function considered, consider the case where the function is $f = a \ln b$ instead. Then, it can be computed that the error estimate is

$$\frac{\Delta f}{f} = \frac{\Delta a}{a} + \frac{\Delta b}{b \ln b},$$

with an extra $\ln b$ factor not found in the case of a simple product. This additional factor tends to make the error smaller, as $\ln b$ is not as large as a bare $b$.
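The same numerical check can be repeated for this function; again, the particular values below are arbitrary illustrations:

```python
import math

a, b = 3.0, 5.0
da, db = 0.03, 0.02

f = a * math.log(b)
# Partial derivatives: f_a = ln b, f_b = a / b.
df = abs(math.log(b) * da) + abs(a / b * db)
print(df / f)                           # total relative error, about 0.0125
print(da / a + db / (b * math.log(b)))  # same value, exhibiting the extra ln b factor
```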

Higher-order differentials

Higher-order differentials of a function $y = f(x)$ of a single variable $x$ can be defined via: [8]

$$d^2 y = d(dy) = d\bigl(f'(x)\,dx\bigr) = \bigl(d f'(x)\bigr)\,dx = f''(x)\,(dx)^2,$$

and, in general,

$$d^n y = f^{(n)}(x)\,(dx)^n.$$

Informally, this motivates Leibniz's notation for higher-order derivatives

$$f^{(n)}(x) = \frac{d^n f}{dx^n}.$$

When the independent variable $x$ itself is permitted to depend on other variables, then the expression becomes more complicated, as it must include also higher-order differentials in $x$ itself. Thus, for instance,

$$d^2 y = f''(x)\,(dx)^2 + f'(x)\,d^2 x,$$

and so forth.

Similar considerations apply to defining higher-order differentials of functions of several variables. For example, if $f$ is a function of two variables $x$ and $y$, then

$$d^n f = \sum_{k=0}^{n} \binom{n}{k} \frac{\partial^n f}{\partial x^k\,\partial y^{n-k}}\,(dx)^k\,(dy)^{n-k},$$

where $\binom{n}{k}$ is a binomial coefficient. In more variables, an analogous expression holds, but with an appropriate multinomial expansion rather than binomial expansion. [9]
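The $n = 2$ case of this binomial formula can be verified symbolically; the following sketch (assuming sympy is available, with an arbitrary test function $f = x^3 \sin y$) compares it against the second derivative along the increment $(dx, dy)$:

```python
import sympy as sp

x, y, dx, dy, t = sp.symbols("x y dx dy t")
f = x**3 * sp.sin(y)  # an arbitrary test function of two variables

# n = 2 case of the binomial formula for d^n f (x and y independent):
d2f_binomial = (sp.diff(f, x, 2) * dx**2
                + 2 * sp.diff(f, x, y) * dx * dy
                + sp.diff(f, y, 2) * dy**2)

# The same quantity as a second derivative along the increment (dx, dy):
d2f_directional = sp.diff(f.subs({x: x + t * dx, y: y + t * dy}), t, 2).subs(t, 0)

print(sp.simplify(d2f_binomial - d2f_directional))  # 0
```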

Higher-order differentials in several variables also become more complicated when the independent variables are themselves allowed to depend on other variables. For instance, for a function $f$ of $x$ and $y$ which are allowed to depend on auxiliary variables, one has

$$d^2 f = \left(\frac{\partial^2 f}{\partial x^2}\,(dx)^2 + 2\,\frac{\partial^2 f}{\partial x\,\partial y}\,dx\,dy + \frac{\partial^2 f}{\partial y^2}\,(dy)^2\right) + \frac{\partial f}{\partial x}\,d^2 x + \frac{\partial f}{\partial y}\,d^2 y,$$
Because of this notational infelicity, the use of higher order differentials was roundly criticized by Hadamard 1935, who concluded:

Enfin, que signifie ou que représente l'égalité

$$d^2 z = r\,dx^2 + 2s\,dx\,dy + t\,dy^2\,?$$

A mon avis, rien du tout.

That is: Finally, what is meant, or represented, by the equality [...]? In my opinion, nothing at all. In spite of this skepticism, higher order differentials did emerge as an important tool in analysis. [10]

In these contexts, the $n$th-order differential of the function $f$ applied to an increment $\Delta x$ is defined by

$$d^n f(x, \Delta x) = \left.\frac{d^n}{dt^n} f(x + t\,\Delta x)\right|_{t=0},$$

or an equivalent expression, such as

$$\lim_{t \to 0} \frac{\Delta^n_{t\,\Delta x} f}{t^n},$$

where $\Delta^n_{t\,\Delta x} f$ is an $n$th forward difference with increment $t\,\Delta x$.
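The forward-difference characterization can be observed numerically; this is an illustrative Python sketch (the function $\sin$, the point, and the increment are arbitrary choices, not from the original):

```python
import math

def forward_diff(f, x, h, n):
    # n-th forward difference: Δ^n_h f(x) = Σ_k (-1)^(n-k) C(n, k) f(x + k h).
    return sum((-1) ** (n - k) * math.comb(n, k) * f(x + k * h)
               for k in range(n + 1))

f, x, dx, n = math.sin, 1.0, 0.5, 2
for t in (0.1, 0.01, 0.001):
    print(forward_diff(f, x, t * dx, n) / t ** n)
# The quotients approach d^2 f(x, dx) = -sin(1) * 0.5**2 ≈ -0.21037.
```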

This definition makes sense as well if $f$ is a function of several variables (for simplicity taken here as a vector argument). Then the $n$th differential defined in this way is a homogeneous function of degree $n$ in the vector increment $\Delta x$. Furthermore, the Taylor series of $f$ at the point $x$ is given by

$$f(x + \Delta x) \sim f(x) + df(x, \Delta x) + \frac{1}{2!}\,d^2 f(x, \Delta x) + \cdots + \frac{1}{n!}\,d^n f(x, \Delta x) + \cdots$$
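For $f(x) = e^x$ one has $d^n f(x, \Delta x) = e^x\,(\Delta x)^n$, so the partial sums of the series above are easy to check numerically. This is an illustrative sketch only; the point and increment are arbitrary:

```python
import math

def dnf(n, x, dx):
    # For f(x) = e^x every derivative is e^x, so d^n f(x, dx) = e^x * dx**n.
    return math.exp(x) * dx ** n

x, dx = 0.0, 0.3
series = sum(dnf(n, x, dx) / math.factorial(n) for n in range(10))
print(series)            # partial sum of the Taylor series built from differentials
print(math.exp(x + dx))  # matches f(x + dx) to high accuracy
```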

The higher order Gateaux derivative generalizes these considerations to infinite dimensional spaces.

Properties

A number of properties of the differential follow in a straightforward manner from the corresponding properties of the derivative, partial derivative, and total derivative. These include: [11]

- Linearity: For constants $a$ and $b$ and differentiable functions $f$ and $g$,
  $$d(af + bg) = a\,df + b\,dg.$$
- Product rule: For two differentiable functions $f$ and $g$,
  $$d(fg) = f\,dg + g\,df.$$

An operation $d$ with these two properties is known in abstract algebra as a derivation. They imply the power rule

$$d(f^n) = n f^{n-1}\,df.$$

In addition, various forms of the chain rule hold, in increasing level of generality: [12]

- If $y = f(u)$ is a differentiable function of the variable $u$ and $u = g(x)$ is a differentiable function of $x$, then
  $$dy = f'(u)\,du = f'(g(x))\,g'(x)\,dx.$$
- If $y = f(x_1, \ldots, x_n)$ and all of the variables $x_1, \ldots, x_n$ depend on another variable $t$, then, by the chain rule for partial derivatives, one has
  $$dy = \frac{dy}{dt}\,dt = \frac{\partial y}{\partial x_1}\,dx_1 + \cdots + \frac{\partial y}{\partial x_n}\,dx_n.$$

Heuristically, the chain rule for several variables can itself be understood by dividing through both sides of this equation by the infinitely small quantity $dt$.
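This "divide by $dt$" heuristic can be checked symbolically; the following sketch (assuming sympy, with arbitrary choices $f = u^2 v$, $u = \cos t$, $v = \sin t$) compares the chain-rule expression with direct differentiation of the composite:

```python
import sympy as sp

t, u, v = sp.symbols("t u v")
f = u**2 * v                       # f as a function of u and v
u_t, v_t = sp.cos(t), sp.sin(t)    # u and v as functions of t

# df = f_u du + f_v dv; "dividing by dt" gives the chain rule:
dy_dt_chain = (sp.diff(f, u) * sp.diff(u_t, t)
               + sp.diff(f, v) * sp.diff(v_t, t)).subs({u: u_t, v: v_t})

# Differentiating the composite directly, for comparison:
dy_dt_direct = sp.diff(f.subs({u: u_t, v: v_t}), t)

print(sp.simplify(dy_dt_chain - dy_dt_direct))  # 0
```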

General formulation

A consistent notion of differential can be developed for a function $f : \mathbf{R}^n \to \mathbf{R}^m$ between two Euclidean spaces. Let $x, \Delta x \in \mathbf{R}^n$ be a pair of Euclidean vectors. The increment in the function $f$ is

$$\Delta f = f(x + \Delta x) - f(x).$$

If there exists an $m \times n$ matrix $A$ such that

$$\Delta f = A\,\Delta x + \|\Delta x\|\,\varepsilon,$$

in which the vector $\varepsilon \to 0$ as $\Delta x \to 0$, then $f$ is by definition differentiable at the point $x$. The matrix $A$ is sometimes known as the Jacobian matrix, and the linear transformation that associates to the increment $\Delta x \in \mathbf{R}^n$ the vector $A\,\Delta x \in \mathbf{R}^m$ is, in this general setting, known as the differential $df(x)$ of $f$ at the point $x$. This is precisely the Fréchet derivative, and the same construction can be made to work for a function between any Banach spaces.
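As a concrete illustration (not from the original article; the particular map $f : \mathbf{R}^2 \to \mathbf{R}^2$ below is an arbitrary choice), the Jacobian applied to a small increment approximates the actual change in $f$:

```python
import math

def f(x):
    # A map f : R^2 -> R^2.
    return (x[0] * x[1], math.sin(x[0]) + x[1] ** 2)

def jacobian(x):
    # Jacobian matrix A of f at x, written out analytically.
    return ((x[1], x[0]),
            (math.cos(x[0]), 2 * x[1]))

x = (1.0, 2.0)
dx = (1e-4, -2e-4)

A = jacobian(x)
df = [sum(A[i][j] * dx[j] for j in range(2)) for i in range(2)]  # A Δx
y0, y1 = f(x), f((x[0] + dx[0], x[1] + dx[1]))
print(df)
print([y1[i] - y0[i] for i in range(2)])  # Δf: agrees with A Δx up to o(|Δx|)
```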

Another fruitful point of view is to define the differential directly as a kind of directional derivative:

$$df(x, h) = \lim_{t \to 0} \frac{f(x + t h) - f(x)}{t} = \left.\frac{d}{dt} f(x + t h)\right|_{t=0},$$

which is the approach already taken for defining higher order differentials (and is most nearly the definition set forth by Cauchy). If t represents time and x position, then h represents a velocity instead of a displacement as we have heretofore regarded it. This yields yet another refinement of the notion of differential: that it should be a linear function of a kinematic velocity. The set of all velocities through a given point of space is known as the tangent space, and so df gives a linear function on the tangent space: a differential form. With this interpretation, the differential of f is known as the exterior derivative, and has broad application in differential geometry because the notion of velocities and the tangent space makes sense on any differentiable manifold. If, in addition, the output value of f also represents a position (in a Euclidean space), then a dimensional analysis confirms that the output value of df must be a velocity. If one treats the differential in this manner, then it is known as the pushforward since it "pushes" velocities from a source space into velocities in a target space.

Other approaches

Although the notion of having an infinitesimal increment $dx$ is not well-defined in modern mathematical analysis, a variety of techniques exist for defining the infinitesimal differential so that the differential of a function can be handled in a manner that does not clash with the Leibniz notation. These include:

- Differentials as linear maps. This approach underlies the definition of the derivative and the exterior derivative in differential geometry.
- Differentials as nilpotent elements of commutative rings. This approach is popular in algebraic geometry. [13]
- Differentials in smooth models of set theory. This approach is known as synthetic differential geometry or smooth infinitesimal analysis and is closely related to the algebraic geometric approach, except that ideas from topos theory are used to hide the mechanisms by which nilpotent infinitesimals are introduced. [14]
- Differentials as infinitesimals in hyperreal number systems, which are extensions of the real numbers that contain invertible infinitesimals and infinitely large numbers. This is the approach of nonstandard analysis pioneered by Abraham Robinson. [15]

Examples and applications

Differentials may be effectively used in numerical analysis to study the propagation of experimental errors in a calculation, and thus the overall numerical stability of a problem (Courant 1937a). Suppose that the variable $x$ represents the outcome of an experiment and $y$ is the result of a numerical computation applied to $x$. The question is to what extent errors in the measurement of $x$ influence the outcome of the computation of $y$. If $x$ is known to within $\Delta x$ of its true value, then Taylor's theorem gives the following estimate on the error $\Delta y$ in the computation of $y$:

$$\Delta y = f'(x)\,\Delta x + \frac{(\Delta x)^2}{2} f''(\xi),$$

where $\xi = x + \theta\,\Delta x$ for some $0 < \theta < 1$. If $\Delta x$ is small, then the second-order term is negligible, so that $\Delta y$ is, for practical purposes, well-approximated by $dy = f'(x)\,\Delta x$.
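A short numerical illustration in Python (the cubing function and the values below are arbitrary choices, not from the original):

```python
# Propagating a measurement error through y = f(x) = x**3, so f'(x) = 3 x**2.
x, dx = 2.0, 0.01      # measured value and its uncertainty

dy = 3 * x**2 * dx     # first-order estimate dy = f'(x) Δx
exact = (x + dx)**3 - x**3
print(dy, exact)       # 0.12 vs 0.120601: the second-order term is negligible
```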

The differential is often useful to rewrite a differential equation

$$\frac{dy}{dx} = g(x)$$

in the form

$$dy = g(x)\,dx,$$

in particular when one wants to separate the variables.
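For example, the separable equation $dy/dx = x\,y$ can be rewritten as $dy/y = x\,dx$ and integrated on both sides to give $y = C e^{x^2/2}$. A sketch of this computation, assuming sympy is available:

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# dy/dx = x*y rewritten as dy/y = x dx and integrated on both sides.
ode = sp.Eq(y(x).diff(x), x * y(x))
print(sp.dsolve(ode, y(x)))  # Eq(y(x), C1*exp(x**2/2))
```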

Notes

  1. For a detailed historical account of the differential, see Boyer 1959, especially page 275 for Cauchy's contribution on the subject. An abbreviated account appears in Kline 1972, Chapter 40.
  2. Cauchy explicitly denied the possibility of actual infinitesimal and infinite quantities (Boyer 1959, pp. 273–275), and took the radically different point of view that "a variable quantity becomes infinitely small when its numerical value decreases indefinitely in such a way as to converge to zero" (Cauchy 1823, p. 12; translation from Boyer 1959, p. 273).
  3. Boyer 1959, p. 275.
  4. Boyer 1959, p. 12: "The differentials as thus defined are only new variables, and not fixed infinitesimals..."
  5. Courant 1937a, II, §9: "Here we remark merely in passing that it is possible to use this approximate representation of the increment by the linear expression to construct a logically satisfactory definition of a 'differential', as was done by Cauchy in particular."
  6. Boyer 1959, p. 284.
  7. See, for instance, the influential treatises of Courant 1937a, Kline 1977, Goursat 1904, and Hardy 1908. Tertiary sources for this definition include also Tolstov 2001 and Itô 1993, §106.
  8. Cauchy 1823. See also, for instance, Goursat 1904, I, §14.
  9. Goursat 1904, I, §14.
  10. In particular to infinite-dimensional holomorphy (Hille & Phillips 1974) and numerical analysis via the calculus of finite differences.
  11. Goursat 1904, I, §17.
  12. Goursat 1904, I, §§14, 16.
  13. Eisenbud & Harris 1998.
  14. See Kock 2006 and Moerdijk & Reyes 1991.
  15. See Robinson 1996 and Keisler 1986.


References