Difference quotient

In single-variable calculus, the difference quotient is usually the name for the expression

\[ \frac{f(x+h) - f(x)}{h}, \]

which, when taken to the limit as h approaches 0, gives the derivative of the function f. [1] [2] [3] [4] The name of the expression stems from the fact that it is the quotient of the difference of values of the function by the difference of the corresponding values of its argument (the latter is (x + h) − x = h in this case). [5] [6] The difference quotient is a measure of the average rate of change of the function over an interval (in this case, an interval of length h). [7] [8]:237 [9] The limit of the difference quotient (i.e., the derivative) is thus the instantaneous rate of change. [9]
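For instance, the following minimal Python sketch (the function f(x) = x² and the point x = 3 are arbitrary example choices, not from the source) evaluates the difference quotient for shrinking h and shows it approaching the derivative f′(3) = 6:

```python
# Difference quotient (f(x + h) - f(x)) / h for f(x) = x**2 at x = 3.
# As h shrinks, the quotient approaches the derivative f'(3) = 6.

def f(x):
    return x ** 2

def difference_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

x = 3.0
for h in [1.0, 0.1, 0.01, 0.001]:
    print(f"h = {h:>6}: {difference_quotient(f, x, h):.6f}")
```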

By a slight change in notation (and viewpoint), for an interval [a, b], the difference quotient

\[ \frac{f(b) - f(a)}{b - a} \]

is called [5] the mean (or average) value of the derivative of f over the interval [a, b]. This name is justified by the mean value theorem, which states that for a differentiable function f, its derivative f′ attains its mean value at some point in the interval. [5] Geometrically, this difference quotient measures the slope of the secant line passing through the points with coordinates (a, f(a)) and (b, f(b)). [10]
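A small Python sketch (again with the arbitrary example f(x) = x², here on [1, 4]) shows the secant slope equaling the mean value of the derivative, attained at an interior point as the mean value theorem guarantees:

```python
# Secant slope of f(x) = x**2 over [a, b] equals the mean value of f',
# attained at the interior point c where f'(c) = 2c matches that slope.

def f(x):
    return x ** 2

a, b = 1.0, 4.0
secant_slope = (f(b) - f(a)) / (b - a)  # (16 - 1) / 3 = 5.0
c = secant_slope / 2                    # solve f'(c) = 2c = 5.0  ->  c = 2.5
print(secant_slope, c, a < c < b)       # 5.0 2.5 True
```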

Difference quotients are used as approximations in numerical differentiation, [8] but they have also been the subject of criticism in this application. [11]
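Much of that criticism concerns floating-point behavior, which a brief Python sketch can illustrate (sin is an arbitrary test function whose exact derivative, cos, is known; the step sizes are arbitrary choices): as h shrinks, the subtraction f(x + h) − f(x) loses significant digits, so the approximation error first decreases and then grows again.

```python
# In floating point, shrinking h eventually makes f(x + h) - f(x) lose
# significant digits (catastrophic cancellation), so the forward-difference
# error stops improving and starts to grow.
import math

x = 1.0
for h in [1e-1, 1e-4, 1e-8, 1e-12, 1e-15]:
    approx = (math.sin(x + h) - math.sin(x)) / h
    error = abs(approx - math.cos(x))
    print(f"h = {h:.0e}: error = {error:.2e}")
```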

The difference quotient is sometimes also called the Newton quotient [10] [12] [13] [14] (after Isaac Newton) or Fermat's difference quotient (after Pierre de Fermat). [15]

Overview

The typical notion of the difference quotient discussed above is a particular case of a more general concept. The primary vehicle of calculus and other higher mathematics is the function. Its "input value" is its argument, usually a point ("P") expressible on a graph. The difference between two points, themselves, is known as their Delta (ΔP), as is the difference in their function results, the particular notation being determined by the direction of formation:

Forward difference:
\[ \Delta F(P) = F(P + \Delta P) - F(P); \]
Central difference:
\[ \delta F(P) = F\!\left(P + \tfrac{1}{2}\Delta P\right) - F\!\left(P - \tfrac{1}{2}\Delta P\right); \]
Backward difference:
\[ \nabla F(P) = F(P) - F(P - \Delta P). \]

The general preference is the forward orientation, as F(P) is the base to which differences (i.e., "ΔP"s) are added. Furthermore,

If |ΔP| is finite (meaning measurable), then ΔF(P) is known as a finite difference, with the specific denotations DP and DF(P);
If |ΔP| is infinitesimal (an infinitely small amount, usually expressed in standard analysis as a limit, lim ΔP→0), then ΔF(P) is known as an infinitesimal difference, with the specific denotations dP and dF(P) (in calculus graphing, the point is almost exclusively identified as "x" and F(x) as "y").

The function difference divided by the point difference is known as the "difference quotient":

\[ \frac{\Delta F(P)}{\Delta P} = \frac{F(P + \Delta P) - F(P)}{\Delta P}. \]

If ΔP is infinitesimal, then the difference quotient is a derivative; otherwise, it is a divided difference:

\[ \text{If } |\Delta P| \text{ is infinitesimal: } \frac{\Delta F(P)}{\Delta P} = \frac{dF(P)}{dP} = F'(P); \]
\[ \text{If } |\Delta P| \text{ is finite: } \frac{\Delta F(P)}{\Delta P} = \frac{DF(P)}{DP} = F[P,\, P + \Delta P]. \]
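In code, the three orientations are direct transcriptions of the definitions above; this minimal Python sketch (exp and the values of P and ΔP are arbitrary choices) evaluates each as a finite divided difference:

```python
# The three difference-quotient orientations, transcribed from the definitions.
import math

def forward(F, P, dP):
    return (F(P + dP) - F(P)) / dP                 # ΔF(P) / ΔP

def central(F, P, dP):
    return (F(P + dP / 2) - F(P - dP / 2)) / dP    # δF(P) / ΔP

def backward(F, P, dP):
    return (F(P) - F(P - dP)) / dP                 # ∇F(P) / ΔP

P, dP = 1.0, 0.1
for name, quotient in (("forward", forward), ("central", central), ("backward", backward)):
    print(f"{name:>8}: {quotient(math.exp, P, dP):.6f}")  # exact value: e ≈ 2.718282
```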

Defining the point range

Regardless of whether ΔP is infinitesimal or finite, there is (at least, in the case of the derivative, theoretically) a point range, whose boundaries are P ± (0.5)ΔP (depending on the orientation: ΔF(P), δF(P) or ∇F(P)):

LB = Lower Boundary;   UB = Upper Boundary;

\[ \Delta F(P):\ LB = P,\ UB = P + \Delta P; \qquad \delta F(P):\ LB = P - \tfrac{1}{2}\Delta P,\ UB = P + \tfrac{1}{2}\Delta P; \qquad \nabla F(P):\ LB = P - \Delta P,\ UB = P. \]

Derivatives can be regarded as functions themselves, harboring their own derivatives. Thus each function is home to sequential degrees ("higher orders") of derivation, or differentiation. This property can be generalized to all difference quotients.
As this sequencing requires a corresponding boundary splintering, it is practical to break up the point range into smaller, equal-sized sections, each section being marked by an intermediary point (Pi), where LB = P0 and UB = Pn, the nth point, equaling the degree/order:

\[
\begin{aligned}
LB = P_0 &= P_0 + 0\,\Delta_1 P &&= P_n - (n - 0)\,\Delta_1 P;\\
P_1 &= P_0 + 1\,\Delta_1 P &&= P_n - (n - 1)\,\Delta_1 P;\\
P_2 &= P_0 + 2\,\Delta_1 P &&= P_n - (n - 2)\,\Delta_1 P;\\
P_3 &= P_0 + 3\,\Delta_1 P &&= P_n - (n - 3)\,\Delta_1 P;\\
&\quad\vdots\\
P_{n-3} &= P_0 + (n - 3)\,\Delta_1 P &&= P_n - 3\,\Delta_1 P;\\
P_{n-2} &= P_0 + (n - 2)\,\Delta_1 P &&= P_n - 2\,\Delta_1 P;\\
P_{n-1} &= P_0 + (n - 1)\,\Delta_1 P &&= P_n - 1\,\Delta_1 P;\\
UB = P_n &= P_0 + (n - 0)\,\Delta_1 P &&= P_n - 0\,\Delta_1 P = P_n;
\end{aligned}
\]
\[ \Delta P = \Delta_1 P = P_1 - P_0 = P_2 - P_1 = P_3 - P_2 = \cdots = P_n - P_{n-1}; \]
\[ \Delta B = UB - LB = P_n - P_0 = \Delta_n P = n\,\Delta_1 P. \]
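As a minimal illustrative sketch (the interval [0, 1] and n = 4 are arbitrary choices, not from the source), the following Python generates the intermediary points Pi of such an equal-sized partition and checks the relation ΔB = n·Δ1P:

```python
# Equal-sized partition of the point range: P_i = P_0 + i * d1P, with UB = P_n.

def point_range(LB, UB, n):
    d1P = (UB - LB) / n                        # the primary difference Δ1P
    return [LB + i * d1P for i in range(n + 1)]

points = point_range(0.0, 1.0, 4)
print(points)                                  # [0.0, 0.25, 0.5, 0.75, 1.0]
print(points[-1] - points[0])                  # ΔB = UB − LB = n·Δ1P = 1.0
```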

The primary difference quotient (n = 1)

As a derivative

The difference quotient as a derivative needs no explanation, other than to point out that, since P0 essentially equals P1 = P2 = ... = Pn (as the differences are infinitesimal), the Leibniz notation and derivative expressions do not distinguish between P and P0 or Pn:

\[ \frac{dF(P)}{dP} = F'(P) = \lim_{\Delta P \to 0} \frac{F(P + \Delta P) - F(P)}{\Delta P}. \]

There are other derivative notations, but these are the most recognized, standard designations.

As a divided difference

A divided difference, however, does require further elucidation, as it equals the average derivative between and including LB and UB:

\[ F'(P_{\tilde{a}}) = \frac{1}{\Delta B}\int_{LB}^{UB} F'(P)\,dP = \frac{F(UB) - F(LB)}{UB - LB} = \frac{\Delta F(P_0)}{\Delta_1 P}. \]

In this interpretation, Pã represents a function-extracted average value of P (in the midrange, but usually not exactly the midpoint), the particular valuation depending on the function from which it is extracted. More formally, Pã is found in the mean value theorem of calculus, which says:

For any function that is continuous on [LB, UB] and differentiable on (LB, UB), there exists some Pã in the interval (LB, UB) such that the secant joining the endpoints of the interval [LB, UB] is parallel to the tangent at Pã.

Essentially, Pã denotes some value of P between LB and UB, hence

\[ LB < P_{\tilde{a}} < UB, \]

which links the mean value result with the divided difference:

\[ F'(P_{\tilde{a}}) = \frac{F(P_1) - F(P_0)}{P_1 - P_0} = \frac{\Delta F(P_0)}{\Delta_1 P}. \]

As there is, by its very definition, a tangible difference between LB/P0 and UB/Pn, the Leibniz and derivative expressions do require divarication of the function argument.
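To make the mean value point concrete, here is a small Python sketch (the cubic F and the interval [0, 2] are arbitrary example choices) that computes the divided difference and then locates Pã by bisection on F′:

```python
# Locating the mean-value point Pã for F(P) = P**3 on [LB, UB] = [0, 2]:
# the divided difference is (8 - 0) / 2 = 4, and F'(P) = 3 P**2 equals 4
# at Pã = sqrt(4/3) ≈ 1.1547 -- inside the interval, but not its midpoint.

def F(P):
    return P ** 3

def dF(P):
    return 3 * P ** 2

LB, UB = 0.0, 2.0
divided_difference = (F(UB) - F(LB)) / (UB - LB)

lo, hi = LB, UB                    # bisection on the increasing function dF
for _ in range(60):
    mid = (lo + hi) / 2
    if dF(mid) < divided_difference:
        lo = mid
    else:
        hi = mid

print(divided_difference, (lo + hi) / 2)   # 4.0  1.1547...
```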

Higher-order difference quotients

Second order

\[ \frac{\Delta^2 F(P_0)}{(\Delta_1 P)^2} = \frac{\Delta F(P_1) - \Delta F(P_0)}{(\Delta_1 P)^2} = \frac{F(P_2) - 2F(P_1) + F(P_0)}{(\Delta_1 P)^2}. \]

Third order

\[ \frac{\Delta^3 F(P_0)}{(\Delta_1 P)^3} = \frac{F(P_3) - 3F(P_2) + 3F(P_1) - F(P_0)}{(\Delta_1 P)^3}. \]

Nth order

\[ \frac{\Delta^n F(P_0)}{(\Delta_1 P)^n} = \frac{1}{(\Delta_1 P)^n} \sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k} F(P_k). \]
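The nth-order formula translates directly into code. A minimal Python sketch (the test function x³ and the step Δ1P = 0.5 are arbitrary choices) computes the quotient from the binomial sum; for a cubic, the third-order quotient equals the constant third derivative, 6, for any step size:

```python
# Nth-order forward difference quotient via the binomial-coefficient sum
# Δ^n F(P0) / (Δ1P)^n, checked against f(x) = x**3, whose third derivative is 6.
from math import comb

def nth_difference_quotient(F, P0, d1P, n):
    total = sum((-1) ** (n - k) * comb(n, k) * F(P0 + k * d1P) for k in range(n + 1))
    return total / d1P ** n

f = lambda x: x ** 3
print(nth_difference_quotient(f, 0.0, 0.5, 3))   # exactly 6.0 for a cubic
```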

Applying the divided difference

The quintessential application of the divided difference is in the presentation of the definite integral, which is nothing more than a finite difference:

\[ \int_{LB}^{UB} F'(P)\,dP = F(UB) - F(LB) = \Delta F(LB) = \Delta B \cdot F'(P_{\tilde{a}}). \]

Given that the mean value form of the derivative expression provides all of the same information as the classical integral notation, the mean value form may be the preferable expression, such as in writing venues that only support/accept standard ASCII text, or in cases that only require the average derivative (such as when finding the average radius in an elliptic integral). This is especially true for definite integrals that technically have (e.g.) 0 and either π or 2π as boundaries, with the same divided difference found as that with boundaries of 0 and π/2 (thus requiring less averaging effort).
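A short Python sketch (using F = sin on [0, π/2], an arbitrary example) illustrates the point: a Riemann sum over F′ reproduces the finite difference F(UB) − F(LB), and dividing by ΔB gives the average derivative without any further integration machinery:

```python
# The definite integral of F'(P) over [LB, UB] reduces to the finite
# difference F(UB) - F(LB); the average derivative is that difference
# divided by UB - LB. Compared here against a crude midpoint Riemann sum.
import math

F, dF = math.sin, math.cos
LB, UB = 0.0, math.pi / 2

n = 100_000
h = (UB - LB) / n
riemann = sum(dF(LB + (i + 0.5) * h) for i in range(n)) * h

finite_difference = F(UB) - F(LB)            # = 1.0
print(riemann, finite_difference)            # both ≈ 1.0
print(finite_difference / (UB - LB))         # average derivative ≈ 0.6366
```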

This also becomes particularly useful when dealing with iterated and multiple integrals (ΔA = AU − AL, ΔB = BU − BL, ΔC = CU − CL):

\[ \int_{A_L}^{A_U}\!\int_{B_L}^{B_U}\!\int_{C_L}^{C_U} F'(A, B, C)\,dC\,dB\,dA = \Delta A\,\Delta B\,\Delta C \cdot F'(A_{\tilde{a}}, B_{\tilde{a}}, C_{\tilde{a}}). \]

Hence,

\[ F'(A_{\tilde{a}}, B_{\tilde{a}}, C_{\tilde{a}}) = \frac{1}{\Delta A\,\Delta B\,\Delta C}\int_{A_L}^{A_U}\!\int_{B_L}^{B_U}\!\int_{C_L}^{C_U} F'(A, B, C)\,dC\,dB\,dA, \]

and the average derivative over the region is again read off as a quotient of differences.
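As a hedged sketch of the multiple-integral case (the integrand G and the box bounds are arbitrary example choices), a midpoint-rule estimate of the triple integral divided by ΔA·ΔB·ΔC recovers the average value of the integrand:

```python
# Average value of an integrand G over a box: the triple integral divided by
# ΔA·ΔB·ΔC, estimated with a midpoint rule in each variable.

def G(a, b, c):
    return a * b + c     # an arbitrary example integrand

AL, AU, BL, BU, CL, CU = 0.0, 1.0, 0.0, 2.0, 0.0, 3.0
n = 50
dA, dB, dC = (AU - AL) / n, (BU - BL) / n, (CU - CL) / n

total = sum(
    G(AL + (i + 0.5) * dA, BL + (j + 0.5) * dB, CL + (k + 0.5) * dC)
    for i in range(n) for j in range(n) for k in range(n)
) * dA * dB * dC

volume = (AU - AL) * (BU - BL) * (CU - CL)
print(total, total / volume)   # integral ≈ 12.0, average ≈ 2.0
```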


References

  1. Peter D. Lax; Maria Shea Terrell (2013). Calculus With Applications. Springer. p. 119. ISBN 978-1-4614-7946-8.
  2. Shirley O. Hockett; David Bock (2005). Barron's How to Prepare for the AP Calculus. Barron's Educational Series. p. 44. ISBN 978-0-7641-2382-5.
  3. Mark Ryan (2010). Calculus Essentials For Dummies. John Wiley & Sons. pp. 41–47. ISBN 978-0-470-64269-6.
  4. Karla Neal; R. Gustafson; Jeff Hughes (2012). Precalculus. Cengage Learning. p. 133. ISBN 978-0-495-82662-0.
  5. Michael Comenetz (2002). Calculus: The Elements. World Scientific. pp. 71–76 and 151–161. ISBN 978-981-02-4904-5.
  6. Moritz Pasch (2010). Essays on the Foundations of Mathematics by Moritz Pasch. Springer. p. 157. ISBN 978-90-481-9416-2.
  7. Frank C. Wilson; Scott Adamson (2008). Applied Calculus. Cengage Learning. p. 177. ISBN 978-0-618-61104-1.
  8. Tamara Lefcourt Ruby; James Sellers; Lisa Korf; Jeremy Van Horn; Mike Munn (2014). Kaplan AP Calculus AB & BC 2015. Kaplan Publishing. p. 299. ISBN 978-1-61865-686-5.
  9. Thomas Hungerford; Douglas Shaw (2008). Contemporary Precalculus: A Graphing Approach. Cengage Learning. pp. 211–212. ISBN 978-0-495-10833-7.
  10. Steven G. Krantz (2014). Foundations of Analysis. CRC Press. p. 127. ISBN 978-1-4822-2075-9.
  11. Andreas Griewank; Andrea Walther (2008). Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, Second Edition. SIAM. pp. 2–. ISBN 978-0-89871-659-7.
  12. Serge Lang (1968). Analysis I. Addison-Wesley Publishing Company. p. 56.
  13. Brian D. Hahn (1994). Fortran 90 for Scientists and Engineers. Elsevier. p. 276. ISBN 978-0-340-60034-4.
  14. Christopher Clapham; James Nicholson (2009). The Concise Oxford Dictionary of Mathematics. Oxford University Press. p. 313. ISBN 978-0-19-157976-9.
  15. Donald C. Benson (2003). A Smoother Pebble: Mathematical Explorations. Oxford University Press. p. 176.