Product rule

From Wikipedia, the free encyclopedia

Geometric illustration of a proof of the product rule

In calculus, the product rule (or Leibniz rule[1] or Leibniz product rule) is a formula used to find the derivatives of products of two or more functions. For two functions, it may be stated in Lagrange's notation as

(u·v)′ = u′·v + u·v′

or in Leibniz's notation as

d(u·v)/dx = (du/dx)·v + u·(dv/dx).
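The statement can be checked numerically. The sketch below, using an arbitrary pair of functions chosen for illustration, compares a central-difference approximation of (f·g)′ against f′·g + f·g′:

```python
import math

def derivative(fn, x, h=1e-6):
    """Central-difference approximation of fn'(x)."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

f = lambda x: x**2
g = math.sin
fg = lambda x: f(x) * g(x)

x = 1.3
lhs = derivative(fg, x)                                   # (f·g)'(x), directly
rhs = derivative(f, x) * g(x) + f(x) * derivative(g, x)   # product rule
assert abs(lhs - rhs) < 1e-5
```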


The rule may be extended or generalized to products of three or more functions, to a rule for higher-order derivatives of a product, and to other contexts.

Discovery

Discovery of this rule is credited to Gottfried Leibniz, who demonstrated it using "infinitesimals" (a precursor to the modern differential).[2] (However, J. M. Child, a translator of Leibniz's papers,[3] argues that it is due to Isaac Barrow.) Here is Leibniz's argument:[4] Let u and v be functions. Then d(uv) is the same thing as the difference between two successive uv's; let one of these be uv, and the other (u + du)(v + dv); then:

d(uv) = (u + du)(v + dv) − uv = u·dv + v·du + du·dv.

Since the term du·dv is "negligible" (compared to du and dv), Leibniz concluded that

d(uv) = u·dv + v·du,

and this is indeed the differential form of the product rule. If we divide through by the differential dx, we obtain

d(uv)/dx = u·(dv/dx) + v·(du/dx),

which can also be written in Lagrange's notation as

(u·v)′ = u·v′ + v·u′.
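Leibniz's "negligibility" claim can be illustrated numerically with a sample pair u = x², v = x³ (chosen here for illustration): the exact increment d(uv) splits into u·dv + v·du + du·dv, and the leftover du·dv is second order in dx, so it vanishes even after dividing by dx:

```python
x, dx = 2.0, 1e-6
u, v = x**2, x**3
du = (x + dx)**2 - u
dv = (x + dx)**3 - v
d_uv = (x + dx)**5 - u * v          # d(uv), since uv = x**5

# exact decomposition: d(uv) = u·dv + v·du + du·dv
assert abs(d_uv - (u * dv + v * du + du * dv)) < 1e-9
# du·dv is second order in dx: still tiny after dividing by dx
assert abs(du * dv / dx) < 1e-4
```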

Examples

For example, if h(x) = x²·sin(x), then the product rule gives h′(x) = 2x·sin(x) + x²·cos(x). As a special case, taking g = f gives (f²)′ = 2f·f′.

Proofs

Limit definition of derivative

Let h(x) = f(x)g(x) and suppose that f and g are each differentiable at x. We want to prove that h is differentiable at x and that its derivative, h′(x), is given by f′(x)g(x) + f(x)g′(x). To do this, the term f(x + Δx)g(x) − f(x + Δx)g(x) (which is zero, and thus does not change the value) is added to the numerator to permit its factoring, and then properties of limits are used:

h′(x) = lim_{Δx→0} [f(x + Δx)g(x + Δx) − f(x)g(x)] / Δx
      = lim_{Δx→0} [f(x + Δx)·(g(x + Δx) − g(x)) + (f(x + Δx) − f(x))·g(x)] / Δx
      = f(x)g′(x) + f′(x)g(x).

The fact that lim_{Δx→0} f(x + Δx) = f(x) follows from the fact that differentiable functions are continuous.
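The factoring step can be traced numerically: for a sample pair f = exp, g = cos (chosen for illustration), the two factored difference quotients converge to f(x)g′(x) and f′(x)g(x), and their sum reproduces the full difference quotient exactly:

```python
import math

f, g = math.exp, math.cos
x, dx = 0.7, 1e-6
term1 = f(x + dx) * (g(x + dx) - g(x)) / dx   # → f(x)·g'(x)
term2 = (f(x + dx) - f(x)) / dx * g(x)        # → f'(x)·g(x)
total = (f(x + dx) * g(x + dx) - f(x) * g(x)) / dx

assert abs(total - (term1 + term2)) < 1e-8        # the split is exact
assert abs(term1 - f(x) * (-math.sin(x))) < 1e-4  # g' = -sin
assert abs(term2 - f(x) * g(x)) < 1e-4            # f' = exp = f
```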

Linear approximations

By definition, if f and g are differentiable at x, then we can write linear approximations:

f(x + h) = f(x) + f′(x)h + ε₁(h) and g(x + h) = g(x) + g′(x)h + ε₂(h),

where the error terms are small with respect to h: that is, lim_{h→0} ε₁(h)/h = lim_{h→0} ε₂(h)/h = 0, also written ε₁, ε₂ = o(h). Then:

f(x + h)g(x + h) − f(x)g(x) = (f′(x)g(x) + f(x)g′(x))·h + "error terms".

The "error terms" consist of items such as f(x)ε₂(h), f′(x)g′(x)h² and ε₁(h)ε₂(h), which are easily seen to have magnitude o(h). Dividing by h and taking the limit h → 0 gives the result.
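The o(h) claim is observable numerically: for a sample pair f = sin, g = exp (chosen for illustration), the residual after subtracting the product-rule term shrinks like h², so residual/h → 0:

```python
import math

f, g = math.sin, math.exp
x = 0.5
fp, gp = math.cos(x), math.exp(x)   # exact derivatives at x
for h in (1e-2, 1e-3, 1e-4):
    residual = f(x + h) * g(x + h) - f(x) * g(x) - (fp * g(x) + f(x) * gp) * h
    # residual is second order in h, so residual/h shrinks like h
    assert abs(residual / h) < 10 * h
```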

Quarter squares

This proof uses the chain rule and the quarter square function q(x) = x²/4, with derivative q′(x) = x/2. We have:

uv = q(u + v) − q(u − v),

and differentiating both sides gives:

(uv)′ = q′(u + v)·(u′ + v′) − q′(u − v)·(u′ − v′)
      = ½(u + v)(u′ + v′) − ½(u − v)(u′ − v′)
      = u′v + uv′.
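Both the quarter-square identity and the derivative it yields can be verified at a sample point, here with illustrative polynomials f(x) = x³ and g(x) = 2x + 1:

```python
q = lambda x: x * x / 4         # quarter square function
f = lambda x: x**3
g = lambda x: 2 * x + 1
x = 1.5

# identity: f·g = q(f+g) − q(f−g)
assert abs(f(x) * g(x) - (q(f(x) + g(x)) - q(f(x) - g(x)))) < 1e-12

# chain rule on q(f+g) − q(f−g) reproduces f'g + fg'
fp, gp = 3 * x**2, 2.0          # exact derivatives at x
lhs = (f(x) + g(x)) / 2 * (fp + gp) - (f(x) - g(x)) / 2 * (fp - gp)
assert abs(lhs - (fp * g(x) + f(x) * gp)) < 1e-12
```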

Multivariable chain rule

The product rule can be considered a special case of the chain rule for several variables, applied to the multiplication function m(u, v) = uv:

d(uv)/dx = (∂m/∂u)·(du/dx) + (∂m/∂v)·(dv/dx) = v·(du/dx) + u·(dv/dx).

Non-standard analysis

Let u and v be continuous functions in x, and let dx, du and dv be infinitesimals within the framework of non-standard analysis, specifically the hyperreal numbers. Using st to denote the standard part function that associates to a finite hyperreal number the real number infinitely close to it, this gives

(uv)′ = st( ((u + du)(v + dv) − uv) / dx )
      = st( (u·dv + v·du + du·dv) / dx )
      = u·(dv/dx) + v·(du/dx).

This was essentially Leibniz's proof exploiting the transcendental law of homogeneity (in place of the standard part above).

Smooth infinitesimal analysis

In the context of Lawvere's approach to infinitesimals, let ε be a nilsquare infinitesimal (ε² = 0). Then du = u′(x)·ε and dv = v′(x)·ε, so that

d(uv) = (u + du)(v + dv) − uv = u·dv + v·du + du·dv = u·dv + v·du,

since du·dv = u′(x)v′(x)·ε² = 0. Dividing by ε then gives (uv)′ = u·v′ + v·u′.

Logarithmic differentiation

Let h(x) = f(x)g(x). Taking the absolute value of each function and the natural log of both sides of the equation,

ln|h(x)| = ln|f(x)g(x)|.

Applying properties of the absolute value and logarithms,

ln|h(x)| = ln|f(x)| + ln|g(x)|.

Taking the logarithmic derivative of both sides:

h′(x)/h(x) = f′(x)/f(x) + g′(x)/g(x).

Solving for h′(x) and substituting back f(x)g(x) for h(x) gives:

h′(x) = f(x)g(x)·( f′(x)/f(x) + g′(x)/g(x) ) = f′(x)g(x) + f(x)g′(x).

Note: Taking the absolute value of the functions is necessary for the logarithmic differentiation of functions that may have negative values, as logarithms are only real-valued for positive arguments. This works because (d/dx) ln|u| = u′/u, which justifies taking the absolute value of the functions for logarithmic differentiation.
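The identity h′/h = f′/f + g′/g can be checked at a sample point where one factor is negative (which is why the absolute values are needed); the functions below are illustrative choices:

```python
import math

f = lambda x: x - 2          # negative at x = 1, exercising the |·| caveat
g = lambda x: math.exp(x)
x = 1.0
fp, gp = 1.0, math.exp(x)    # exact derivatives at x
h = f(x) * g(x)
hp = fp * g(x) + f(x) * gp   # product rule
assert abs(hp / h - (fp / f(x) + gp / g(x))) < 1e-12
```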

Generalizations

Product of more than two factors

The product rule can be generalized to products of more than two factors. For example, for three factors we have

(uvw)′ = u′vw + uv′w + uvw′.

For a collection of functions f₁, ..., f_k, we have

(d/dx) ∏_{i=1}^{k} f_i(x) = Σ_{i=1}^{k} ( f_i′(x) · ∏_{j≠i} f_j(x) ).
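The three-factor rule can be checked numerically against a central-difference derivative, here with the illustrative triple u = sin, v = cos, w = exp:

```python
import math

u, v, w = math.sin, math.cos, math.exp
up, wp = math.cos, math.exp               # u' = cos, w' = exp
vp = lambda x: -math.sin(x)               # v' = -sin

x, dx = 0.9, 1e-6
prod = lambda t: u(t) * v(t) * w(t)
numeric = (prod(x + dx) - prod(x - dx)) / (2 * dx)
exact = up(x)*v(x)*w(x) + u(x)*vp(x)*w(x) + u(x)*v(x)*wp(x)
assert abs(numeric - exact) < 1e-6
```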

The logarithmic derivative provides a simpler expression of the last form, as well as a direct proof that does not involve any recursion. The logarithmic derivative of a function f, denoted here Logder(f), is the derivative of the logarithm of the function. It follows that

Logder(f) = f′/f.

Using that the logarithm of a product is the sum of the logarithms of the factors, the sum rule for derivatives gives immediately

Logder(f₁ ⋯ f_k) = Σ_{i=1}^{k} Logder(f_i) = Σ_{i=1}^{k} f_i′/f_i.

The last expression above of the derivative of a product is obtained by multiplying both members of this equation by the product of the f_i.

Higher derivatives

It can also be generalized to the general Leibniz rule for the nth derivative of a product of two factors, by symbolically expanding according to the binomial theorem:

dⁿ(uv) = Σ_{k=0}^{n} (n choose k) · d^{n−k}(u) · d^{k}(v).

Applied at a specific point x, the above formula gives:

(uv)^{(n)}(x) = Σ_{k=0}^{n} (n choose k) · u^{(n−k)}(x) · v^{(k)}(x).
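The general Leibniz rule can be verified for the illustrative pair f(x) = x², g(x) = eˣ: only k = 0, 1, 2 contribute (higher derivatives of x² vanish), which gives the closed form (fg)⁽ⁿ⁾ = eˣ·(x² + 2nx + n(n−1)):

```python
import math

def f_deriv(k, x):
    """k-th derivative of f(x) = x**2; zero for k > 2."""
    return (x * x, 2 * x, 2.0)[k] if k <= 2 else 0.0

x, n = 0.4, 5
leibniz = sum(math.comb(n, k) * f_deriv(k, x) * math.exp(x)  # g⁽ⁿ⁻ᵏ⁾ = eˣ
              for k in range(n + 1))
closed = math.exp(x) * (x * x + 2 * n * x + n * (n - 1))
assert abs(leibniz - closed) < 1e-12
```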

Furthermore, for the nth derivative of an arbitrary number of factors, one has a similar formula with multinomial coefficients:

( ∏_{i=1}^{m} f_i )^{(n)} = Σ_{k₁+⋯+k_m = n} ( n! / (k₁!⋯k_m!) ) · ∏_{i=1}^{m} f_i^{(k_i)}.

Higher partial derivatives

For partial derivatives, we have[5]

∂ⁿ(uv)/∂x₁⋯∂x_n = Σ_S ( ∂^{|S|}u / ∏_{i∈S} ∂x_i ) · ( ∂^{n−|S|}v / ∏_{i∉S} ∂x_i ),

where the index S runs through all 2ⁿ subsets of {1, ..., n}, and |S| is the cardinality of S. For example, when n = 3,

∂³(uv)/∂x₁∂x₂∂x₃ = u·(∂³v/∂x₁∂x₂∂x₃)
  + (∂u/∂x₁)·(∂²v/∂x₂∂x₃) + (∂u/∂x₂)·(∂²v/∂x₁∂x₃) + (∂u/∂x₃)·(∂²v/∂x₁∂x₂)
  + (∂²u/∂x₁∂x₂)·(∂v/∂x₃) + (∂²u/∂x₁∂x₃)·(∂v/∂x₂) + (∂²u/∂x₂∂x₃)·(∂v/∂x₁)
  + (∂³u/∂x₁∂x₂∂x₃)·v.

Banach space

Suppose X, Y, and Z are Banach spaces (which includes Euclidean space) and B : X × Y → Z is a continuous bilinear operator. Then B is differentiable, and its derivative at the point (x, y) in X × Y is the linear map D₍ₓ,ᵧ₎B : X × Y → Z given by

(D₍ₓ,ᵧ₎B)(u, v) = B(u, y) + B(x, v) for all (u, v) in X × Y.

This result can be extended [6] to more general topological vector spaces.

In vector calculus

The product rule extends to various product operations of vector functions on ℝⁿ:[7]

- For multiplication by a scalar function f: (f·g)′ = f′·g + f·g′
- For dot products: (f ⋅ g)′ = f′ ⋅ g + f ⋅ g′
- For cross products of vector functions on ℝ³: (f × g)′ = f′ × g + f × g′

There are also analogues for other notions of derivative: if f and g are scalar fields, then there is a product rule with the gradient:

∇(f·g) = (∇f)·g + f·(∇g).
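The gradient product rule can be checked on ℝ² with central differences, using the illustrative scalar fields f(x, y) = x² + y and g(x, y) = xy:

```python
f = lambda x, y: x * x + y
g = lambda x, y: x * y

def grad(fn, x, y, h=1e-6):
    """Central-difference gradient of fn at (x, y)."""
    return ((fn(x + h, y) - fn(x - h, y)) / (2 * h),
            (fn(x, y + h) - fn(x, y - h)) / (2 * h))

x, y = 1.2, -0.7
fg = lambda a, b: f(a, b) * g(a, b)
lhs = grad(fg, x, y)                     # ∇(f·g) directly
gf, gg = grad(f, x, y), grad(g, x, y)
rhs = tuple(g(x, y) * df + f(x, y) * dg  # (∇f)·g + f·(∇g), componentwise
            for df, dg in zip(gf, gg))
assert all(abs(a - b) < 1e-5 for a, b in zip(lhs, rhs))
```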

Such a rule will hold for any continuous bilinear product operation. Let B : X × Y → Z be a continuous bilinear map between vector spaces, and let f and g be differentiable functions into X and Y, respectively. The only properties of multiplication used in the proof using the limit definition of derivative are that multiplication is continuous and bilinear. So for any continuous bilinear operation,

( B(f, g) )′(x) = B( f′(x), g(x) ) + B( f(x), g′(x) ).

This is also a special case of the product rule for bilinear maps in Banach space.

Derivations in abstract algebra and differential geometry

In abstract algebra, the product rule is the defining property of a derivation: a linear map D satisfying

D(ab) = D(a)·b + a·D(b).

In this terminology, the product rule states that the derivative operator is a derivation on functions.

In differential geometry, a tangent vector to a manifold M at a point p may be defined abstractly as an operator on real-valued functions which behaves like a directional derivative at p: that is, a linear functional v which is a derivation,

v(fg) = v(f)·g(p) + f(p)·v(g).

Generalizing (and dualizing) the formulas of vector calculus to an n-dimensional manifold M, one may take differential forms of degrees k and l, denoted α and β, with the wedge or exterior product operation α ∧ β, as well as the exterior derivative d. Then one has the graded Leibniz rule:

d(α ∧ β) = dα ∧ β + (−1)ᵏ α ∧ dβ.

Applications

Among the applications of the product rule is a proof that

(d/dx) xⁿ = n·x^{n−1}

when n is a positive integer (this rule is true even if n is not positive or is not an integer, but the proof of that must rely on other methods). The proof is by mathematical induction on the exponent n. If n = 0 then xⁿ is constant and n·x^{n−1} = 0. The rule holds in that case because the derivative of a constant function is 0. If the rule holds for any particular exponent n, then for the next value, n + 1, we have

(d/dx) x^{n+1} = (d/dx)(xⁿ · x) = xⁿ·(d/dx)x + x·(d/dx)xⁿ = xⁿ + x·n·x^{n−1} = (n + 1)·xⁿ.

Therefore, if the proposition is true for n, it is true also for n + 1, and therefore for all natural n.
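The power rule proved above can be spot-checked numerically for several exponents with a central-difference derivative:

```python
def power_deriv(n, x, h=1e-6):
    """Central-difference approximation of d/dx x**n."""
    return ((x + h)**n - (x - h)**n) / (2 * h)

x = 1.7
for n in range(1, 6):
    # compare against the power rule n·x^(n−1)
    assert abs(power_deriv(n, x) - n * x**(n - 1)) < 1e-6
```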


References

  1. "Leibniz rule – Encyclopedia of Mathematics".
  2. Michelle Cirillo (August 2007). "Humanizing Calculus". The Mathematics Teacher. 101 (1): 23–27. doi:10.5951/MT.101.1.0023.
  3. Leibniz, G. W. (2005) [1920], The Early Mathematical Manuscripts of Leibniz, translated by J. M. Child, Dover, p. 28, footnote 58, ISBN 978-0-486-44596-0.
  4. Leibniz, G. W. (2005) [1920], The Early Mathematical Manuscripts of Leibniz, translated by J. M. Child, Dover, p. 143, ISBN 978-0-486-44596-0.
  5. Michael Hardy (January 2006). "Combinatorics of Partial Derivatives". The Electronic Journal of Combinatorics. 13. arXiv:math/0601149.
  6. Kriegl, Andreas; Michor, Peter (1997). The Convenient Setting of Global Analysis. American Mathematical Society. p. 59. ISBN 0-8218-0780-3.
  7. Stewart, James (2016), Calculus (8th ed.), Cengage, Section 13.2.