Linearity of differentiation

In calculus, the derivative of any linear combination of functions equals the same linear combination of the derivatives of the functions; [1] this property is known as linearity of differentiation, the rule of linearity, [2] or the superposition rule for differentiation. [3] It is a fundamental property of the derivative that encapsulates in a single rule two simpler rules of differentiation, the sum rule (the derivative of the sum of two functions is the sum of the derivatives) and the constant factor rule (the derivative of a constant multiple of a function is the same constant multiple of the derivative). [4] [5] Thus it can be said that differentiation is linear, or the differential operator is a linear operator. [6]

Statement and derivation

Let f and g be functions, with α and β constants. Now consider

$$\frac{\mathrm{d}}{\mathrm{d}x}\bigl(\alpha f(x) + \beta g(x)\bigr).$$

By the sum rule in differentiation, this is

$$\frac{\mathrm{d}}{\mathrm{d}x}\bigl(\alpha f(x)\bigr) + \frac{\mathrm{d}}{\mathrm{d}x}\bigl(\beta g(x)\bigr),$$

and by the constant factor rule in differentiation, this reduces to

$$\alpha \frac{\mathrm{d}}{\mathrm{d}x} f(x) + \beta \frac{\mathrm{d}}{\mathrm{d}x} g(x).$$

Therefore,

$$\frac{\mathrm{d}}{\mathrm{d}x}\bigl(\alpha f(x) + \beta g(x)\bigr) = \alpha \frac{\mathrm{d}}{\mathrm{d}x} f(x) + \beta \frac{\mathrm{d}}{\mathrm{d}x} g(x).$$

Omitting the brackets, this is often written as:

$$(\alpha f + \beta g)' = \alpha f' + \beta g'.$$
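The identity can be checked with a computer algebra system. The following is a minimal sketch (not drawn from the cited sources) using SymPy, with abstract functions f and g and symbolic constants standing in for α and β:

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
f, g = sp.Function('f'), sp.Function('g')

# Derivative of the linear combination a*f(x) + b*g(x).
lhs = sp.diff(a * f(x) + b * g(x), x)

# The same linear combination of the individual derivatives.
rhs = a * sp.diff(f(x), x) + b * sp.diff(g(x), x)

print(sp.simplify(lhs - rhs))  # prints 0: (a*f + b*g)' = a*f' + b*g'
```

Because f and g are left unspecified, SymPy expresses both sides in terms of Derivative(f(x), x) and Derivative(g(x), x), so the check does not depend on any particular choice of functions.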

Detailed proofs/derivations from definition

We can prove the entire linearity principle at once, or we can prove the individual steps (the constant factor rule and the sum rule) separately. Here, both approaches will be shown.

Proving linearity directly also proves the constant factor rule, the sum rule, and the difference rule as special cases. The sum rule is obtained by setting both constant coefficients to $1$. The difference rule is obtained by setting the first constant coefficient to $1$ and the second constant coefficient to $-1$. The constant factor rule is obtained by setting either the second constant coefficient or the second function to $0$. (From a technical standpoint, the domain of the second function must also be considered; one way to avoid issues is setting the second function equal to the first function and the second constant coefficient equal to $0$. One could also define both the second constant coefficient and the second function to be $0$, where the domain of the second function is a superset of the domain of the first function, among other possibilities.)

Conversely, if we first prove the constant factor rule and the sum rule, we can prove linearity and the difference rule. Proving linearity is done by defining the first and second functions as two other functions multiplied by constant coefficients. Then, as shown in the derivation of the previous section, we can first use the sum rule while differentiating, and then use the constant factor rule, which yields the conclusion for linearity. In order to prove the difference rule, the second function can be redefined as another function multiplied by the constant coefficient $-1$. This, when simplified, gives the difference rule for differentiation.
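As an illustration of the substitutions described above, the three simpler rules can be recovered from the linearity identity in SymPy by plugging in the corresponding coefficients; this is a sketch with arbitrary example functions, not taken from the cited sources:

```python
import sympy as sp

x, a = sp.symbols('x a')
f, g = sp.sin(x), sp.exp(x)  # arbitrary differentiable examples

def d_combo(c1, c2):
    """Derivative of the linear combination c1*f + c2*g."""
    return sp.diff(c1 * f + c2 * g, x)

# Sum rule: both coefficients set to 1.
assert sp.simplify(d_combo(1, 1) - (sp.diff(f, x) + sp.diff(g, x))) == 0
# Difference rule: coefficients 1 and -1.
assert sp.simplify(d_combo(1, -1) - (sp.diff(f, x) - sp.diff(g, x))) == 0
# Constant factor rule: second coefficient set to 0.
assert sp.simplify(d_combo(a, 0) - a * sp.diff(f, x)) == 0
```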

In the proofs/derivations below, [7] [8] the coefficients $a$ and $b$ are used; they correspond to the coefficients $\alpha$ and $\beta$ above.

Linearity (directly)

Let $a, b \in \mathbb{R}$. Let $f$ and $g$ be functions. Let $j$ be a function, where $j$ is defined only where $f$ and $g$ are both defined. (In other words, the domain of $j$ is the intersection of the domains of $f$ and $g$.) Let $x$ be in the domain of $j$. Let $j(x) = a f(x) + b g(x)$.

We want to prove that $j'(x) = a f'(x) + b g'(x)$.

By definition, we can see that

$$j'(x) = \lim_{h \to 0} \frac{j(x+h) - j(x)}{h} = \lim_{h \to 0} \frac{\bigl(a f(x+h) + b g(x+h)\bigr) - \bigl(a f(x) + b g(x)\bigr)}{h} = \lim_{h \to 0} \left( a \frac{f(x+h) - f(x)}{h} + b \frac{g(x+h) - g(x)}{h} \right).$$

In order to use the limit law for the sum of limits, we need to know that $\lim_{h \to 0} a \frac{f(x+h) - f(x)}{h}$ and $\lim_{h \to 0} b \frac{g(x+h) - g(x)}{h}$ both individually exist. For these smaller limits, we need to know that $\lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$ and $\lim_{h \to 0} \frac{g(x+h) - g(x)}{h}$ both individually exist in order to use the coefficient law for limits. By definition, $f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$ and $g'(x) = \lim_{h \to 0} \frac{g(x+h) - g(x)}{h}$. So, if we know that $f'(x)$ and $g'(x)$ both exist, we will know that the two smaller limits both individually exist. This allows us to use the coefficient law for limits to write

$$\lim_{h \to 0} a \frac{f(x+h) - f(x)}{h} = a f'(x)$$

and

$$\lim_{h \to 0} b \frac{g(x+h) - g(x)}{h} = b g'(x).$$

With this, we can apply the limit law for the sum of limits, since we now know that both limits individually exist. From here, we can directly return to the derivative we were working on:

$$j'(x) = \lim_{h \to 0} \left( a \frac{f(x+h) - f(x)}{h} + b \frac{g(x+h) - g(x)}{h} \right) = a f'(x) + b g'(x).$$

Finally, we have shown what we claimed in the beginning: $j'(x) = a f'(x) + b g'(x)$.
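Because the proof works directly from the limit definition, it can be sanity-checked numerically: the difference quotient of $j$ should approach $a f'(x) + b g'(x)$ as $h$ shrinks. The functions, coefficients, and evaluation point below are arbitrary illustrative choices:

```python
import math

a, b = 2.0, -3.0                  # arbitrary constant coefficients
f, g = math.sin, math.exp         # arbitrary differentiable functions
fprime, gprime = math.cos, math.exp

x = 1.0
j = lambda t: a * f(t) + b * g(t)
target = a * fprime(x) + b * gprime(x)   # claimed value of j'(x)

for h in (1e-1, 1e-3, 1e-5):
    quotient = (j(x + h) - j(x)) / h     # difference quotient from the definition
    print(h, quotient, abs(quotient - target))  # error shrinks with h
```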

Sum

Let $f$ and $g$ be functions. Let $j$ be a function, where $j$ is defined only where $f$ and $g$ are both defined. (In other words, the domain of $j$ is the intersection of the domains of $f$ and $g$.) Let $x$ be in the domain of $j$. Let $j(x) = f(x) + g(x)$.

We want to prove that $j'(x) = f'(x) + g'(x)$.

By definition, we can see that

$$j'(x) = \lim_{h \to 0} \frac{j(x+h) - j(x)}{h} = \lim_{h \to 0} \frac{\bigl(f(x+h) + g(x+h)\bigr) - \bigl(f(x) + g(x)\bigr)}{h} = \lim_{h \to 0} \left( \frac{f(x+h) - f(x)}{h} + \frac{g(x+h) - g(x)}{h} \right).$$

In order to use the law for the sum of limits here, we need to show that the individual limits $\lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$ and $\lim_{h \to 0} \frac{g(x+h) - g(x)}{h}$ both exist. By definition, $f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$ and $g'(x) = \lim_{h \to 0} \frac{g(x+h) - g(x)}{h}$, so the limits exist whenever the derivatives $f'(x)$ and $g'(x)$ exist. So, assuming that the derivatives exist, we can continue the above derivation:

$$j'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} + \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} = f'(x) + g'(x).$$

Thus, we have shown what we wanted to show: $j'(x) = f'(x) + g'(x)$.
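The derivation can also be replayed symbolically: SymPy evaluates the limit of the difference quotient of $f + g$ directly, mirroring the steps above (the concrete choices of f and g are arbitrary, not from the cited sources):

```python
import sympy as sp

x, h = sp.symbols('x h')
f, g = sp.sin(x), sp.log(x)   # arbitrary differentiable examples

j = f + g
quotient = (j.subs(x, x + h) - j) / h   # difference quotient of the sum
derivative = sp.limit(quotient, h, 0)   # limit as h -> 0

# Agrees with f'(x) + g'(x):
assert sp.simplify(derivative - (sp.diff(f, x) + sp.diff(g, x))) == 0
```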

Difference

Let $f$ and $g$ be functions. Let $j$ be a function, where $j$ is defined only where $f$ and $g$ are both defined. (In other words, the domain of $j$ is the intersection of the domains of $f$ and $g$.) Let $x$ be in the domain of $j$. Let $j(x) = f(x) - g(x)$.

We want to prove that $j'(x) = f'(x) - g'(x)$.

By definition, we can see that:

$$j'(x) = \lim_{h \to 0} \frac{j(x+h) - j(x)}{h} = \lim_{h \to 0} \frac{\bigl(f(x+h) - g(x+h)\bigr) - \bigl(f(x) - g(x)\bigr)}{h} = \lim_{h \to 0} \left( \frac{f(x+h) - f(x)}{h} - \frac{g(x+h) - g(x)}{h} \right).$$

In order to use the law for the difference of limits here, we need to show that the individual limits $\lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$ and $\lim_{h \to 0} \frac{g(x+h) - g(x)}{h}$ both exist. By definition, $f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$ and $g'(x) = \lim_{h \to 0} \frac{g(x+h) - g(x)}{h}$, so these limits exist whenever the derivatives $f'(x)$ and $g'(x)$ exist. So, assuming that the derivatives exist, we can continue the above derivation:

$$j'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} - \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} = f'(x) - g'(x).$$

Thus, we have shown what we wanted to show: $j'(x) = f'(x) - g'(x)$.
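The existence assumption is essential, and the rule cannot be run in reverse. As a standard cautionary example (included here for illustration; it is not from the cited sources), take $f(x) = g(x) = |x|$: the difference $j = f - g$ is the zero function and is differentiable at $0$, yet neither $f$ nor $g$ is differentiable there, as the one-sided difference quotients show:

```python
import sympy as sp

h = sp.Symbol('h', real=True)

# f = g = |x|, so j = f - g is identically 0 and j'(0) = 0 exists.
# But f'(0) does not exist: the one-sided limits of |h|/h at 0 disagree.
print(sp.limit(sp.Abs(h) / h, h, 0, dir='-'))  # -1
print(sp.limit(sp.Abs(h) / h, h, 0, dir='+'))  # 1
```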

Constant coefficient

Let $f$ be a function. Let $a \in \mathbb{R}$; $a$ will be the constant coefficient. Let $j$ be a function, where $j$ is defined only where $f$ is defined. (In other words, the domain of $j$ is equal to the domain of $f$.) Let $x$ be in the domain of $j$. Let $j(x) = a f(x)$.

We want to prove that $j'(x) = a f'(x)$.

By definition, we can see that:

$$j'(x) = \lim_{h \to 0} \frac{j(x+h) - j(x)}{h} = \lim_{h \to 0} \frac{a f(x+h) - a f(x)}{h} = \lim_{h \to 0} a \frac{f(x+h) - f(x)}{h}.$$

Now, in order to use a limit law for constant coefficients to show that

$$\lim_{h \to 0} a \frac{f(x+h) - f(x)}{h} = a \lim_{h \to 0} \frac{f(x+h) - f(x)}{h},$$

we need to show that $\lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$ exists. However, $f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$, by the definition of the derivative. So, if $f'(x)$ exists, then $\lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$ exists.

Thus, if we assume that $f'(x)$ exists, we can use the limit law and continue our proof:

$$j'(x) = \lim_{h \to 0} a \frac{f(x+h) - f(x)}{h} = a \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = a f'(x).$$

Thus, we have proven that when $j(x) = a f(x)$, we have $j'(x) = a f'(x)$.
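As with the sum rule, this derivation can be replayed with a symbolic coefficient in SymPy. The sketch below (an illustration with an arbitrary choice of f, not from the cited sources) computes the limit of the difference quotient of $a f$ and compares it with $a f'(x)$:

```python
import sympy as sp

x, h, a = sp.symbols('x h a')
f = sp.cos(x)   # arbitrary differentiable example

j = a * f
quotient = (j.subs(x, x + h) - j) / h   # difference quotient of a*f
derivative = sp.limit(quotient, h, 0)   # limit as h -> 0

# Agrees with a * f'(x):
assert sp.simplify(derivative - a * sp.diff(f, x)) == 0
```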

References

  1. Blank, Brian E.; Krantz, Steven George (2006), Calculus: Single Variable, Volume 1, Springer, p. 177, ISBN 9781931914598.
  2. Strang, Gilbert (1991), Calculus, Volume 1, SIAM, pp. 71–72, ISBN 9780961408824.
  3. Stroyan, K. D. (2014), Calculus Using Mathematica, Academic Press, p. 89, ISBN 9781483267975.
  4. Estep, Donald (2002), "20.1 Linear Combinations of Functions", Practical Analysis in One Variable, Undergraduate Texts in Mathematics, Springer, pp. 259–260, ISBN 9780387954844.
  5. Zorn, Paul (2010), Understanding Real Analysis, CRC Press, p. 184, ISBN 9781439894323.
  6. Gockenbach, Mark S. (2011), Finite-Dimensional Linear Algebra, Discrete Mathematics and Its Applications, CRC Press, p. 103, ISBN 9781439815649.
  7. "Differentiation Rules". CEMC's Open Courseware. Retrieved 3 May 2022.
  8. Dawkins, Paul. "Proof Of Various Derivative Properties". Paul's Online Notes. Retrieved 3 May 2022.