This article is a summary of differentiation rules, that is, rules for computing the derivative of a function in calculus.
Unless otherwise stated, all functions are functions of real numbers ($\mathbb{R}$) that return real values, although, more generally, the formulas below apply wherever they are well defined, [1] [2] including the case of complex numbers ($\mathbb{C}$). [3]
For any value of $c$, where $c \in \mathbb{R}$, if $f(x)$ is the constant function given by $f(x) = c$, then $\frac{df}{dx} = 0$. [4]
Let $c \in \mathbb{R}$ and $f(x) = c$. By the definition of the derivative:

$$f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h} = \lim_{h \to 0} \frac{c - c}{h} = \lim_{h \to 0} 0 = 0.$$
This computation shows that the derivative of any constant function is 0.
The derivative of the function $f$ at a point $x$ is the slope of the line tangent to the curve at the point $(x, f(x))$. The slope of the constant function is 0, because the tangent line to the constant function is horizontal and its angle is 0.
In other words, the value of the constant function, $y = c$, will not change as the value of $x$ increases or decreases.
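The constant rule can be checked numerically with a central-difference approximation (a minimal sketch; the helper name `diff` is illustrative, not from the article):

```python
def diff(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: 7.0        # a constant function f(x) = c with c = 7
print(diff(f, 3.0))      # 0.0 -- the derivative of a constant is zero
```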
For any functions $f$ and $g$ and any real numbers $a$ and $b$, the derivative of the function $h(x) = af(x) + bg(x)$ with respect to $x$ is $h'(x) = af'(x) + bg'(x)$.
In Leibniz's notation, this formula is written as:

$$\frac{d(af + bg)}{dx} = a\frac{df}{dx} + b\frac{dg}{dx}.$$

Special cases include:

- The constant multiple rule: $(af)' = af'$
- The sum rule: $(f + g)' = f' + g'$
- The difference rule: $(f - g)' = f' - g'$
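Linearity can be verified at a sample point with a central-difference approximation (a sketch; the helper `diff` and the choice $f = \sin$, $g = \exp$ are illustrative):

```python
import math

def diff(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# d/dx [a f + b g] = a f' + b g', checked at one point for f = sin, g = exp
a, b, x0 = 2.0, -3.0, 0.5
lhs = diff(lambda x: a * math.sin(x) + b * math.exp(x), x0)
rhs = a * math.cos(x0) + b * math.exp(x0)
print(abs(lhs - rhs) < 1e-6)  # True
```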
For the functions $f$ and $g$, the derivative of the function $h(x) = f(x)g(x)$ with respect to $x$ is $h'(x) = f'(x)g(x) + f(x)g'(x)$.
In Leibniz's notation, this formula is written:

$$\frac{d(fg)}{dx} = \frac{df}{dx}\,g + f\,\frac{dg}{dx}.$$
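A numerical spot-check of the product rule (a sketch; the helper and the example functions $\sin$ and $\exp$ are illustrative):

```python
import math

def diff(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# (fg)' = f'g + fg', checked at one point for f = sin, g = exp
x0 = 0.7
lhs = diff(lambda x: math.sin(x) * math.exp(x), x0)
rhs = math.cos(x0) * math.exp(x0) + math.sin(x0) * math.exp(x0)
print(abs(lhs - rhs) < 1e-6)  # True
```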
The derivative of the function $h(x) = f(g(x))$ is:

$$h'(x) = f'(g(x)) \cdot g'(x).$$

In Leibniz's notation, this formula is written as:

$$\frac{d}{dx} f(g(x)) = \left.\frac{df}{dg}\right|_{g(x)} \cdot \frac{dg}{dx},$$

often abridged to:

$$\frac{df}{dx} = \frac{df}{dg} \cdot \frac{dg}{dx}.$$

Focusing on the notion of maps, and the differential being a map $\mathrm{D}$, this formula is written in a more concise way as:

$$[\mathrm{D}(f \circ g)]_x = [\mathrm{D}f]_{g(x)} \cdot [\mathrm{D}g]_x.$$
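The chain rule can likewise be spot-checked numerically (a sketch; the composite $\sin(x^2)$ is an illustrative choice):

```python
import math

def diff(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# h(x) = f(g(x)) with f = sin, g(x) = x^2, so h'(x) = cos(x^2) * 2x
x0 = 0.8
lhs = diff(lambda x: math.sin(x * x), x0)
rhs = math.cos(x0 * x0) * 2 * x0
print(abs(lhs - rhs) < 1e-6)  # True
```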
If the function $f$ has an inverse function $g$, meaning that $g(f(x)) = x$ and $f(g(y)) = y$, then:

$$g'(y) = \frac{1}{f'(g(y))}.$$

In Leibniz notation, this formula is written as:

$$\frac{dx}{dy} = \frac{1}{dy/dx}.$$
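A numerical check of the inverse function rule, using the illustrative pair $f = \exp$, $g = \ln$ (so $g'(y) = 1/f'(g(y)) = 1/y$):

```python
import math

def diff(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# f = exp with inverse g = log: g'(y) = 1 / f'(g(y)) = 1 / y
y0 = 2.0
lhs = diff(math.log, y0)
rhs = 1.0 / math.exp(math.log(y0))   # 1 / f'(g(y0))
print(abs(lhs - rhs) < 1e-6)  # True
```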
If $f(x) = x^r$, for any real number $r \neq 0$, then:

$$f'(x) = r x^{r-1}.$$

When $r = 1$, this formula becomes the special case that, if $f(x) = x$, then $f'(x) = 1$.
Combining the power rule with the sum and constant multiple rules permits the computation of the derivative of any polynomial.
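Applied coefficient by coefficient, this differentiates any polynomial; a short sketch (the list-of-coefficients representation and the helper name are illustrative):

```python
def poly_derivative(coeffs):
    """Given coeffs[k] = coefficient of x**k, return the derivative's coefficients
    via the power rule: d/dx (c x^k) = k c x^(k-1)."""
    return [k * c for k, c in enumerate(coeffs)][1:]

# d/dx (4 + 3x + 5x^3) = 3 + 15x^2
print(poly_derivative([4, 3, 0, 5]))  # [3, 0, 15]
```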
The derivative of $h(x) = \frac{1}{f(x)}$ for any (nonvanishing) function $f$ is:

$$h'(x) = -\frac{f'(x)}{(f(x))^2}$$

wherever $f$ is nonzero.
In Leibniz's notation, this formula is written:

$$\frac{d}{dx}\left(\frac{1}{f}\right) = -\frac{1}{f^2}\frac{df}{dx}.$$
The reciprocal rule can be derived either from the quotient rule or from the combination of power rule and chain rule.
If $f$ and $g$ are functions, then:

$$\left(\frac{f}{g}\right)' = \frac{f'g - g'f}{g^2}$$

wherever $g$ is nonzero.
This can be derived from the product rule and the reciprocal rule.
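A quick numerical check of the quotient rule, using the illustrative quotient $\tan = \sin/\cos$, whose derivative by the rule is $(\cos^2 + \sin^2)/\cos^2 = 1/\cos^2$:

```python
import math

def diff(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# tan = sin/cos; the quotient rule gives (sin/cos)' = 1/cos^2
x0 = 0.5
lhs = diff(math.tan, x0)
rhs = 1.0 / math.cos(x0) ** 2
print(abs(lhs - rhs) < 1e-5)  # True
```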
The elementary power rule generalizes considerably. The most general power rule is the functional power rule: for any functions $f$ and $g$,

$$(f^g)' = \left(e^{g \ln f}\right)' = f^g\left(f'\frac{g}{f} + g'\ln f\right),$$

wherever both sides are well defined.
Special cases:

- If $f(x) = x^a$, then $f'(x) = a x^{a-1}$ when $a$ is any nonzero real number and $x$ is positive.
- If $f(x) = c^x$, then $f'(x) = c^x \ln c$ when $c > 0$. The equation above is true for all $c$, but the derivative for $c < 0$ yields a complex number.
- If $f(x) = x^x$, then $f'(x) = x^x(1 + \ln x)$. The equation above is also true for all $x$ but yields a complex number if $x < 0$.

Additionally,

$$\frac{d}{dx} W(x) = \frac{W(x)}{x\,(1 + W(x))},$$

where $W$ is the Lambert W function.
The logarithmic derivative is another way of stating the rule for differentiating the logarithm of a function (using the chain rule):

$$(\ln f)' = \frac{f'}{f}$$

wherever $f$ is positive.
Logarithmic differentiation is a technique which uses logarithms and their differentiation rules to simplify certain expressions before actually applying the derivative.
Logarithms can be used to remove exponents, convert products into sums, and convert division into subtraction—each of which may lead to a simplified expression for taking derivatives.
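As an example, differentiating $f(x) = x^x$ by taking logarithms: $\ln f = x \ln x$, so $f'/f = \ln x + 1$ and $f'(x) = x^x(1 + \ln x)$. A numerical spot-check (the helper `diff` is illustrative):

```python
import math

def diff(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Logarithmic differentiation of x^x gives f'(x) = x^x (1 + ln x)
x0 = 1.5
lhs = diff(lambda x: x ** x, x0)
rhs = x0 ** x0 * (1 + math.log(x0))
print(abs(lhs - rhs) < 1e-5)  # True
```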
The derivatives in the table above are for when the range of the inverse secant is $[0, \pi]$ and when the range of the inverse cosecant is $\left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$.
It is common to additionally define an inverse tangent function with two arguments, $\arctan(y, x)$. Its value lies in the range $(-\pi, \pi]$ and reflects the quadrant of the point $(x, y)$. For the first and fourth quadrants (i.e., $x > 0$), one has $\arctan(y, x) = \arctan(y/x)$. Its partial derivatives are:

$$\frac{\partial \arctan(y, x)}{\partial y} = \frac{x}{x^2 + y^2} \qquad \text{and} \qquad \frac{\partial \arctan(y, x)}{\partial x} = \frac{-y}{x^2 + y^2}.$$
$$\Gamma'(x) = \Gamma(x)\,\psi(x),$$

with $\psi(x)$ being the digamma function.
Suppose that it is required to differentiate with respect to $x$ the function:

$$F(x) = \int_{a(x)}^{b(x)} f(x, t)\, dt,$$

where the functions $f(x, t)$ and $\frac{\partial}{\partial x} f(x, t)$ are both continuous in both $t$ and $x$ in some region of the $(t, x)$ plane, including $a(x) \leq t \leq b(x)$, $x_0 \leq x \leq x_1$, and the functions $a(x)$ and $b(x)$ are both continuous and both have continuous derivatives for $x_0 \leq x \leq x_1$. Then, for $x_0 \leq x \leq x_1$:

$$F'(x) = f(x, b(x))\, b'(x) - f(x, a(x))\, a'(x) + \int_{a(x)}^{b(x)} \frac{\partial}{\partial x} f(x, t)\, dt.$$
This formula is the general form of the Leibniz integral rule and can be derived using the fundamental theorem of calculus.
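The rule can be spot-checked numerically for an illustrative choice $f(x, t) = xt$, $a(x) = x$, $b(x) = x^2$ (the quadrature and difference helpers are sketches, not part of the article):

```python
def integrate(f, a, b, n=2000):
    """Midpoint-rule quadrature (illustrative helper)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def diff(f, x, h=1e-5):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# F(x) = ∫_{x}^{x^2} x*t dt; the Leibniz rule gives
# F'(x) = f(x, b) b' - f(x, a) a' + ∫ ∂f/∂x dt, with ∂f/∂x = t
x0 = 1.3
lhs = diff(lambda x: integrate(lambda t: x * t, x, x * x), x0)
rhs = (x0 * x0**2) * (2 * x0) - (x0 * x0) * 1.0 + integrate(lambda t: t, x0, x0**2)
print(abs(lhs - rhs) < 1e-4)  # True
```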
Some rules exist for computing the $n$-th derivative of functions, where $n$ is a positive integer, including:
If $f$ and $g$ are $n$-times differentiable, then:

$$\frac{d^n}{dx^n}[f(g(x))] = n! \sum_{\{k_m\}} f^{(r)}(g(x)) \prod_{m=1}^{n} \frac{1}{k_m!}\left(\frac{g^{(m)}(x)}{m!}\right)^{k_m},$$

where $r = \sum_{m=1}^{n} k_m$ and the set $\{k_m\}$ consists of all non-negative integer solutions of the Diophantine equation $\sum_{m=1}^{n} m\, k_m = n$.
If $f$ and $g$ are $n$-times differentiable, then:

$$\frac{d^n}{dx^n}[f(x)g(x)] = \sum_{k=0}^{n} \binom{n}{k}\, f^{(n-k)}(x)\, g^{(k)}(x).$$
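The general Leibniz rule can be checked against a closed form for the illustrative product $\sin(x)\,e^x$, whose third derivative is $e^x(2\cos x - 2\sin x)$:

```python
import math

def sin_deriv(n, x):
    """n-th derivative of sin: sin^(n)(x) = sin(x + n*pi/2)."""
    return math.sin(x + n * math.pi / 2)

def product_nth_deriv(n, x):
    """General Leibniz rule for f = sin, g = exp (every derivative of exp is exp)."""
    return sum(math.comb(n, k) * sin_deriv(n - k, x) * math.exp(x)
               for k in range(n + 1))

x0 = 0.7
lhs = product_nth_deriv(3, x0)
rhs = math.exp(x0) * (2 * math.cos(x0) - 2 * math.sin(x0))  # closed form
print(abs(lhs - rhs) < 1e-12)  # True
```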
In calculus, the chain rule is a formula that expresses the derivative of the composition of two differentiable functions f and g in terms of the derivatives of f and g. More precisely, if $h = f \circ g$ is the function such that $h(x) = f(g(x))$ for every $x$, then the chain rule is, in Lagrange's notation, $h'(x) = f'(g(x))\, g'(x)$, or, equivalently, $h' = (f' \circ g) \cdot g'$.
In mathematics, the derivative is a fundamental tool that quantifies the sensitivity to change of a function's output with respect to its input. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. The process of finding a derivative is called differentiation.
L'Hôpital's rule or L'Hospital's rule, also known as Bernoulli's rule, is a mathematical theorem that allows evaluating limits of indeterminate forms using derivatives. Application of the rule often converts an indeterminate form to an expression that can be easily evaluated by substitution. The rule is named after the 17th-century French mathematician Guillaume de l'Hôpital. Although the rule is often attributed to de l'Hôpital, the theorem was first introduced to him in 1694 by the Swiss mathematician Johann Bernoulli.
In mathematics, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor series are named after Brook Taylor, who introduced them in 1715. A Taylor series is also called a Maclaurin series when 0 is the point where the derivatives are considered, after Colin Maclaurin, who made extensive use of this special case of Taylor series in the 18th century.
In calculus, Taylor's theorem gives an approximation of a $k$-times differentiable function around a given point by a polynomial of degree $k$, called the $k$-th-order Taylor polynomial. For a smooth function, the Taylor polynomial is the truncation at the order $k$ of the Taylor series of the function. The first-order Taylor polynomial is the linear approximation of the function, and the second-order Taylor polynomial is often referred to as the quadratic approximation. There are several versions of Taylor's theorem, some giving explicit estimates of the approximation error of the function by its Taylor polynomial.
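A small illustration of a Taylor polynomial (the choice of $\exp$, the expansion point 0, and the order are for the example only):

```python
import math

def taylor_exp(x, k):
    """k-th order Taylor polynomial of exp about 0: sum of x^n / n! for n = 0..k."""
    return sum(x ** n / math.factorial(n) for n in range(k + 1))

# The 8th-order polynomial already approximates e^0.5 very closely
print(abs(taylor_exp(0.5, 8) - math.exp(0.5)) < 1e-7)  # True
```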
In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation; it is indeed derived using the product rule.
In calculus, the power rule is used to differentiate functions of the form $f(x) = x^r$, whenever $r$ is a real number. Since differentiation is a linear operation on the space of differentiable functions, polynomials can also be differentiated using this rule. The power rule underlies the Taylor series as it relates a power series with a function's derivatives.
In calculus, the inverse function rule is a formula that expresses the derivative of the inverse of a bijective and differentiable function f in terms of the derivative of f. More precisely, if the inverse of $f$ is denoted as $f^{-1}$, where $f^{-1}(y) = x$ if and only if $f(x) = y$, then the inverse function rule is, in Lagrange's notation,

$$\left[f^{-1}\right]'(y) = \frac{1}{f'\left(f^{-1}(y)\right)}.$$
In calculus, integration by substitution, also known as u-substitution, reverse chain rule or change of variables, is a method for evaluating integrals and antiderivatives. It is the counterpart to the chain rule for differentiation, and can loosely be thought of as using the chain rule "backwards."
Integration is the basic operation in integral calculus. While differentiation has straightforward rules by which the derivative of a complicated function can be found by differentiating its simpler component functions, integration does not, so tables of known integrals are often useful. This page lists some of the most common antiderivatives.
In calculus, the product rule is a formula used to find the derivatives of products of two or more functions. For two functions, it may be stated in Lagrange's notation as $(u \cdot v)' = u' \cdot v + u \cdot v'$ or in Leibniz's notation as

$$\frac{d}{dx}(u \cdot v) = \frac{du}{dx} \cdot v + u \cdot \frac{dv}{dx}.$$
In mathematics, the inverse trigonometric functions are the inverse functions of the trigonometric functions, under suitably restricted domains. Specifically, they are the inverses of the sine, cosine, tangent, cotangent, secant, and cosecant functions, and are used to obtain an angle from any of the angle's trigonometric ratios. Inverse trigonometric functions are widely used in engineering, navigation, physics, and geometry.
In mathematics, the Legendre transformation, first introduced by Adrien-Marie Legendre in 1787 when studying the minimal surface problem, is an involutive transformation on real-valued functions that are convex on a real variable. Specifically, if a real-valued multivariable function is convex on one of its independent real variables, then the Legendre transform with respect to this variable is applicable to the function.
In mathematics, the exponential function can be characterized in many ways. This article presents some common characterizations, discusses why each makes sense, and proves that they are all equivalent.
In calculus, the Leibniz integral rule for differentiation under the integral sign, named after Gottfried Wilhelm Leibniz, states that for an integral of the form

$$\int_{a(x)}^{b(x)} f(x, t)\, dt,$$

where $-\infty < a(x), b(x) < \infty$ and the integrands are functions dependent on $x$, the derivative of this integral is expressible as

$$\frac{d}{dx}\left(\int_{a(x)}^{b(x)} f(x, t)\, dt\right) = f(x, b(x))\,\frac{d}{dx} b(x) - f(x, a(x))\,\frac{d}{dx} a(x) + \int_{a(x)}^{b(x)} \frac{\partial}{\partial x} f(x, t)\, dt,$$

where the partial derivative $\frac{\partial}{\partial x}$ indicates that inside the integral, only the variation of $f(x, t)$ with $x$ is considered in taking the derivative.
In mathematics, the binomial differential equation is an ordinary differential equation of the form $\left(y'\right)^m = f(x, y)$, where $m$ is a natural number and $f(x, y)$ is a polynomial that is analytic in both variables.
In calculus, logarithmic differentiation or differentiation by taking logarithms is a method used to differentiate functions by employing the logarithmic derivative of a function $f$,

$$(\ln f)' = \frac{f'}{f}.$$
In integral calculus, the tangent half-angle substitution is a change of variables used for evaluating integrals, which converts a rational function of trigonometric functions of $x$ into an ordinary rational function of $t$ by setting $t = \tan\frac{x}{2}$. This is the one-dimensional stereographic projection of the unit circle parametrized by angle measure onto the real line. The general transformation formula is:

$$\int f(\sin x, \cos x)\, dx = \int f\left(\frac{2t}{1 + t^2}, \frac{1 - t^2}{1 + t^2}\right) \frac{2\, dt}{1 + t^2}.$$

In mathematics, integrals of inverse functions can be computed by means of a formula that expresses the antiderivatives of the inverse $f^{-1}$ of a continuous and invertible function $f$, in terms of $f^{-1}$ and an antiderivative of $f$. This formula was published in 1905 by Charles-Ange Laisant.
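Laisant's formula, $\int f^{-1}(y)\,dy = y\,f^{-1}(y) - F(f^{-1}(y)) + C$ with $F$ an antiderivative of $f$, can be checked numerically for the illustrative pair $f = \exp$, $f^{-1} = \ln$, $F = \exp$ (the quadrature helper is a sketch):

```python
import math

def integrate(f, a, b, n=10000):
    """Midpoint-rule quadrature (illustrative helper)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# ∫_1^e ln(y) dy directly, versus [y ln y - exp(ln y)] at the endpoints
a, b = 1.0, math.e
direct = integrate(math.log, a, b)
laisant = (b * math.log(b) - math.exp(math.log(b))) - (a * math.log(a) - math.exp(math.log(a)))
print(abs(direct - laisant) < 1e-6)  # True
```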
These rules are given in many books, on both elementary and advanced calculus, in pure and applied mathematics.