Second derivative


The second derivative of a quadratic function is constant.

In calculus, the second derivative, or the second-order derivative, of a function f is the derivative of the derivative of f. Informally, the second derivative can be phrased as "the rate of change of the rate of change"; for example, the second derivative of the position of an object with respect to time is the instantaneous acceleration of the object, or the rate at which the velocity of the object is changing with respect to time. In Leibniz notation:

$$a = \frac{dv}{dt} = \frac{d^2x}{dt^2},$$

where a is acceleration, v is velocity, t is time, x is position, and d is the instantaneous "delta" or change. The last expression is the second derivative of position (x) with respect to time.
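
For instance, for motion with constant acceleration $a$, differentiating the position twice recovers the acceleration:

$$x(t) = x_0 + v_0 t + \tfrac{1}{2} a t^2 \quad\Longrightarrow\quad \frac{dx}{dt} = v_0 + a t \quad\Longrightarrow\quad \frac{d^2x}{dt^2} = a.$$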

On the graph of a function, the second derivative corresponds to the curvature or concavity of the graph. The graph of a function with a positive second derivative is concave up, while the graph of a function with a negative second derivative curves in the opposite way.

Second derivative power rule

The power rule for the first derivative, if applied twice, will produce the second derivative power rule as follows:

$$\frac{d^2}{dx^2}\left[x^n\right] = \frac{d}{dx}\frac{d}{dx}\left[x^n\right] = \frac{d}{dx}\left[n x^{n-1}\right] = n(n-1)x^{n-2}.$$
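
For example, applying this rule to $x^5$:

$$\frac{d^2}{dx^2}\left[x^5\right] = 5 \cdot 4 \, x^{3} = 20x^3.$$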

Notation

The second derivative of a function $f$ is usually denoted $f''$. [1] [2] That is:

$$f'' = \left(f'\right)'.$$

When using Leibniz's notation for derivatives, the second derivative of a dependent variable y with respect to an independent variable x is written

$$\frac{d^2y}{dx^2}.$$

This notation is derived from the following formula:

$$\frac{d^2y}{dx^2} = \frac{d}{dx}\left(\frac{dy}{dx}\right).$$
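
For instance, if $y = \sin x$, then $\dfrac{dy}{dx} = \cos x$ and

$$\frac{d^2y}{dx^2} = \frac{d}{dx}\left(\cos x\right) = -\sin x, \qquad \text{equivalently}\quad f''(x) = -\sin x.$$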

Example

Given a function $f$, the derivative of $f$ is the function $f'$; the second derivative of $f$ is then the derivative of $f'$.
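
For instance, taking $f(x) = x^3$ as an illustrative choice of function:

$$f(x) = x^3, \qquad f'(x) = 3x^2, \qquad f''(x) = \frac{d}{dx}\left[3x^2\right] = 6x.$$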

Relation to the graph

A plot of $f(x) = \sin(2x)$ from $-\pi/4$ to $5\pi/4$. The tangent line is blue where the curve is concave up, green where the curve is concave down, and red at the inflection points ($0$, $\pi/2$, and $\pi$).

Concavity

The second derivative of a function f can be used to determine the concavity of the graph of f. [2] A function whose second derivative is positive is said to be concave up (also referred to as convex), meaning that the tangent line near the point where it touches the function will lie below the graph of the function. Similarly, a function whose second derivative is negative will be concave down (sometimes simply called concave), and its tangent line will lie above the graph of the function near the point of contact.
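
For example, for $f(x) = x^2$ and $g(x) = -x^2$:

$$f''(x) = 2 > 0 \ \text{(concave up everywhere)}, \qquad g''(x) = -2 < 0 \ \text{(concave down everywhere)}.$$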

Inflection points

If the second derivative of a function changes sign, the graph of the function will switch from concave down to concave up, or vice versa. A point where this occurs is called an inflection point. Assuming the second derivative is continuous, it must take a value of zero at any inflection point, although not every point where the second derivative is zero is necessarily a point of inflection.
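
For example, with $f(x) = x^3$ and $g(x) = x^4$:

$$f''(x) = 6x \ \text{changes sign at } x = 0 \ \text{(an inflection point)}, \qquad g''(x) = 12x^2 \ge 0 \ \text{(zero at } x = 0 \text{ but no sign change, so no inflection)}.$$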

Second derivative test

The relation between the second derivative and the graph can be used to test whether a stationary point for a function (i.e., a point where $f'(x) = 0$) is a local maximum or a local minimum. Specifically,

- if $f''(x) < 0$, then $f$ has a local maximum at $x$;
- if $f''(x) > 0$, then $f$ has a local minimum at $x$;
- if $f''(x) = 0$, the second derivative test says nothing about the point $x$, which may be an inflection point.

The reason the second derivative produces these results can be seen by way of a real-world analogy. Consider a vehicle that at first is moving forward at a great velocity, but with a negative acceleration. Clearly, the position of the vehicle at the point where the velocity reaches zero will be the maximum distance from the starting position – after this time, the velocity will become negative and the vehicle will reverse. The same is true for the minimum, with a vehicle that at first has a very negative velocity but positive acceleration.
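
For instance, applying the test to $f(x) = x^3 - 3x$: the stationary points satisfy $f'(x) = 3x^2 - 3 = 0$, i.e. $x = \pm 1$, and $f''(x) = 6x$ gives

$$f''(-1) = -6 < 0 \ \text{(local maximum at } x = -1\text{)}, \qquad f''(1) = 6 > 0 \ \text{(local minimum at } x = 1\text{)}.$$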

Limit

It is possible to write a single limit for the second derivative:

$$f''(x) = \lim_{h \to 0} \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}.$$

The limit is called the second symmetric derivative. [3] [4] The second symmetric derivative may exist even when the (usual) second derivative does not.

The expression on the right can be written as a difference quotient of difference quotients:

$$\frac{f(x+h) - 2f(x) + f(x-h)}{h^2} = \frac{\dfrac{f(x+h) - f(x)}{h} - \dfrac{f(x) - f(x-h)}{h}}{h}.$$

This limit can be viewed as a continuous version of the second difference for sequences.
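
As a quick numerical illustration (a minimal Python sketch; the helper name second_difference is chosen here only for clarity), the symmetric difference quotient for $f(x) = \sin x$ at $x = 1$ approaches the exact value $f''(1) = -\sin 1$ as $h$ shrinks:

    import math

    def second_difference(f, x, h):
        # Symmetric second difference quotient (f(x+h) - 2 f(x) + f(x-h)) / h^2,
        # which approximates f''(x) as h -> 0.
        return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

    x = 1.0
    exact = -math.sin(x)  # exact second derivative of sin at x = 1
    for h in (1e-1, 1e-2, 1e-3):
        print(h, second_difference(math.sin, x, h), exact)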

However, the existence of the above limit does not mean that the function has a second derivative. The limit above gives a way of calculating the second derivative when it exists, but does not provide a definition. A counterexample is the sign function $\operatorname{sgn}(x)$, which is defined as:

$$\operatorname{sgn}(x) = \begin{cases} -1 & \text{if } x < 0, \\ 0 & \text{if } x = 0, \\ 1 & \text{if } x > 0. \end{cases}$$

The sign function is not continuous at zero, and therefore the second derivative at $x = 0$ does not exist. But the above limit exists for $x = 0$:

$$\lim_{h \to 0} \frac{\operatorname{sgn}(0+h) - 2\operatorname{sgn}(0) + \operatorname{sgn}(0-h)}{h^2} = \lim_{h \to 0} \frac{\operatorname{sgn}(h) + \operatorname{sgn}(-h)}{h^2} = \lim_{h \to 0} \frac{0}{h^2} = 0.$$

Quadratic approximation

Just as the first derivative is related to linear approximations, the second derivative is related to the best quadratic approximation for a function f. This is the quadratic function whose first and second derivatives are the same as those of f at a given point. The formula for the best quadratic approximation to a function f around the point x = a is

$$f(x) \approx f(a) + f'(a)(x - a) + \tfrac{1}{2} f''(a)(x - a)^2.$$

This quadratic approximation is the second-order Taylor polynomial for the function centered at x = a.
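
For example, for $f(x) = e^x$ around $x = 0$, where $f(0) = f'(0) = f''(0) = 1$, the best quadratic approximation is

$$e^x \approx 1 + x + \tfrac{1}{2}x^2.$$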

Eigenvalues and eigenvectors of the second derivative

For many combinations of boundary conditions explicit formulas for eigenvalues and eigenvectors of the second derivative can be obtained. For example, on the interval $[0, L]$ with homogeneous Dirichlet boundary conditions (i.e., $v(0) = v(L) = 0$, where $v$ is the eigenvector), the eigenvalues are $\lambda_j = -\dfrac{j^2 \pi^2}{L^2}$ and the corresponding eigenvectors (also called eigenfunctions) are $v_j(x) = \sqrt{2/L}\, \sin\!\left(\dfrac{j \pi x}{L}\right)$. Here, $v_j''(x) = \lambda_j v_j(x)$, for $j = 1, \ldots, \infty$.
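
Differentiating one of these eigenfunctions twice confirms the eigenvalue relation directly (with the interval $[0, L]$ and the sine form above taken as the working assumptions):

$$\frac{d^2}{dx^2} \sin\!\left(\frac{j\pi x}{L}\right) = -\frac{j^2\pi^2}{L^2} \sin\!\left(\frac{j\pi x}{L}\right), \qquad \sin(0) = \sin(j\pi) = 0,$$

so each $v_j$ satisfies both the differential equation and the Dirichlet boundary conditions.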

For other well-known cases, see Eigenvalues and eigenvectors of the second derivative.

Generalization to higher dimensions

The Hessian

The second derivative generalizes to higher dimensions through the notion of second partial derivatives. For a function f: R³ → R, these include the three second-order partials

$$\frac{\partial^2 f}{\partial x^2}, \quad \frac{\partial^2 f}{\partial y^2}, \quad \text{and} \quad \frac{\partial^2 f}{\partial z^2},$$

and the mixed partials

$$\frac{\partial^2 f}{\partial x \, \partial y}, \quad \frac{\partial^2 f}{\partial x \, \partial z}, \quad \text{and} \quad \frac{\partial^2 f}{\partial y \, \partial z}.$$

If the function's second partial derivatives are all continuous, then the mixed partials are equal (by the symmetry of second derivatives) and these fit together into a symmetric matrix known as the Hessian. The eigenvalues of this matrix can be used to implement a multivariable analogue of the second derivative test. (See also the second partial derivative test.)
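
Written out for the three-variable case, these six quantities (the mixed partials appearing twice by symmetry) assemble into the 3 × 3 Hessian matrix:

$$H(f) = \begin{pmatrix} \dfrac{\partial^2 f}{\partial x^2} & \dfrac{\partial^2 f}{\partial x\,\partial y} & \dfrac{\partial^2 f}{\partial x\,\partial z} \\[1ex] \dfrac{\partial^2 f}{\partial y\,\partial x} & \dfrac{\partial^2 f}{\partial y^2} & \dfrac{\partial^2 f}{\partial y\,\partial z} \\[1ex] \dfrac{\partial^2 f}{\partial z\,\partial x} & \dfrac{\partial^2 f}{\partial z\,\partial y} & \dfrac{\partial^2 f}{\partial z^2} \end{pmatrix}.$$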

The Laplacian

Another common generalization of the second derivative is the Laplacian. This is the differential operator $\nabla^2$ (or $\Delta$) defined by

$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}.$$

The Laplacian of a function is equal to the divergence of the gradient, and the trace of the Hessian matrix.
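
For example, for $f(x, y, z) = x^2 + y^2 + z^2$, each of the three pure second partials equals $2$, so

$$\nabla^2 f = 2 + 2 + 2 = 6,$$

which is also the trace of the (constant) Hessian $\operatorname{diag}(2, 2, 2)$.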


References

  1. "Content - The second derivative". amsi.org.au. Retrieved 2020-09-16.
  2. 1 2 "Second Derivatives". Math24. Retrieved 2020-09-16.
  3. A. Zygmund (2002). Trigonometric Series. Cambridge University Press. pp. 22–23. ISBN   978-0-521-89053-3.
  4. Thomson, Brian S. (1994). Symmetric Properties of Real Functions. Marcel Dekker. p. 1. ISBN   0-8247-9230-0.
