In calculus, the **second derivative**, or the **second-order derivative**, of a function *f* is the derivative of the derivative of *f*. Informally, the second derivative can be phrased as "the rate of change of the rate of change"; for example, the second derivative of the position of an object with respect to time is the instantaneous acceleration of the object, or the rate at which the velocity of the object is changing with respect to time. In Leibniz notation:

$$a = \frac{dv}{dt} = \frac{d^2x}{dt^2},$$

where *a* is acceleration, *v* is velocity, *t* is time, *x* is position, and d is the instantaneous "delta" or change. The last expression, $\tfrac{d^2x}{dt^2}$, is the second derivative of position (*x*) with respect to time.
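The acceleration relationship can be checked numerically. The following sketch (my own example, not from the article) uses a central-difference approximation of the second derivative on the uniformly accelerated position function $x(t) = \tfrac{1}{2}at^2$, with an assumed constant $a = 9.81$:

```python
def second_derivative(f, t, h=1e-4):
    """Central-difference estimate of f''(t)."""
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

a = 9.81  # assumed constant acceleration (free fall, m/s^2)
position = lambda t: 0.5 * a * t**2  # x(t) for uniformly accelerated motion

# The second derivative of position should recover the acceleration.
estimate = second_derivative(position, 2.0)
print(abs(estimate - a) < 1e-4)  # True
```

Since the position function is quadratic, the central difference is exact up to floating-point rounding.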

On the graph of a function, the second derivative corresponds to the curvature or concavity of the graph. The graph of a function with a positive second derivative is concave upward, while the graph of a function with a negative second derivative is concave downward.

The power rule for the first derivative, if applied twice, produces the second-derivative power rule as follows:

$$\frac{d^2}{dx^2}\left[x^n\right] = n(n-1)x^{n-2}.$$

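As a quick sanity check (a sketch of my own, not from the article), the second-derivative power rule, $\tfrac{d^2}{dx^2}[x^n] = n(n-1)x^{n-2}$, can be compared against a central-difference approximation at a sample point:

```python
def second_derivative(f, x, h=1e-4):
    """Central-difference estimate of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

n, x = 5, 1.5
numeric = second_derivative(lambda t: t**n, x)
exact = n * (n - 1) * x**(n - 2)  # power rule: 20 * 1.5**3 = 67.5

print(abs(numeric - exact) < 1e-4)  # True
```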
The second derivative of a function is usually denoted $f''$.^{ [1] }^{ [2] } That is:

$$f'' = (f')'.$$
When using Leibniz's notation for derivatives, the second derivative of a dependent variable *y* with respect to an independent variable *x* is written

$$\frac{d^2y}{dx^2}.$$

This notation is derived from the following formula:

$$\frac{d^2y}{dx^2} = \frac{d}{dx}\left(\frac{dy}{dx}\right).$$
Given the function

$$f(x) = x^3,$$

the derivative of *f* is the function

$$f'(x) = 3x^2.$$

The second derivative of *f* is the derivative of $f'$, namely

$$f''(x) = 6x.$$

The second derivative of a function *f* can be used to determine the **concavity** of the graph of *f*.^{ [2] } A function whose second derivative is positive is said to be concave up (also referred to as convex), meaning that the tangent line near the point where it touches the function will lie below the graph of the function. Similarly, a function whose second derivative is negative will be concave down (sometimes simply called concave), and its tangent line will lie above the graph of the function near the point of contact.

If the second derivative of a function changes sign, the graph of the function will switch from concave down to concave up, or vice versa. A point where this occurs is called an **inflection point**. Assuming the second derivative is continuous, it must take a value of zero at any inflection point, although not every point where the second derivative is zero is necessarily a point of inflection.
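The caveat in the last sentence can be illustrated with a small sketch (example mine, not from the article): $f(x) = x^3$ has $f''(x) = 6x$, which changes sign at $x = 0$, so the graph has an inflection point there; by contrast, $g(x) = x^4$ has $g''(x) = 12x^2$, which is zero at $x = 0$ but does not change sign, so $x = 0$ is not an inflection point of $g$.

```python
f2 = lambda x: 6 * x        # second derivative of x**3
g2 = lambda x: 12 * x**2    # second derivative of x**4

# f'' changes sign across 0 -> inflection point of x**3.
print(f2(-0.1) < 0 < f2(0.1))          # True
# g'' is nonnegative on both sides of 0 -> no inflection point of x**4.
print(g2(-0.1) > 0 and g2(0.1) > 0)    # True
```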

The relation between the second derivative and the graph can be used to test whether a stationary point for a function (i.e., a point where $f'(x) = 0$) is a local maximum or a local minimum. Specifically,

- If $f''(x) < 0$, then $f$ has a local maximum at $x$.
- If $f''(x) > 0$, then $f$ has a local minimum at $x$.
- If $f''(x) = 0$, the second derivative test says nothing about the point $x$, a possible inflection point.
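An illustrative application of the test (my own example, not from the article): for $f(x) = x^3 - 3x$, the derivative $f'(x) = 3x^2 - 3$ vanishes at $x = -1$ and $x = 1$, and $f''(x) = 6x$ classifies each stationary point.

```python
def f_second(x):
    return 6 * x  # f''(x) for f(x) = x**3 - 3*x

for x in (-1.0, 1.0):
    if f_second(x) < 0:
        kind = "local maximum"
    elif f_second(x) > 0:
        kind = "local minimum"
    else:
        kind = "test is inconclusive"
    print(f"x = {x}: {kind}")
# x = -1.0: local maximum
# x = 1.0: local minimum
```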

The reason the second derivative produces these results can be seen by way of a real-world analogy. Consider a vehicle that at first is moving forward at a great velocity, but with a negative acceleration. Clearly, the position of the vehicle at the point where the velocity reaches zero will be the maximum distance from the starting position – after this time, the velocity will become negative and the vehicle will reverse. The same is true for the minimum, with a vehicle that at first has a very negative velocity but positive acceleration.

It is possible to write a single limit for the second derivative:

$$f''(x) = \lim_{h \to 0} \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}.$$

The limit is called the second symmetric derivative.^{ [3] }^{ [4] } The second symmetric derivative may exist even when the (usual) second derivative does not.

The expression on the right can be written as a difference quotient of difference quotients:

$$\frac{f(x+h) - 2f(x) + f(x-h)}{h^2} = \frac{\dfrac{f(x+h) - f(x)}{h} - \dfrac{f(x) - f(x-h)}{h}}{h}.$$

This limit can be viewed as a continuous version of the second difference for sequences.
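The identity between the symmetric second difference quotient and the nested first difference quotients can be checked numerically. A sketch (function and sample point are my own choices) for $f(x) = x^4$ at $x = 1$, where $f''(1) = 12$:

```python
import math

f = lambda x: x**4
x, h = 1.0, 1e-3

# Symmetric second difference quotient, computed directly.
direct = (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# The same quantity as a difference quotient of difference quotients.
forward = (f(x + h) - f(x)) / h
backward = (f(x) - f(x - h)) / h
nested = (forward - backward) / h

print(math.isclose(direct, nested, rel_tol=1e-6))  # True
print(abs(direct - 12.0) < 0.01)                   # True: approximates f''(1)
```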

However, the existence of the above limit does not mean that the function has a second derivative. The limit above merely provides a possibility for calculating the second derivative; it does not provide a definition. A counterexample is the sign function $\operatorname{sgn}(x)$, which is defined as:

$$\operatorname{sgn}(x) = \begin{cases} -1 & \text{if } x < 0, \\ 0 & \text{if } x = 0, \\ 1 & \text{if } x > 0. \end{cases}$$

The sign function is not continuous at zero, and therefore the second derivative for $x = 0$ does not exist. But the above limit exists for $x = 0$:

$$\lim_{h \to 0} \frac{\operatorname{sgn}(0+h) - 2\operatorname{sgn}(0) + \operatorname{sgn}(0-h)}{h^2} = \lim_{h \to 0} \frac{\operatorname{sgn}(h) + \operatorname{sgn}(-h)}{h^2} = \lim_{h \to 0} \frac{0}{h^2} = 0.$$
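The counterexample can be checked directly: for the sign function, the symmetric second difference quotient at 0 is identically zero for every nonzero $h$, so its limit exists (and equals 0) even though sgn is not even continuous there. A minimal sketch:

```python
def sgn(x):
    """Sign function: returns -1, 0, or 1."""
    return (x > 0) - (x < 0)

for h in (1.0, 0.1, 0.001, 1e-8):
    # sgn(h) + sgn(-h) cancels to 0 for every h != 0.
    quotient = (sgn(0 + h) - 2 * sgn(0) + sgn(0 - h)) / h**2
    print(h, quotient)  # quotient is 0.0 for every h
```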

Just as the first derivative is related to linear approximations, the second derivative is related to the best quadratic approximation for a function *f*. This is the quadratic function whose first and second derivatives are the same as those of *f* at a given point. The formula for the best quadratic approximation to a function *f* around the point *x* = *a* is

$$f(x) \approx f(a) + f'(a)(x-a) + \tfrac{1}{2}f''(a)(x-a)^2.$$

This quadratic approximation is the second-order Taylor polynomial for the function centered at *x* = *a*.
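A worked sketch (the example function is my own, not the article's): the best quadratic approximation to $f = \cos$ around $a = 0$ is $Q(x) = 1 - \tfrac{1}{2}x^2$, since $f(0) = 1$, $f'(0) = 0$, and $f''(0) = -1$. The approximation error near 0 is of order $x^4$.

```python
import math

def quadratic_approx(x, a=0.0):
    """Second-order Taylor polynomial of cos centered at a."""
    return math.cos(a) - math.sin(a) * (x - a) - 0.5 * math.cos(a) * (x - a)**2

x = 0.1
print(abs(math.cos(x) - quadratic_approx(x)) < 1e-5)  # True: error ~ x**4 / 24
```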

For many combinations of boundary conditions explicit formulas for eigenvalues and eigenvectors of the second derivative can be obtained. For example, assuming $x \in [0, L]$ and homogeneous Dirichlet boundary conditions (i.e., $v(0) = v(L) = 0$, where *v* is the eigenvector), the eigenvalues are $\lambda_j = -\tfrac{j^2 \pi^2}{L^2}$ and the corresponding eigenvectors (also called eigenfunctions) are $v_j(x) = \sqrt{2/L}\,\sin\!\left(\tfrac{j \pi x}{L}\right)$. Here, $v''_j(x) = \lambda_j v_j(x)$, for $j = 1, \ldots, \infty$.

For other well-known cases, see Eigenvalues and eigenvectors of the second derivative.
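A numeric check of the Dirichlet case (a sketch with an arbitrary choice of $L$, $j$, and sample point): the eigenfunction $v_j(x) = \sin(j\pi x/L)$ should satisfy $v'' = -(j\pi/L)^2\, v$.

```python
import math

L, j = 2.0, 3
lam = -(j * math.pi / L) ** 2  # eigenvalue -j^2 * pi^2 / L^2

def v(x):
    return math.sin(j * math.pi * x / L)  # eigenfunction (normalization omitted)

x, h = 0.7, 1e-5
second = (v(x + h) - 2 * v(x) + v(x - h)) / h**2  # central-difference v''(x)
print(abs(second - lam * v(x)) < 1e-3)  # True
```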

The second derivative generalizes to higher dimensions through the notion of second partial derivatives. For a function *f*: **R**^{3} → **R**, these include the three second-order partials

$$\frac{\partial^2 f}{\partial x^2}, \quad \frac{\partial^2 f}{\partial y^2}, \quad \frac{\partial^2 f}{\partial z^2},$$

and the mixed partials

$$\frac{\partial^2 f}{\partial x \, \partial y}, \quad \frac{\partial^2 f}{\partial x \, \partial z}, \quad \frac{\partial^2 f}{\partial y \, \partial z}.$$

If the function's image and domain both have a potential, then these fit together into a symmetric matrix known as the **Hessian**. The eigenvalues of this matrix can be used to implement a multivariable analogue of the second derivative test. (See also the second partial derivative test.)
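An illustrative sketch (example mine, not from the article): approximate the Hessian of $f(x, y) = x^2 + 3xy + y^2$ at the origin by finite differences. The eigenvalues of the resulting $2 \times 2$ matrix have mixed signs (5 and -1), so the critical point at the origin is a saddle by the multivariable second derivative test.

```python
import math

def f(x, y):
    return x**2 + 3 * x * y + y**2

h = 1e-4
# Finite-difference entries of the Hessian [[fxx, fxy], [fxy, fyy]] at (0, 0).
fxx = (f(h, 0) - 2 * f(0, 0) + f(-h, 0)) / h**2
fyy = (f(0, h) - 2 * f(0, 0) + f(0, -h)) / h**2
fxy = (f(h, h) - f(h, -h) - f(-h, h) + f(-h, -h)) / (4 * h**2)

# Eigenvalues of a symmetric 2x2 matrix, in closed form.
mean = (fxx + fyy) / 2
disc = math.sqrt(((fxx - fyy) / 2) ** 2 + fxy**2)
print(round(mean + disc), round(mean - disc))  # 5 -1
```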

Another common generalization of the second derivative is the **Laplacian**. This is the differential operator $\nabla^2$ (or $\Delta$) defined by

$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}.$$

The Laplacian of a function is equal to the divergence of the gradient, and the trace of the Hessian matrix.
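A numeric sketch in two variables (example mine): the Laplacian of $f(x, y) = x^2 + y^2$ is $f_{xx} + f_{yy} = 2 + 2 = 4$ at every point, which is also the trace of its Hessian.

```python
def f(x, y):
    return x**2 + y**2

def laplacian(f, x, y, h=1e-4):
    """Finite-difference Laplacian: sum of the pure second partials."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    return fxx + fyy

print(abs(laplacian(f, 1.0, 2.0) - 4.0) < 1e-5)  # True
```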

- Chirpyness, second derivative of instantaneous phase
- Finite difference, used to approximate second derivative
- Second partial derivative test
- Symmetry of second derivatives

1. "Content - The second derivative". *amsi.org.au*. Retrieved 2020-09-16.
2. "Second Derivatives". *Math24*. Retrieved 2020-09-16.
3. Zygmund, A. (2002). *Trigonometric Series*. Cambridge University Press. pp. 22–23. ISBN 978-0-521-89053-3.
4. Thomson, Brian S. (1994). *Symmetric Properties of Real Functions*. Marcel Dekker. p. 1. ISBN 0-8247-9230-0.

- Anton, Howard; Bivens, Irl; Davis, Stephen (February 2, 2005), *Calculus: Early Transcendentals Single and Multivariable* (8th ed.), New York: Wiley, ISBN 978-0-471-47244-5
- Apostol, Tom M. (June 1967), *Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra* (2nd ed.), Wiley, ISBN 978-0-471-00005-1
- Apostol, Tom M. (June 1969), *Calculus, Vol. 2: Multi-Variable Calculus and Linear Algebra with Applications* (2nd ed.), Wiley, ISBN 978-0-471-00007-5
- Eves, Howard (January 2, 1990), *An Introduction to the History of Mathematics* (6th ed.), Brooks Cole, ISBN 978-0-03-029558-4
- Larson, Ron; Hostetler, Robert P.; Edwards, Bruce H. (February 28, 2006), *Calculus: Early Transcendental Functions* (4th ed.), Houghton Mifflin Company, ISBN 978-0-618-60624-5
- Spivak, Michael (September 1994), *Calculus* (3rd ed.), Publish or Perish, ISBN 978-0-914098-89-8
- Stewart, James (December 24, 2002), *Calculus* (5th ed.), Brooks Cole, ISBN 978-0-534-39339-7
- Thompson, Silvanus P. (September 8, 1998), *Calculus Made Easy* (Revised, Updated, Expanded ed.), New York: St. Martin's Press, ISBN 978-0-312-18548-0

- Crowell, Benjamin (2003), *Calculus*
- Garrett, Paul (2004), *Notes on First-Year Calculus*
- Hussain, Faraz (2006), *Understanding Calculus*
- Keisler, H. Jerome (2000), *Elementary Calculus: An Approach Using Infinitesimals*
- Mauch, Sean (2004), *Unabridged Version of Sean's Applied Math Book*, archived from the original on 2006-04-15
- Sloughter, Dan (2000), *Difference Equations to Differential Equations*
- Strang, Gilbert (1991), *Calculus*
- Stroyan, Keith D. (1997), *A Brief Introduction to Infinitesimal Calculus*, archived from the original on 2005-09-11
- Wikibooks, *Calculus*

This page is based on a Wikipedia article.

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
