In mathematics, the mean value theorem states, roughly, that for a given planar arc between two endpoints, there is at least one point at which the tangent to the arc is parallel to the secant through its endpoints. It is one of the most important results in real analysis. This theorem is used to prove statements about a function on an interval starting from local hypotheses about derivatives at points of the interval.
More precisely, the theorem states that if $f$ is a continuous function on the closed interval $[a, b]$ and differentiable on the open interval $(a, b)$, then there exists a point $c$ in $(a, b)$ such that the tangent at $c$ is parallel to the secant line through the endpoints $(a, f(a))$ and $(b, f(b))$, that is,
$f'(c) = \frac{f(b) - f(a)}{b - a}.$
A special case of this theorem was first described by Parameshvara (1370–1460), from the Kerala School of Astronomy and Mathematics in India, in his commentaries on Govindasvāmi and Bhāskara II. A restricted form of the theorem was proved by Michel Rolle in 1691; the result was what is now known as Rolle's theorem, and was proved only for polynomials, without the techniques of calculus. The mean value theorem in its modern form was stated and proved by Augustin Louis Cauchy in 1823. Many variations of this theorem have been proved since then.
Let $f : [a, b] \to \mathbb{R}$ be a continuous function on the closed interval $[a, b]$, and differentiable on the open interval $(a, b)$, where $a < b$. Then there exists some $c$ in $(a, b)$ such that
$f'(c) = \frac{f(b) - f(a)}{b - a}.$
The mean value theorem is a generalization of Rolle's theorem, which assumes $f(a) = f(b)$, so that the right-hand side above is zero.
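The statement can be illustrated numerically. The following is a minimal sketch, not part of the theorem itself: the function $f(x) = x^3$ on $[0, 2]$ and the bisection search are our own illustrative choices.

```python
# Numerical illustration of the mean value theorem (an illustrative sketch):
# for f(x) = x**3 on [a, b] = [0, 2], find a point c in (a, b) with
# f'(c) equal to the secant slope (f(b) - f(a)) / (b - a).

def f(x):
    return x ** 3

def f_prime(x):
    return 3 * x ** 2

a, b = 0.0, 2.0
secant_slope = (f(b) - f(a)) / (b - a)  # (8 - 0) / 2 = 4

# f'(x) - secant_slope changes sign on (a, b) and f' is increasing here,
# so bisection locates c.
lo, hi = a, b
for _ in range(100):
    mid = (lo + hi) / 2
    if f_prime(mid) < secant_slope:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2

print(c)  # about 1.1547, i.e. 2/sqrt(3), which indeed lies in (0, 2)
```

Since $f'$ happens to be monotone in this example, bisection is guaranteed to find the point; in general $c$ need not be unique.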
The mean value theorem is still valid in a slightly more general setting. One only needs to assume that $f : [a, b] \to \mathbb{R}$ is continuous on $[a, b]$, and that for every $x$ in $(a, b)$ the limit
$\lim_{h \to 0} \frac{f(x + h) - f(x)}{h}$
exists as a finite number or equals $\infty$ or $-\infty$. If finite, that limit equals $f'(x)$. An example where this version of the theorem applies is given by the real-valued cube root function mapping $x \mapsto x^{1/3}$, whose derivative tends to infinity at the origin.
Note that the theorem, as stated, is false if a differentiable function is complex-valued instead of real-valued. For example, define $f(x) = e^{ix}$ for all real $x$. Then
$f(2\pi) - f(0) = 0 = 0 \cdot (2\pi - 0),$
while $f'(x) \neq 0$ for any real $x$.
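This failure is easy to see numerically. A sketch (the sample points at which we evaluate the derivative are arbitrary choices of ours):

```python
# Sketch of why the mean value theorem fails for complex-valued functions:
# f(x) = exp(i*x) satisfies f(2*pi) - f(0) = 0, yet f'(x) = i*exp(i*x) has
# modulus 1 everywhere, so f'(c) * (2*pi - 0) can never equal 0.
import cmath
import math

def f(x):
    return cmath.exp(1j * x)

def f_prime(x):
    return 1j * cmath.exp(1j * x)

endpoints_diff = f(2 * math.pi) - f(0)
print(abs(endpoints_diff))  # ~0: the "secant slope" is 0

# The derivative never vanishes: |f'(x)| = 1 for every real x.
moduli = [abs(f_prime(x)) for x in [0.0, 1.0, 2.5, math.pi, 5.0]]
print(moduli)  # all 1.0 up to rounding
```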
These formal statements are also known as Lagrange's Mean Value Theorem.
The expression $\frac{f(b) - f(a)}{b - a}$ gives the slope of the line joining the points $(a, f(a))$ and $(b, f(b))$, which is a chord of the graph of $f$, while $f'(x)$ gives the slope of the tangent to the curve at the point $(x, f(x))$. Thus the mean value theorem says that given any chord of a smooth curve, we can find a point on the curve lying between the end-points of the chord such that the tangent of the curve at that point is parallel to the chord. The following proof illustrates this idea.
Define $g(x) = f(x) - rx$, where $r$ is a constant. Since $f$ is continuous on $[a, b]$ and differentiable on $(a, b)$, the same is true for $g$. We now want to choose $r$ so that $g$ satisfies the conditions of Rolle's theorem. Namely
$g(a) = g(b) \iff f(a) - ra = f(b) - rb \iff r(b - a) = f(b) - f(a) \iff r = \frac{f(b) - f(a)}{b - a}.$
By Rolle's theorem, since $g$ is differentiable and $g(a) = g(b)$, there is some $c$ in $(a, b)$ for which $g'(c) = 0$, and it follows from the equality $g(x) = f(x) - rx$ that
$f'(c) = g'(c) + r = r = \frac{f(b) - f(a)}{b - a}.$
Theorem 1: Assume that $f$ is a continuous, real-valued function defined on an interval $I$ of the real line. If the derivative of $f$ at every interior point of $I$ exists and is zero, then $f$ is constant on $I$.
Proof: Assume the derivative of $f$ at every interior point of the interval $I$ exists and is zero. Let $(a, b)$ be an arbitrary open interval in $I$. By the mean value theorem, there exists a point $c$ in $(a, b)$ such that
$0 = f'(c) = \frac{f(b) - f(a)}{b - a}.$
This implies that $f(a) = f(b)$. Thus, $f$ is constant on the interior of $I$ and thus is constant on $I$ by continuity. (See below for a multivariable version of this result.)
Theorem 2: If $f'(x) = g'(x)$ for all $x$ in an interval $(a, b)$, then $f - g$ is constant, i.e. $f = g + c$ for some constant $c$ on $(a, b)$.
Proof: Let $F = f - g$; then $F' = f' - g' = 0$ on the interval $(a, b)$, so the above theorem 1 tells us that $F = f - g$ is a constant $c$, or $f = g + c$.
Theorem 3: If $F$ is an antiderivative of $f$ on an interval $I$, then the most general antiderivative of $f$ on $I$ is $F(x) + c$, where $c$ is a constant.
Proof: It is directly derived from the above theorem 2.
Cauchy's mean value theorem, also known as the extended mean value theorem, is a generalization of the mean value theorem. It states: if the functions $f$ and $g$ are both continuous on the closed interval $[a, b]$ and differentiable on the open interval $(a, b)$, then there exists some $c \in (a, b)$, such that
$(f(b) - f(a))\,g'(c) = (g(b) - g(a))\,f'(c).$
Of course, if $g(a) \neq g(b)$ and $g'(c) \neq 0$, this is equivalent to:
$\frac{f'(c)}{g'(c)} = \frac{f(b) - f(a)}{g(b) - g(a)}.$
Geometrically, this means that there is some tangent to the graph of the curve
$\{(f(t), g(t)) : t \in [a, b]\},$
which is parallel to the line defined by the points $(f(a), g(a))$ and $(f(b), g(b))$. However, Cauchy's theorem does not claim the existence of such a tangent in all cases where $(f(a), g(a))$ and $(f(b), g(b))$ are distinct points, since it might be satisfied only for some value $c$ with $f'(c) = g'(c) = 0$, in other words a value for which the mentioned curve is stationary; in such points no tangent to the curve is likely to be defined at all. An example of this situation is the curve given by
$t \mapsto (t^3, 1 - t^2),$
which on the interval $[-1, 1]$ goes from the point $(-1, 0)$ to $(1, 0)$, yet never has a horizontal tangent; however it has a stationary point (in fact a cusp) at $t = 0$.
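The cusp example can be checked directly. A sketch (the grid of sample parameters is an arbitrary choice of ours):

```python
# Numeric look at the cusp example: the curve t -> (t**3, 1 - t**2) on [-1, 1]
# runs from (-1, 0) to (1, 0), so Cauchy's relation would need a horizontal
# tangent, but the component derivatives (3*t**2, -2*t) vanish simultaneously
# only at t = 0, where the curve has a cusp and no tangent at all.

def curve(t):
    return (t ** 3, 1 - t ** 2)

print(curve(-1.0), curve(1.0))  # (-1.0, 0.0) and (1.0, 0.0)

# Away from t = 0 the tangent slope dy/dx = -2t / (3t**2) is never 0.
ts = [t / 100 for t in range(-100, 101) if t != 0]
slopes = [(-2 * t) / (3 * t ** 2) for t in ts]
print(any(abs(s) < 1e-12 for s in slopes))  # False: no horizontal tangent
```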
Cauchy's mean value theorem can be used to prove L'Hôpital's rule. The mean value theorem is the special case of Cauchy's mean value theorem when $g(t) = t$.
The proof of Cauchy's mean value theorem is based on the same idea as the proof of the mean value theorem.
Define $h(x) = f(x)\,(g(b) - g(a)) - g(x)\,(f(b) - f(a))$, so that $h(a) = f(a)g(b) - f(b)g(a) = h(b)$. Since $f$ and $g$ are continuous on $[a, b]$ and differentiable on $(a, b)$, the same is true for $h$. All in all, $h$ satisfies the conditions of Rolle's theorem: consequently, there is some $c$ in $(a, b)$ for which $h'(c) = 0$. Now using the definition of $h$ we have:
$0 = h'(c) = f'(c)\,(g(b) - g(a)) - g'(c)\,(f(b) - f(a)),$
and therefore $(f(b) - f(a))\,g'(c) = (g(b) - g(a))\,f'(c)$.
Assume that $f$, $g$, and $h$ are differentiable functions on $(a, b)$ that are continuous on $[a, b]$. Define
$D(x) = \begin{vmatrix} f(x) & g(x) & h(x) \\ f(a) & g(a) & h(a) \\ f(b) & g(b) & h(b) \end{vmatrix}.$
There exists $c \in (a, b)$ such that $D'(c) = 0$.
Note that
$D'(x) = \begin{vmatrix} f'(x) & g'(x) & h'(x) \\ f(a) & g(a) & h(a) \\ f(b) & g(b) & h(b) \end{vmatrix},$
and if we place $h(x) = 1$, we get Cauchy's mean value theorem. If we place $h(x) = 1$ and $g(x) = x$ we get Lagrange's mean value theorem.
The proof of the generalization is quite simple: each of $D(a)$ and $D(b)$ is a determinant with two identical rows, hence $D(a) = D(b) = 0$. Rolle's theorem then implies that there exists $c \in (a, b)$ such that $D'(c) = 0$.
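The determinant form can be tried out concretely. A sketch, with sample functions $f(x) = x^2$, $g(x) = x$, $h(x) = 1$ on $[1, 3]$ chosen by us (this choice reduces the statement to Lagrange's theorem):

```python
# Sketch of the determinant form of the generalized mean value theorem:
# D(x) = det [[f(x), g(x), h(x)], [f(a), g(a), h(a)], [f(b), g(b), h(b)]]
# has D(a) = D(b) = 0 because of repeated rows, so Rolle's theorem applies.

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix.
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = m
    return (a1 * (b2 * c3 - b3 * c2)
            - a2 * (b1 * c3 - b3 * c1)
            + a3 * (b1 * c2 - b2 * c1))

f = lambda x: x ** 2
g = lambda x: x
h = lambda x: 1.0

a, b = 1.0, 3.0

def D(x):
    return det3([[f(x), g(x), h(x)],
                 [f(a), g(a), h(a)],
                 [f(b), g(b), h(b)]])

print(D(a), D(b))  # both 0.0: each has two identical rows

# Here D(x) works out to -2*x**2 + 8*x - 6, so D'(c) = 0 at c = 2 --
# consistent with Lagrange's theorem for f(x) = x**2 on [1, 3].
h_step = 1e-6
dprime_at_2 = (D(2 + h_step) - D(2 - h_step)) / (2 * h_step)
print(abs(dprime_at_2) < 1e-6)  # True
```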
The mean value theorem generalizes to real functions of multiple variables. The trick is to use parametrization to create a real function of one variable, and then apply the one-variable theorem.
Let $G$ be an open convex subset of $\mathbb{R}^n$, and let $f : G \to \mathbb{R}$ be a differentiable function. Fix points $x, y \in G$, and define $g(t) = f\big((1 - t)x + ty\big)$. Since $g$ is a differentiable function in one variable, the mean value theorem gives:
$g(1) - g(0) = g'(c)$
for some $c$ between 0 and 1. But since $g(1) = f(y)$ and $g(0) = f(x)$, computing $g'(c)$ explicitly we have:
$f(y) - f(x) = \nabla f\big((1 - c)x + cy\big) \cdot (y - x),$
where $\nabla$ denotes a gradient and $\cdot$ a dot product. Note that this is an exact analog of the theorem in one variable (in the case $n = 1$ this is the theorem in one variable). By the Cauchy–Schwarz inequality, the equation gives the estimate:
$|f(y) - f(x)| \le \big|\nabla f\big((1 - c)x + cy\big)\big|\,|y - x|.$
In particular, when the partial derivatives of $f$ are bounded, $f$ is Lipschitz continuous (and therefore uniformly continuous).
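The parametrization trick can be demonstrated numerically. A sketch, in which the function $f(x_1, x_2) = x_1^2 + 3x_2$ and the segment from $(0,0)$ to $(1,1)$ are sample choices of ours:

```python
# Sketch of the multivariable mean value theorem: there is some t in (0, 1)
# with f(y) - f(x) = grad_f(x + t*(y - x)) . (y - x), found here by bisection
# because the directional derivative happens to be monotone in t.

def f(p):
    return p[0] ** 2 + 3 * p[1]

def grad_f(p):
    return (2 * p[0], 3.0)

x, y = (0.0, 0.0), (1.0, 1.0)
h = (y[0] - x[0], y[1] - x[1])
target = f(y) - f(x)  # 4.0

def directional(t):
    p = (x[0] + t * h[0], x[1] + t * h[1])
    g = grad_f(p)
    return g[0] * h[0] + g[1] * h[1]  # equals 2*t + 3, increasing in t

lo, hi = 0.0, 1.0
for _ in range(80):
    mid = (lo + hi) / 2
    if directional(mid) < target:
        lo = mid
    else:
        hi = mid
t_star = (lo + hi) / 2
print(t_star)  # 0.5: directional(0.5) = 4 = f(y) - f(x)
```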
As an application of the above, we prove that $f$ is constant if $G$ is open and connected and every partial derivative of $f$ is 0. Pick some point $x_0 \in G$, and let $g(x) = f(x) - f(x_0)$. We want to show $g(x) = 0$ for every $x \in G$. For that, let $E = \{x \in G : g(x) = 0\}$. Then $E$ is closed and nonempty. It is open too: for every $x \in E$, the estimate above (with the gradient identically zero) gives $|g(y)| = |g(y) - g(x)| = 0$, hence $g(y) = 0$,
for every $y$ in some convex neighborhood of $x$ contained in $G$. (Here, it is crucial that $x$ and $y$ are sufficiently close to each other.) Since $G$ is connected, we conclude $E = G$.
The above arguments are made in a coordinate-free manner; hence, they generalize to the case when $G$ is a subset of a Banach space.
There is no exact analog of the mean value theorem for vector-valued functions.
In Principles of Mathematical Analysis, Rudin gives an inequality which can be applied to many of the same situations to which the mean value theorem is applicable in the one dimensional case:
Theorem — For a continuous vector-valued function $\mathbf{f} : [a, b] \to \mathbb{R}^k$ differentiable on $(a, b)$, there exists $x \in (a, b)$ such that $|\mathbf{f}(b) - \mathbf{f}(a)| \le (b - a)\,|\mathbf{f}'(x)|$.
Jean Dieudonné, in his classic treatise Foundations of Modern Analysis, discards the mean value theorem and replaces it by the mean inequality, since the proof is not constructive, one cannot find the mean value, and in applications one only needs the mean inequality. Serge Lang in Analysis I uses the mean value theorem, in integral form, as an instant reflex, but this use requires the continuity of the derivative. If one uses the Henstock–Kurzweil integral, one can have the mean value theorem in integral form without the additional assumption that the derivative be continuous, as every derivative is Henstock–Kurzweil integrable. The problem is roughly speaking the following: If f : U → Rm is a differentiable function (where U ⊂ Rn is open) and if x + th, x, h ∈ Rn, t ∈ [0, 1] is the line segment in question (lying inside U), then one can apply the above parametrization procedure to each of the component functions fi (i = 1, …, m) of f (in the above notation set y = x + h). In doing so one finds points x + tih on the line segment satisfying
$f_i(x + h) - f_i(x) = \nabla f_i(x + t_i h) \cdot h.$
But generally there will not be a single point x + t*h on the line segment satisfying
$f_i(x + h) - f_i(x) = \nabla f_i(x + t^* h) \cdot h$
for all i simultaneously. For example, define:
$f : [0, 2\pi] \to \mathbb{R}^2, \qquad f(x) = (\cos x, \sin x).$
Then $f(2\pi) - f(0) = \mathbf{0} \in \mathbb{R}^2$, but $f_1'(x) = -\sin x$ and $f_2'(x) = \cos x$ are never simultaneously zero as $x$ ranges over $[0, 2\pi]$.
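This counterexample can be observed numerically. A sketch (the handful of sample points is an arbitrary choice of ours):

```python
# Sketch of the failure for vector-valued functions: f(x) = (cos x, sin x)
# returns to its starting point over [0, 2*pi], but its derivative
# f'(x) = (-sin x, cos x) has norm 1 everywhere, so no single point c gives
# f(2*pi) - f(0) = f'(c) * (2*pi - 0).
import math

def f(x):
    return (math.cos(x), math.sin(x))

def f_prime(x):
    return (-math.sin(x), math.cos(x))

diff = (f(2 * math.pi)[0] - f(0)[0], f(2 * math.pi)[1] - f(0)[1])
print(diff)  # approximately (0.0, 0.0)

norms = [math.hypot(*f_prime(x)) for x in [0.0, 1.0, 2.0, math.pi, 5.0]]
print(norms)  # each 1.0 up to rounding: the derivative never vanishes
```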
However, a certain type of generalization of the mean value theorem to vector-valued functions is obtained as follows: Let f be a continuously differentiable real-valued function defined on an open interval I, and let x as well as x + h be points of I. The mean value theorem in one variable tells us that there exists some t* between 0 and 1 such that
$f(x + h) - f(x) = f'(x + t^* h) \cdot h.$
On the other hand, we have, by the fundamental theorem of calculus followed by a change of variables,
$f(x + h) - f(x) = \int_x^{x+h} f'(u)\,du = \left( \int_0^1 f'(x + th)\,dt \right) h.$
Thus, the value $f'(x + t^* h)$ at the particular point $t^*$ has been replaced by the mean value
$\int_0^1 f'(x + th)\,dt.$
This last version can be generalized to vector valued functions:
Lemma 1 — Let U ⊂ Rn be open, f : U → Rm continuously differentiable, and x ∈ U, h ∈ Rn vectors such that the line segment x + th, 0 ≤ t ≤ 1 remains in U. Then we have:
$f(x + h) - f(x) = \left( \int_0^1 Df(x + th)\,dt \right) \cdot h,$
where Df denotes the Jacobian matrix of f and the integral of a matrix is to be understood componentwise.
Proof. Let f1, …, fm denote the components of f and define:
$g_i : [0, 1] \to \mathbb{R}, \qquad g_i(t) = f_i(x + th).$
Then we have
$f_i(x + h) - f_i(x) = g_i(1) - g_i(0) = \int_0^1 g_i'(t)\,dt = \int_0^1 \left( \sum_{j=1}^n \frac{\partial f_i}{\partial x_j}(x + th)\,h_j \right) dt = \sum_{j=1}^n \left( \int_0^1 \frac{\partial f_i}{\partial x_j}(x + th)\,dt \right) h_j.$
The claim follows since $Df$ is the matrix consisting of the components $\frac{\partial f_i}{\partial x_j}$.
Lemma 2 — Let v : [a, b] → Rm be a continuous function defined on the interval [a, b] ⊂ R. Then we have
$\left\| \int_a^b v(t)\,dt \right\| \le \int_a^b \| v(t) \|\,dt.$
Proof. Let $u \in \mathbb{R}^m$ denote the value of the integral
$u := \int_a^b v(t)\,dt.$
If $u = 0$ the inequality is trivial, so assume $u \neq 0$. Now we have (using the Cauchy–Schwarz inequality):
$\| u \|^2 = \langle u, u \rangle = \left\langle \int_a^b v(t)\,dt,\; u \right\rangle = \int_a^b \langle v(t), u \rangle\,dt \le \int_a^b \| v(t) \|\,\| u \|\,dt = \| u \| \int_a^b \| v(t) \|\,dt.$
Now cancelling the norm of $u$ from both ends gives us the desired inequality.
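Lemma 2 can be checked numerically with Riemann sums. A sketch, with the integrand $v(t) = (\cos t, \sin t)$ on $[0, \pi]$ as our sample choice:

```python
# Numeric check of Lemma 2: for a continuous v : [a, b] -> R^2, the norm of
# the integral is at most the integral of the norm. Here the exact values are
# |(0, 2)| = 2 on the left and pi on the right, and indeed 2 <= pi.
import math

def v(t):
    return (math.cos(t), math.sin(t))

a, b, n = 0.0, math.pi, 100000
dt = (b - a) / n

integral = [0.0, 0.0]
integral_of_norm = 0.0
for k in range(n):
    t = a + (k + 0.5) * dt  # midpoint rule
    vt = v(t)
    integral[0] += vt[0] * dt
    integral[1] += vt[1] * dt
    integral_of_norm += math.hypot(*vt) * dt

norm_of_integral = math.hypot(*integral)
print(norm_of_integral)  # ~2
print(integral_of_norm)  # ~pi, so the inequality holds strictly here
```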
Mean Value Inequality — If the norm of $Df(x + th)$ is bounded by some constant $M$ for $t$ in $[0, 1]$, then
$\| f(x + h) - f(x) \| \le M \| h \|.$
Proof. From Lemmas 1 and 2 it follows that
$\| f(x + h) - f(x) \| = \left\| \int_0^1 Df(x + th)\,dt \cdot h \right\| \le \int_0^1 \| Df(x + th) \| \cdot \| h \|\,dt \le M \| h \|.$
Let f : [a, b] → R be a continuous function. Then there exists c in [a, b] such that
$f(c) = \frac{1}{b - a} \int_a^b f(x)\,dx.$
Since the mean value of f on [a, b] is defined as
$\frac{1}{b - a} \int_a^b f(x)\,dx,$
we can interpret the conclusion as: f achieves its mean value at some c in (a, b).
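This version is also easy to illustrate numerically. A sketch, with $f(x) = x^2$ on $[0, 1]$ as our sample choice (its mean value is $1/3$, attained at $c = 1/\sqrt{3}$):

```python
# Sketch of the first mean value theorem for integrals: f attains its mean
# value at some interior point c. The Riemann sum and bisection below are
# illustrative numerics, not part of the theorem.

def f(x):
    return x ** 2

a, b, n = 0.0, 1.0, 100000
dx = (b - a) / n
mean_value = sum(f(a + (k + 0.5) * dx) for k in range(n)) * dx / (b - a)
print(mean_value)  # ~1/3

# f is increasing here, so bisection locates c with f(c) = mean value.
lo, hi = a, b
for _ in range(80):
    mid = (lo + hi) / 2
    if f(mid) < mean_value:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2
print(c)  # ~0.5774, i.e. 1/sqrt(3), inside (0, 1)
```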
In general, if f : [a, b] → R is continuous and g is an integrable function that does not change sign on [a, b], then there exists c in (a, b) such that
$\int_a^b f(x) g(x)\,dx = f(c) \int_a^b g(x)\,dx.$
Suppose f : [a, b] → R is continuous and g is a nonnegative integrable function on [a, b]. By the extreme value theorem, there exist $m$ and $M$ such that for each $x$ in $[a, b]$, $m \le f(x) \le M$ and $f[a, b] = [m, M]$. Since $g$ is nonnegative,
$m \int_a^b g(x)\,dx \le \int_a^b f(x) g(x)\,dx \le M \int_a^b g(x)\,dx.$
Now let
$I = \int_a^b g(x)\,dx.$
If $I = 0$, we're done since
$0 \le \int_a^b f(x) g(x)\,dx \le 0$
means $\int_a^b f(x) g(x)\,dx = 0$, so for any $c$ in $(a, b)$,
$\int_a^b f(x) g(x)\,dx = f(c) I = 0.$
If I ≠ 0, then
$m \le \frac{1}{I} \int_a^b f(x) g(x)\,dx \le M.$
By the intermediate value theorem, f attains every value of the interval [m, M], so for some c in [a, b],
$f(c) = \frac{1}{I} \int_a^b f(x) g(x)\,dx, \qquad \text{that is,} \qquad \int_a^b f(x) g(x)\,dx = f(c) \int_a^b g(x)\,dx.$
Finally, if g is negative on [a, b], then $-g$ is nonnegative, and applying the above to $-g$ gives
$\int_a^b f(x)\,(-g(x))\,dx = f(c) \int_a^b (-g(x))\,dx;$
multiplying through by $-1$, we still get the same result as above.
There are various slightly different theorems called the second mean value theorem for definite integrals. A commonly found version is as follows:
If $G : [a, b] \to \mathbb{R}$ is a positive monotonically decreasing function and $\varphi : [a, b] \to \mathbb{R}$ is an integrable function, then there exists a number $x$ in $(a, b]$ such that
$\int_a^b G(t)\varphi(t)\,dt = G(a^+) \int_a^x \varphi(t)\,dt.$
Here $G(a^+)$ stands for $\lim_{t \to a^+} G(t)$, the existence of which follows from the conditions. Note that it is essential that the interval (a, b] contains b. A variant not having this requirement is:
If $G : [a, b] \to \mathbb{R}$ is a monotonic (not necessarily decreasing and positive) function and $\varphi : [a, b] \to \mathbb{R}$ is an integrable function, then there exists a number $x$ in $(a, b)$ such that
$\int_a^b G(t)\varphi(t)\,dt = G(a^+) \int_a^x \varphi(t)\,dt + G(b^-) \int_x^b \varphi(t)\,dt.$
If the function $G$ returns a multi-dimensional vector, then the MVT for integration is not true, even if the domain of $G$ is also multi-dimensional.
For example, consider the following 2-dimensional function defined on an $n$-dimensional cube:
$G : [0, 2\pi]^n \to \mathbb{R}^2, \qquad G(x_1, \dots, x_n) = \big( \sin(x_1 + \cdots + x_n),\; \cos(x_1 + \cdots + x_n) \big).$
Then, by symmetry it is easy to see that the mean value of $G$ over its domain is $(0, 0)$:
$\int_{[0, 2\pi]^n} G(x_1, \dots, x_n)\,dx_1 \cdots dx_n = (0, 0).$
However, there is no point in which $G = (0, 0)$, because $|G| = 1$ everywhere.
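This too can be seen numerically. A sketch for the case $n = 2$ (the grid resolution is our choice):

```python
# Numeric sketch of the multidimensional failure, with n = 2:
# G(x1, x2) = (sin(x1 + x2), cos(x1 + x2)) on the square [0, 2*pi]^2 averages
# to (0, 0), yet |G| = 1 at every point, so G never equals its mean value.
import math

def G(x1, x2):
    s = x1 + x2
    return (math.sin(s), math.cos(s))

m, side = 400, 2 * math.pi
cell = side / m
total = [0.0, 0.0]
for i in range(m):
    for j in range(m):
        g = G((i + 0.5) * cell, (j + 0.5) * cell)
        total[0] += g[0]
        total[1] += g[1]
mean = (total[0] / m ** 2, total[1] / m ** 2)
print(mean)  # very close to (0.0, 0.0)
print(math.hypot(*G(1.0, 2.0)))  # 1.0: the norm is 1 everywhere
```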
Let X and Y be non-negative random variables such that E[X] < E[Y] < ∞ and $X \le_{st} Y$ (i.e. X is smaller than Y in the usual stochastic order). Then there exists an absolutely continuous non-negative random variable Z having probability density function
$f_Z(x) = \frac{\Pr(Y > x) - \Pr(X > x)}{E[Y] - E[X]}, \qquad x \ge 0.$
Let g be a measurable and differentiable function such that E[g(X)], E[g(Y)] < ∞, and let its derivative g′ be measurable and Riemann-integrable on the interval [x, y] for all y ≥ x ≥ 0. Then, E[g′(Z)] is finite and
$E[g(Y)] - E[g(X)] = E[g'(Z)]\,\big( E[Y] - E[X] \big).$
As noted above, the theorem does not hold for differentiable complex-valued functions. Instead, a generalization of the theorem is stated as follows:
Let f : Ω → C be a holomorphic function on the open convex set Ω, and let a and b be distinct points in Ω. Then there exist points u, v on Lab (the line segment from a to b) such that
$\mathrm{Re}\big( f'(u) \big) = \mathrm{Re}\!\left( \frac{f(b) - f(a)}{b - a} \right), \qquad \mathrm{Im}\big( f'(v) \big) = \mathrm{Im}\!\left( \frac{f(b) - f(a)}{b - a} \right).$
Here Re() is the real part and Im() is the imaginary part of a complex-valued function.
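The complex version can be illustrated numerically. A sketch, with the sample holomorphic function $f(z) = z^2$ on the segment from $a = 0$ to $b = 1 + i$ chosen by us; here the difference quotient is $1 + i$ and $f'(z) = 2z$:

```python
# Sketch of the complex mean value theorem: points u (and, symmetrically, v)
# on the segment match the real (and imaginary) parts of the difference
# quotient through f'(z) = 2*z.

def f_prime(z):
    return 2 * z

a, b = 0 + 0j, 1 + 1j
q = ((b ** 2) - (a ** 2)) / (b - a)  # difference quotient, equals 1+1j

def point(t):
    return a + t * (b - a)

# Re(f'(point(t))) = 2*t is increasing in t, so bisection finds u with
# Re(f'(u)) = Re(q); for this f the imaginary part works out the same way.
lo, hi = 0.0, 1.0
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    if f_prime(point(mid)).real < q.real:
        lo = mid
    else:
        hi = mid
u = point((lo + hi) / 2)
print(u)  # ~(0.5+0.5j): the midpoint of the segment in this example
```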