Symmetric derivative

In mathematics, the symmetric derivative is an operation generalizing the ordinary derivative. It is defined as [1] [2]

  f_s(x) = lim_{h→0} (f(x + h) − f(x − h)) / (2h).

The expression under the limit is sometimes called the symmetric difference quotient. [3] [4] A function is said to be symmetrically differentiable at a point x if its symmetric derivative exists at that point.

If a function is differentiable (in the usual sense) at a point, then it is also symmetrically differentiable, but the converse is not true. A well-known counterexample is the absolute value function f(x) = |x|, which is not differentiable at x = 0, but is symmetrically differentiable there with symmetric derivative 0. For differentiable functions, the symmetric difference quotient does provide a better numerical approximation of the derivative than the usual difference quotient. [3]
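Both claims can be checked numerically. The following sketch (the helper names are illustrative, not from the source) compares the error of the one-sided and symmetric quotients for sin at x = 1, and evaluates the symmetric quotient of |x| at 0:

```python
import math

def forward_quotient(f, x, h):
    # Ordinary (one-sided) difference quotient: error is O(h).
    return (f(x + h) - f(x)) / h

def symmetric_quotient(f, x, h):
    # Symmetric difference quotient: error is O(h^2) for smooth f.
    return (f(x + h) - f(x - h)) / (2 * h)

x, h = 1.0, 1e-4
exact = math.cos(x)  # exact derivative of sin at x
err_forward = abs(forward_quotient(math.sin, x, h) - exact)
err_symmetric = abs(symmetric_quotient(math.sin, x, h) - exact)
print(err_forward > err_symmetric)  # True: the symmetric quotient is more accurate

# For |x| at 0 the symmetric quotient is identically 0, matching the symmetric derivative.
print(symmetric_quotient(abs, 0.0, 1e-6))  # 0.0
```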

The symmetric derivative at a given point equals the arithmetic mean of the left and right derivatives at that point, if the latter two both exist. [1] [2]:6
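For f(x) = |x| at 0 this mean can be verified directly; a small numerical sketch (helper names are illustrative, not from the source):

```python
def left_quotient(f, x, h=1e-7):
    # Approximates the left derivative (assuming it exists).
    return (f(x) - f(x - h)) / h

def right_quotient(f, x, h=1e-7):
    # Approximates the right derivative (assuming it exists).
    return (f(x + h) - f(x)) / h

def symmetric_quotient(f, x, h=1e-7):
    return (f(x + h) - f(x - h)) / (2 * h)

left, right = left_quotient(abs, 0.0), right_quotient(abs, 0.0)
print(left, right)                   # -1.0 1.0
print((left + right) / 2)            # 0.0, the arithmetic mean ...
print(symmetric_quotient(abs, 0.0))  # 0.0 ... equals the symmetric derivative
```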

Neither Rolle's theorem nor the mean-value theorem hold for the symmetric derivative; some similar but weaker statements have been proved.

Examples

The absolute value function

Graph of the absolute value function. Note the sharp turn at x = 0, leading to non-differentiability of the curve at x = 0. The function hence possesses no ordinary derivative at x = 0. The symmetric derivative, however, exists for the function at x = 0.

For the absolute value function f(x) = |x|, using the notation f_s for the symmetric derivative, we have at x = 0 that

  f_s(0) = lim_{h→0} (|0 + h| − |0 − h|) / (2h) = lim_{h→0} (|h| − |h|) / (2h) = lim_{h→0} 0 / (2h) = 0.

Hence the symmetric derivative of the absolute value function exists at x = 0 and is equal to zero, even though its ordinary derivative does not exist at that point (due to a "sharp" turn in the curve at x = 0).

Note that in this example both the left and right derivatives at 0 exist, but they are unequal (one is −1, while the other is +1); their average is 0, as expected.

The function x⁻²

Graph of y = x⁻². Note the discontinuity at x = 0. The function hence possesses no ordinary derivative at x = 0. The symmetric derivative, however, exists for the function at x = 0.

For the function f(x) = x⁻², at x = 0 we have

  f_s(0) = lim_{h→0} ((0 + h)⁻² − (0 − h)⁻²) / (2h) = lim_{h→0} (h⁻² − h⁻²) / (2h) = lim_{h→0} 0 / (2h) = 0.

Again, for this function the symmetric derivative exists at x = 0, while its ordinary derivative does not exist at x = 0 due to the discontinuity in the curve there. Furthermore, neither the left nor the right derivative is finite at 0; i.e. this is an essential discontinuity.
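The cancellation is exact, since (0 + h)⁻² and (0 − h)⁻² are equal for every h ≠ 0; a quick numerical check (illustrative sketch):

```python
def symmetric_quotient(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda t: t ** -2  # x^(-2); the symmetric quotient never evaluates it at 0
for h in (1e-1, 1e-3, 1e-6):
    print(symmetric_quotient(f, 0.0, h))  # 0.0 for every h: f(h) and f(-h) cancel
```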

The Dirichlet function

The Dirichlet function, defined as

  f(x) = 1 if x is rational, 0 if x is irrational,

has a symmetric derivative at every rational x, but is not symmetrically differentiable at any irrational x; i.e. the symmetric derivative exists at rational numbers but not at irrational numbers.
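The two cases can be made concrete with exact arithmetic over numbers of the form q + r√2 with q, r rational (a miniature model built here for illustration, not from the source). At a rational point, x + h and x − h are rational for exactly the same offsets h, so the numerator f(x + h) − f(x − h) is always 0; at an irrational point, rational offsets h = x − qₙ with qₙ a rational approximation of x make x − h rational but x + h irrational, so the quotient −1/(2h) is unbounded as h → 0:

```python
from fractions import Fraction

class Quad:
    """A number q + r*sqrt(2) with q, r rational; rational exactly when r == 0."""
    def __init__(self, q, r=0):
        self.q, self.r = Fraction(q), Fraction(r)
    def __add__(self, other):
        return Quad(self.q + other.q, self.r + other.r)
    def __sub__(self, other):
        return Quad(self.q - other.q, self.r - other.r)
    def is_rational(self):
        return self.r == 0

def dirichlet(t):
    return 1 if t.is_rational() else 0

def sym_diff(x, h):
    # Numerator of the symmetric difference quotient: f(x+h) - f(x-h).
    return dirichlet(x + h) - dirichlet(x - h)

x_rat = Quad(Fraction(1, 3))          # a rational point
print(sym_diff(x_rat, Quad(Fraction(1, 1000))))  # 0: x+h and x-h are both rational
# (For irrational h, x+h and x-h are both irrational, so the difference is again 0.)

x_irr = Quad(0, 1)                    # sqrt(2), an irrational point
q_n = Quad(Fraction(141421, 100000))  # rational approximation of sqrt(2)
h = x_irr - q_n                       # small, but x - h = q_n is rational while
print(sym_diff(x_irr, h))             # -1: x + h is irrational; -1/(2h) blows up
```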

Quasi-mean-value theorem

The symmetric derivative does not obey the usual mean-value theorem (of Lagrange). As a counterexample, the symmetric derivative of f(x) = |x| has the image {−1, 0, 1}, but secants for f can have a wider range of slopes; for instance, on the interval [−1, 2], the mean-value theorem would mandate that there exist a point where the (symmetric) derivative takes the value (|2| − |−1|) / (2 − (−1)) = 1/3, which it never attains. [5]
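This failure is easy to verify directly (illustrative sketch; the closed form of the symmetric derivative of |x| is derived in the examples above):

```python
def sym_deriv_abs(x):
    # Symmetric derivative of |x|: -1 for x < 0, 0 at x = 0, +1 for x > 0.
    return -1 if x < 0 else (1 if x > 0 else 0)

a, b = -1.0, 2.0
secant = (abs(b) - abs(a)) / (b - a)
print(secant)  # ~0.3333: the slope 1/3 a mean-value theorem would require
image = {sym_deriv_abs(t) for t in (-0.5, 0.0, 1.0)}
print(image)   # {-1, 0, 1}: the value 1/3 is never attained
```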

A theorem somewhat analogous to Rolle's theorem but for the symmetric derivative was established in 1967 by C. E. Aull, who named it the quasi-Rolle theorem. If f is continuous on the closed interval [a, b] and symmetrically differentiable on the open interval (a, b), and f(a) = f(b) = 0, then there exist two points x, y in (a, b) such that f_s(x) ≥ 0 and f_s(y) ≤ 0. A lemma also established by Aull as a stepping stone to this theorem states that if f is continuous on the closed interval [a, b] and symmetrically differentiable on the open interval (a, b), and additionally f(b) > f(a), then there exists a point z in (a, b) where the symmetric derivative is non-negative, or with the notation used above, f_s(z) ≥ 0. Analogously, if f(b) < f(a), then there exists a point z in (a, b) where f_s(z) ≤ 0. [5]
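A numerical sketch of the quasi-Rolle conclusion for f(x) = |x| − 1 on [−1, 1], which satisfies the hypotheses but is not differentiable at 0 (the example function and witness points are chosen here for illustration, not taken from the source):

```python
def f(x):
    return abs(x) - 1  # continuous on [-1, 1], with f(-1) = f(1) = 0

def symmetric_quotient(g, x, h=1e-7):
    return (g(x + h) - g(x - h)) / (2 * h)

assert f(-1) == f(1) == 0
x, y = 0.5, -0.5  # witnesses of the kind the quasi-Rolle theorem promises
print(symmetric_quotient(f, x))  # ~1.0  (f_s(x) >= 0)
print(symmetric_quotient(f, y))  # ~-1.0 (f_s(y) <= 0)
```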

The quasi-mean-value theorem for a symmetrically differentiable function states that if f is continuous on the closed interval [a, b] and symmetrically differentiable on the open interval (a, b), then there exist x, y in (a, b) such that [5] [2]:7

  f_s(x) ≤ (f(b) − f(a)) / (b − a) ≤ f_s(y).

As an application, the quasi-mean-value theorem for f(x) = |x| on an interval containing 0 predicts that the slope of any secant of f is between −1 and 1.

If the symmetric derivative of f has the Darboux property, then the (form of the) regular mean-value theorem (of Lagrange) holds, i.e. there exists z in (a, b) such that [5]

  f_s(z) = (f(b) − f(a)) / (b − a).

As a consequence, if a function is continuous and its symmetric derivative is also continuous (thus has the Darboux property), then the function is differentiable in the usual sense. [5]

Generalizations

The notion generalizes to higher-order symmetric derivatives and also to n-dimensional Euclidean spaces.

The second symmetric derivative

The second symmetric derivative is defined as [6] [2]:1

  lim_{h→0} (f(x + h) − 2f(x) + f(x − h)) / h².

If the (usual) second derivative exists, then the second symmetric derivative exists and is equal to it. [6] The second symmetric derivative may exist, however, even when the (ordinary) second derivative does not. As an example, consider the sign function sgn(x), which is defined by

  sgn(x) = −1 if x < 0, 0 if x = 0, 1 if x > 0.

The sign function is not continuous at zero, and therefore the second derivative does not exist at x = 0. But the second symmetric derivative does exist at x = 0:

  lim_{h→0} (sgn(0 + h) − 2 sgn(0) + sgn(0 − h)) / h² = lim_{h→0} (sgn(h) + sgn(−h)) / h² = lim_{h→0} 0 / h² = 0.
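The vanishing limit can be checked directly, since sgn(h) and sgn(−h) cancel for every h ≠ 0 (illustrative sketch):

```python
def sgn(x):
    return -1 if x < 0 else (1 if x > 0 else 0)

def second_symmetric_quotient(f, x, h):
    # Second symmetric difference quotient: (f(x+h) - 2 f(x) + f(x-h)) / h^2.
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

for h in (1e-1, 1e-3, 1e-6):
    print(second_symmetric_quotient(sgn, 0.0, h))  # 0.0: sgn(h) + sgn(-h) = 0
```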

References

  1. Peter R. Mercer (2014). More Calculus of a Single Variable. Springer. p. 173. ISBN 978-1-4939-1926-0.
  2. Thomson, Brian S. (1994). Symmetric Properties of Real Functions. Marcel Dekker. ISBN 0-8247-9230-0.
  3. Peter D. Lax; Maria Shea Terrell (2013). Calculus With Applications. Springer. p. 213. ISBN 978-1-4614-7946-8.
  4. Shirley O. Hockett; David Bock (2005). Barron's How to Prepare for the AP Calculus. Barron's Educational Series. p. 53. ISBN 978-0-7641-2382-5.
  5. Sahoo, Prasanna; Riedel, Thomas (1998). Mean Value Theorems and Functional Equations. World Scientific. pp. 188–192. ISBN 978-981-02-3544-4.
  6. A. Zygmund (2002). Trigonometric Series. Cambridge University Press. pp. 22–23. ISBN 978-0-521-89053-3.