Symmetric derivative

In mathematics, the symmetric derivative is an operation generalizing the ordinary derivative.

It is defined as: [1] [2]

f_s(x) = \lim_{h \to 0} \frac{f(x+h) - f(x-h)}{2h}.

The expression under the limit is sometimes called the symmetric difference quotient. [3] [4] A function is said to be symmetrically differentiable at a point x if its symmetric derivative exists at that point.

If a function is differentiable (in the usual sense) at a point, then it is also symmetrically differentiable, but the converse is not true. A well-known counterexample is the absolute value function f(x) = |x|, which is not differentiable at x = 0 but is symmetrically differentiable there, with symmetric derivative 0. For differentiable functions, the symmetric difference quotient provides a better numerical approximation of the derivative than the usual one-sided difference quotient, [3] as the sketch below illustrates.
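As an illustrative sketch (not from the cited sources), the following Python snippet compares the one-sided and symmetric difference quotients for sin at x = 1, where the exact derivative is cos(1): the symmetric quotient's error shrinks roughly quadratically in h, versus linearly for the one-sided quotient. It also evaluates the symmetric quotient of |x| at 0.

```python
import math

def one_sided_quotient(f, x, h):
    # Ordinary (forward) difference quotient; error is O(h) for smooth f.
    return (f(x + h) - f(x)) / h

def symmetric_quotient(f, x, h):
    # Symmetric difference quotient; error is O(h^2) for smooth f.
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.cos(1.0)  # exact derivative of sin at x = 1
for h in (1e-1, 1e-2, 1e-3):
    fwd_err = abs(one_sided_quotient(math.sin, 1.0, h) - exact)
    sym_err = abs(symmetric_quotient(math.sin, 1.0, h) - exact)
    print(f"h={h:g}  forward error={fwd_err:.1e}  symmetric error={sym_err:.1e}")

# For |x| at 0 the symmetric quotient is 0 for every h, matching the
# symmetric derivative there, even though no ordinary derivative exists.
print(symmetric_quotient(abs, 0.0, 0.5))  # 0.0
```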

The symmetric derivative at a given point equals the arithmetic mean of the left and right derivatives at that point, if the latter two both exist. [1] [2] :6
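To see why (a one-line derivation, assuming both one-sided derivatives exist): the symmetric difference quotient is unchanged under replacing h with −h, so its two-sided limit may be computed as h → 0⁺, and splitting it gives

f_s(x) = \lim_{h \to 0^+} \left[ \frac{f(x+h) - f(x)}{2h} + \frac{f(x) - f(x-h)}{2h} \right] = \frac{f'_+(x) + f'_-(x)}{2}.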

Neither Rolle's theorem nor the mean-value theorem holds for the symmetric derivative; some similar but weaker statements have been proved.

Examples

The absolute value function

Graph of the absolute value function. Note the sharp turn at x = 0, leading to non-differentiability of the curve at x = 0. The function hence possesses no ordinary derivative at x = 0. The symmetric derivative, however, exists for the function at x = 0.

For the absolute value function f(x) = |x|, using the notation f_s for the symmetric derivative, we have at x = 0 that

f_s(0) = \lim_{h \to 0} \frac{|0+h| - |0-h|}{2h} = \lim_{h \to 0} \frac{|h| - |h|}{2h} = \lim_{h \to 0} \frac{0}{2h} = 0.

Hence the symmetric derivative of the absolute value function exists at x = 0 and is equal to zero, even though its ordinary derivative does not exist at that point (due to a "sharp" turn in the curve at x = 0).

Note that in this example both the left and right derivatives at 0 exist, but they are unequal (the left derivative is −1, while the right is +1); their average is 0, as expected.

The function x^{−2}

Graph of y = 1/x². Note the discontinuity at x = 0. The function hence possesses no ordinary derivative at x = 0. The symmetric derivative, however, exists for the function at x = 0.

For the function f(x) = x^{−2}, at x = 0 we have

f_s(0) = \lim_{h \to 0} \frac{(0+h)^{-2} - (0-h)^{-2}}{2h} = \lim_{h \to 0} \frac{h^{-2} - h^{-2}}{2h} = \lim_{h \to 0} 0 = 0.

Again, for this function the symmetric derivative exists at x = 0, while its ordinary derivative does not exist at x = 0, due to the discontinuity of the curve there. Furthermore, neither the left nor the right derivative is finite at 0; that is, 0 is an essential discontinuity.

The Dirichlet function

The Dirichlet function, defined as

f(x) = \begin{cases} 1, & \text{if } x \text{ is rational} \\ 0, & \text{if } x \text{ is irrational} \end{cases}

has a symmetric derivative at every x ∈ ℚ, but is not symmetrically differentiable at any x ∈ ℝ∖ℚ; i.e. the symmetric derivative exists at rational numbers but not at irrational numbers. Indeed, for rational x the points x + h and x − h are either both rational or both irrational, so the symmetric difference quotient vanishes identically; for irrational x, one can choose arbitrarily small h with x + h rational and x − h irrational, making the quotient unbounded.

Quasi-mean-value theorem

The symmetric derivative does not obey the usual mean-value theorem (of Lagrange). As a counterexample, the symmetric derivative of f(x) = |x| has the image {−1, 0, 1}, but secants for f can have a wider range of slopes; for instance, on the interval [−1, 2], the mean-value theorem would mandate that there exists a point where the (symmetric) derivative takes the value (|2| − |−1|)/(2 − (−1)) = 1/3. [5]

A theorem somewhat analogous to Rolle's theorem but for the symmetric derivative was established in 1967 by C. E. Aull, who named it the quasi-Rolle theorem. If f is continuous on the closed interval [a, b] and symmetrically differentiable on the open interval (a, b), and f(a) = f(b) = 0, then there exist two points x, y in (a, b) such that f_s(x) ≥ 0 and f_s(y) ≤ 0. A lemma also established by Aull as a stepping stone to this theorem states that if f is continuous on the closed interval [a, b] and symmetrically differentiable on the open interval (a, b), and additionally f(b) > f(a), then there exists a point z in (a, b) where the symmetric derivative is non-negative, i.e. f_s(z) ≥ 0. Analogously, if f(b) < f(a), then there exists a point z in (a, b) where f_s(z) ≤ 0. [5]

The quasi-mean-value theorem for a symmetrically differentiable function states that if f is continuous on the closed interval [a, b] and symmetrically differentiable on the open interval (a, b), then there exist x, y in (a, b) such that [5] [2]:7

f_s(x) \le \frac{f(b) - f(a)}{b - a} \le f_s(y).

As an application, the quasi-mean-value theorem for f(x) = |x| on an interval containing 0 predicts that the slope of any secant of f is between −1 and 1.
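Spelling this out (a short check, using that the symmetric derivative of f(x) = |x| only takes values in {−1, 0, 1}): the quasi-mean-value theorem supplies points x, y in (a, b) with

-1 \le f_s(x) \le \frac{f(b) - f(a)}{b - a} \le f_s(y) \le 1,

so every secant slope lies between −1 and 1.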

If the symmetric derivative of f has the Darboux property, then the (form of the) regular mean-value theorem (of Lagrange) holds, i.e. there exists z in (a, b) such that [5]

f_s(z) = \frac{f(b) - f(a)}{b - a}.

As a consequence, if a function is continuous and its symmetric derivative is also continuous (thus has the Darboux property), then the function is differentiable in the usual sense. [5]

Generalizations

The notion generalizes to higher-order symmetric derivatives and also to n-dimensional Euclidean spaces.

The second symmetric derivative

The second symmetric derivative is defined as [6] [2]:1

f_s''(x) = \lim_{h \to 0} \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}.
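When f happens to have an ordinary second derivative at x, a second-order Taylor expansion with Peano remainder (a standard calculation, included here for illustration) shows that this limit recovers it: expanding f(x ± h) = f(x) ± h f'(x) + \tfrac{h^2}{2} f''(x) + o(h^2), the first-order terms cancel and

\frac{f(x+h) - 2f(x) + f(x-h)}{h^2} = \frac{h^2 f''(x) + o(h^2)}{h^2} \longrightarrow f''(x) \quad (h \to 0).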

If the (usual) second derivative exists, then the second symmetric derivative exists and is equal to it. [6] The second symmetric derivative may exist, however, even when the (ordinary) second derivative does not. As an example, consider the sign function sgn(x), which is defined by

\operatorname{sgn}(x) = \begin{cases} -1, & \text{if } x < 0 \\ 0, & \text{if } x = 0 \\ 1, & \text{if } x > 0. \end{cases}

The sign function is not continuous at zero, and therefore the second derivative for x = 0 does not exist. But the second symmetric derivative exists for x = 0:

\lim_{h \to 0} \frac{\operatorname{sgn}(0+h) - 2\operatorname{sgn}(0) + \operatorname{sgn}(0-h)}{h^2} = \lim_{h \to 0} \frac{\operatorname{sgn}(h) + \operatorname{sgn}(-h)}{h^2} = \lim_{h \to 0} \frac{0}{h^2} = 0.
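As a quick numerical check (an illustrative sketch, not from the cited sources), the second symmetric difference quotient of sgn at 0 vanishes for every h, while for a smooth function it approximates the ordinary second derivative:

```python
import math

def second_symmetric_quotient(f, x, h):
    # Second symmetric difference quotient: (f(x+h) - 2 f(x) + f(x-h)) / h^2.
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def sgn(x):
    # Sign function: -1, 0, or 1.
    return (x > 0) - (x < 0)

print(second_symmetric_quotient(sgn, 0.0, 0.25))       # 0.0 for every h
print(second_symmetric_quotient(math.sin, 1.0, 1e-4))  # approx -sin(1) = -0.8415
```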

References

  1. Peter R. Mercer (2014). More Calculus of a Single Variable. Springer. p. 173. ISBN 978-1-4939-1926-0.
  2. Thomson, Brian S. (1994). Symmetric Properties of Real Functions. Marcel Dekker. ISBN 0-8247-9230-0.
  3. Peter D. Lax; Maria Shea Terrell (2013). Calculus With Applications. Springer. p. 213. ISBN 978-1-4614-7946-8.
  4. Shirley O. Hockett; David Bock (2005). Barron's How to Prepare for the AP Calculus. Barron's Educational Series. p. 53. ISBN 978-0-7641-2382-5.
  5. Sahoo, Prasanna; Riedel, Thomas (1998). Mean Value Theorems and Functional Equations. World Scientific. pp. 188–192. ISBN 978-981-02-3544-4.
  6. A. Zygmund (2002). Trigonometric Series. Cambridge University Press. pp. 22–23. ISBN 978-0-521-89053-3.