# Antiderivative


In calculus, an antiderivative, inverse derivative, primitive function, primitive integral or indefinite integral [Note 1] of a function f is a differentiable function F whose derivative is equal to the original function f. This can be stated symbolically as F' = f. [1] [2] The process of solving for antiderivatives is called antidifferentiation (or indefinite integration), and its opposite operation is called differentiation, which is the process of finding a derivative. Antiderivatives are often denoted by capital Roman letters such as F and G. [3]


Antiderivatives are related to definite integrals through the fundamental theorem of calculus: the definite integral of a function over an interval is equal to the difference between the values of an antiderivative evaluated at the endpoints of the interval.

In physics, antiderivatives arise in the context of rectilinear motion (e.g., in explaining the relationship between position, velocity and acceleration). [4] The discrete equivalent of the notion of antiderivative is antidifference.

## Examples

The function ${\displaystyle F(x)={\tfrac {x^{3}}{3}}}$ is an antiderivative of ${\displaystyle f(x)=x^{2}}$, since the derivative of ${\displaystyle {\tfrac {x^{3}}{3}}}$ is ${\displaystyle x^{2}}$. Because the derivative of a constant is zero, ${\displaystyle x^{2}}$ has infinitely many antiderivatives, such as ${\displaystyle {\tfrac {x^{3}}{3}},{\tfrac {x^{3}}{3}}+1,{\tfrac {x^{3}}{3}}-2}$, etc. Thus, all the antiderivatives of ${\displaystyle x^{2}}$ can be obtained by changing the value of c in ${\displaystyle F(x)={\tfrac {x^{3}}{3}}+c}$, where c is an arbitrary constant known as the constant of integration. [3] Essentially, the graphs of antiderivatives of a given function are vertical translations of each other, with each graph's vertical position determined by the value of c.

More generally, the power function ${\displaystyle f(x)=x^{n}}$ has antiderivative ${\displaystyle F(x)={\tfrac {x^{n+1}}{n+1}}+c}$ if ${\displaystyle n\neq -1}$, and ${\displaystyle F(x)=\ln |x|+c}$ if ${\displaystyle n=-1}$.
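As a quick numerical sanity check, the power rule can be verified by differentiating F numerically (a minimal sketch using only the Python standard library; the helper `numerical_derivative` is ours, not a standard function):

```python
# Numerically verify the power rule for antiderivatives:
# if F(x) = x**(n+1) / (n+1), then F'(x) should equal x**n (for n != -1).

def numerical_derivative(F, x, h=1e-6):
    """Central-difference approximation of F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)

n = 4
F = lambda x: x**(n + 1) / (n + 1)
f = lambda x: x**n

# The numerical derivative of F matches f at several sample points.
for x in [0.5, 1.0, 2.0]:
    assert abs(numerical_derivative(F, x) - f(x)) < 1e-6
```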

In physics, the integration of acceleration yields velocity plus a constant. The constant is the initial velocity term that would be lost upon taking the derivative of velocity, because the derivative of a constant term is zero. This same pattern applies to further integrations and derivatives of motion (position, velocity, acceleration, and so on). [4]

## Uses and properties

Antiderivatives can be used to compute definite integrals, using the fundamental theorem of calculus: if F is an antiderivative of the integrable function f over the interval ${\displaystyle [a,b]}$, then:

${\displaystyle \int _{a}^{b}f(x)\,dx=F(b)-F(a).}$
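For a concrete instance, the definite integral of f(x) = x^2 on [0, 1] can be computed from the antiderivative F(x) = x^3/3 and cross-checked against a Riemann sum (a stdlib-only Python sketch):

```python
# Evaluate the definite integral of f(x) = x**2 on [0, 1] two ways:
# exactly via an antiderivative F(x) = x**3 / 3 (fundamental theorem
# of calculus), and approximately via a midpoint Riemann sum.

def F(x):
    return x**3 / 3

def f(x):
    return x**2

a, b = 0.0, 1.0
ftc_value = F(b) - F(a)  # exact value: 1/3

n = 10_000
dx = (b - a) / n
riemann = sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

assert abs(ftc_value - 1 / 3) < 1e-12
assert abs(riemann - ftc_value) < 1e-6
```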

Because of this, each of the infinitely many antiderivatives of a given function f is sometimes called the "general integral" or "indefinite integral" of f, and is written using the integral symbol with no bounds: [3]

${\displaystyle \int f(x)\,dx.}$

If F is an antiderivative of f, and the function f is defined on some interval, then every other antiderivative G of f differs from F by a constant: there exists a number c such that ${\displaystyle G(x)=F(x)+c}$ for all x. c is called the constant of integration. If the domain of F is a disjoint union of two or more (open) intervals, then a different constant of integration may be chosen for each of the intervals. For instance

${\displaystyle F(x)={\begin{cases}-{\frac {1}{x}}+c_{1}\quad x<0\\-{\frac {1}{x}}+c_{2}\quad x>0\end{cases}}}$

is the most general antiderivative of ${\displaystyle f(x)=1/x^{2}}$ on its natural domain ${\displaystyle (-\infty ,0)\cup (0,\infty ).}$
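This can be checked numerically: differentiating either branch of such an F, with arbitrarily chosen constants on each interval, recovers 1/x^2 (a Python sketch; the constants c1 and c2 below are arbitrary illustrative values):

```python
# Differentiate the piecewise antiderivative of f(x) = 1/x**2 numerically.
# The constants c1 (for x < 0) and c2 (for x > 0) are arbitrary and can
# differ, since the domain splits into two disjoint open intervals.

def F(x, c1=5.0, c2=-3.0):
    return -1 / x + (c1 if x < 0 else c2)

def f(x):
    return 1 / x**2

def deriv(g, x, h=1e-6):
    """Central-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

# The derivative agrees with f on both intervals, regardless of c1, c2.
for x in [-2.0, -0.5, 0.5, 2.0]:
    assert abs(deriv(F, x) - f(x)) < 1e-4
```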

Every continuous function f has an antiderivative, and one antiderivative F is given by the definite integral of f with variable upper limit, for any fixed a in the domain of f:

${\displaystyle F(x)=\int _{a}^{x}f(t)\,dt.}$

Varying the lower boundary produces other antiderivatives (but not necessarily all possible antiderivatives). This is another formulation of the fundamental theorem of calculus.
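As an illustration, an antiderivative with variable upper limit, F(x) = ∫₀ˣ f(t) dt, can be built numerically for f = cos, whose antiderivative with F(0) = 0 is sin (a stdlib-only Python sketch using a midpoint rule; the tolerance values are ours):

```python
# Construct an antiderivative of a continuous f by integrating with a
# variable upper limit: F(x) = integral of f from 0 to x (midpoint rule).
# For f = cos, this F should approximate sin, the antiderivative with F(0) = 0.
import math

def f(t):
    return math.cos(t)

def F(x, n=10_000):
    """Midpoint-rule approximation of the integral of f from 0 to x."""
    dx = x / n
    return sum(f((i + 0.5) * dx) for i in range(n)) * dx

for x in [0.5, 1.0, 2.0]:
    assert abs(F(x) - math.sin(x)) < 1e-6

# Differentiating F numerically recovers f, as the fundamental theorem predicts.
h = 1e-4
assert abs((F(1.0 + h) - F(1.0 - h)) / (2 * h) - f(1.0)) < 1e-4
```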

There are many functions whose antiderivatives, even though they exist, cannot be expressed in terms of elementary functions (like polynomials, exponential functions, logarithms, trigonometric functions, inverse trigonometric functions and their combinations). Examples of these are

${\displaystyle \int e^{-x^{2}}\,dx,\qquad \int \sin x^{2}\,dx,\qquad \int {\frac {\sin x}{x}}\,dx,\qquad \int {\frac {1}{\ln x}}\,dx,\qquad \int x^{x}\,dx.}$

From left to right, the first four define (up to constant factors and constants of integration) the error function, the Fresnel integral S(x), the sine integral Si(x), and the logarithmic integral function li(x). For a more detailed discussion, see also Differential Galois theory.

## Techniques of integration

Finding antiderivatives of elementary functions is often considerably harder than finding their derivatives (indeed, there is no pre-defined method for computing indefinite integrals). [5] For some elementary functions, it is impossible to find an antiderivative in terms of other elementary functions. To learn more, see elementary functions and nonelementary integral.

There exist many properties and techniques for finding antiderivatives. These include, among others:

• The linearity of integration, which breaks sums and constant multiples into simpler integrals
• Integration by substitution, often combined with trigonometric identities or the natural logarithm
• Integration by parts, to integrate products of functions
• The method of partial fractions, to integrate rational functions
• The Risch algorithm, for deciding whether an elementary antiderivative exists and computing it

Computer algebra systems can be used to automate some or all of the work involved in the symbolic techniques above, which is particularly useful when the algebraic manipulations involved are very complex or lengthy. Integrals which have already been derived can be looked up in a table of integrals.

## Of non-continuous functions

Non-continuous functions can have antiderivatives. While there are still open questions in this area, it is known that:

• Some highly pathological functions with large sets of discontinuities may nevertheless have antiderivatives.
• In some cases, the antiderivatives of such pathological functions may be found by Riemann integration, while in other cases these functions are not Riemann integrable.

Assuming that the domains of the functions are open intervals:

• A necessary, but not sufficient, condition for a function f to have an antiderivative is that f have the intermediate value property. That is, if [a, b] is a subinterval of the domain of f and y is any real number between f(a) and f(b), then there exists a c between a and b such that f(c) = y. This is a consequence of Darboux's theorem.
• The set of discontinuities of f must be a meagre set. This set must also be an F-sigma set (since the set of discontinuities of any function must be of this type). Moreover, for any meagre F-sigma set, one can construct some function f having an antiderivative, which has the given set as its set of discontinuities.
• If f has an antiderivative, is bounded on closed finite subintervals of the domain and has a set of discontinuities of Lebesgue measure 0, then an antiderivative may be found by integration in the sense of Lebesgue. In fact, using more powerful integrals like the Henstock–Kurzweil integral, every function for which an antiderivative exists is integrable, and its general integral coincides with its antiderivative.
• If f has an antiderivative F on a closed interval ${\displaystyle [a,b]}$, then for any choice of partition ${\displaystyle a=x_{0}<x_{1}<\cdots <x_{n}=b,}$ if one chooses sample points ${\displaystyle x_{i}^{*}\in [x_{i-1},x_{i}]}$ as specified by the mean value theorem, then the corresponding Riemann sum telescopes to the value ${\displaystyle F(b)-F(a)}$:
${\displaystyle {\begin{aligned}\sum _{i=1}^{n}f(x_{i}^{*})(x_{i}-x_{i-1})&=\sum _{i=1}^{n}[F(x_{i})-F(x_{i-1})]\\&=F(x_{n})-F(x_{0})=F(b)-F(a).\end{aligned}}}$
However, if f is unbounded, or if f is bounded but the set of discontinuities of f has positive Lebesgue measure, a different choice of sample points ${\displaystyle x_{i}^{*}}$ may give a significantly different value for the Riemann sum, no matter how fine the partition. See Example 4 below.
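The telescoping argument can be made concrete for f(x) = x^2, F(x) = x^3/3: on a subinterval [u, v], the mean value theorem point c satisfies f(c) = (F(v) − F(u))/(v − u), which for this f works out by hand to c = sqrt((u^2 + uv + v^2)/3). A Python sketch (the irregular random partition is our choice):

```python
# With sample points chosen via the mean value theorem, the Riemann sum
# telescopes to F(b) - F(a) exactly (up to rounding), for ANY partition.
# Here f(x) = x**2, F(x) = x**3 / 3, and the MVT point on [u, v] solves
# c**2 = (F(v) - F(u)) / (v - u), i.e. c = sqrt((u*u + u*v + v*v) / 3).
import math
import random

def f(x):
    return x * x

def F(x):
    return x**3 / 3

a, b = 0.0, 2.0
random.seed(0)
# A deliberately irregular partition a = x_0 < x_1 < ... < x_n = b:
xs = [a] + sorted(random.uniform(a, b) for _ in range(9)) + [b]

riemann = 0.0
for u, v in zip(xs, xs[1:]):
    c = math.sqrt((u * u + u * v + v * v) / 3)  # MVT sample point in [u, v]
    riemann += f(c) * (v - u)

assert abs(riemann - (F(b) - F(a))) < 1e-9  # telescopes to 8/3
```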

### Some examples

1. The function
${\displaystyle f(x)=2x\sin \left({\frac {1}{x}}\right)-\cos \left({\frac {1}{x}}\right)}$

with ${\displaystyle f(0)=0}$ is not continuous at ${\displaystyle x=0}$ but has the antiderivative

${\displaystyle F(x)=x^{2}\sin \left({\frac {1}{x}}\right)}$
with ${\displaystyle F(0)=0}$. Since f is bounded on closed finite intervals and is only discontinuous at 0, the antiderivative F may be obtained by integration: ${\displaystyle F(x)=\int _{0}^{x}f(t)\,dt}$.
2. The function
${\displaystyle f(x)=2x\sin \left({\frac {1}{x^{2}}}\right)-{\frac {2}{x}}\cos \left({\frac {1}{x^{2}}}\right)}$

with ${\displaystyle f(0)=0}$ is not continuous at ${\displaystyle x=0}$ but has the antiderivative

${\displaystyle F(x)=x^{2}\sin \left({\frac {1}{x^{2}}}\right)}$
with ${\displaystyle F(0)=0}$. Unlike in Example 1, f is unbounded in any interval containing 0, so the Riemann integral is undefined.
3. If f(x) is the function in Example 1 and F is its antiderivative, and ${\displaystyle \{x_{n}\}_{n\geq 1}}$ is a dense countable subset of the open interval ${\displaystyle (-1,1),}$ then the function
${\displaystyle g(x)=\sum _{n=1}^{\infty }{\frac {f(x-x_{n})}{2^{n}}}}$

has an antiderivative

${\displaystyle G(x)=\sum _{n=1}^{\infty }{\frac {F(x-x_{n})}{2^{n}}}.}$
The set of discontinuities of g is precisely the set ${\displaystyle \{x_{n}\}_{n\geq 1}}$. Since g is bounded on closed finite intervals and the set of discontinuities has measure 0, the antiderivative G may be found by integration.
4. Let ${\displaystyle \{x_{n}\}_{n\geq 1}}$ be a dense countable subset of the open interval ${\displaystyle (-1,1).}$ Consider the everywhere continuous strictly increasing function
${\displaystyle F(x)=\sum _{n=1}^{\infty }{\frac {1}{2^{n}}}(x-x_{n})^{1/3}.}$

It can be shown that

${\displaystyle F'(x)=\sum _{n=1}^{\infty }{\frac {1}{3\cdot 2^{n}}}(x-x_{n})^{-2/3}}$

for all values x where the series converges, and that the graph of F(x) has vertical tangent lines at all other values of x. In particular the graph has vertical tangent lines at all points in the set ${\displaystyle \{x_{n}\}_{n\geq 1}}$.

Moreover ${\displaystyle F'(x)>0}$ for all x where the derivative is defined. It follows that the inverse function ${\displaystyle G=F^{-1}}$ is differentiable everywhere and that

${\displaystyle g(x)=G'(x)=0}$

for all x in the set ${\displaystyle \{F(x_{n})\}_{n\geq 1}}$ which is dense in the interval ${\displaystyle [F(-1),F(1)].}$ Thus g has an antiderivative G. On the other hand, it cannot be true that

${\displaystyle \int _{F(-1)}^{F(1)}g(x)\,dx=G(F(1))-G(F(-1))=2,}$
since for any partition of ${\displaystyle [F(-1),F(1)]}$, one can choose sample points for the Riemann sum from the set ${\displaystyle \{F(x_{n})\}_{n\geq 1}}$, giving a value of 0 for the sum. It follows that g has a set of discontinuities of positive Lebesgue measure. Figure 1 on the right shows an approximation to the graph of g(x) where ${\displaystyle \{x_{n}=\cos(n)\}_{n\geq 1}}$ and the series is truncated to 8 terms. Figure 2 shows the graph of an approximation to the antiderivative G(x), also truncated to 8 terms. On the other hand if the Riemann integral is replaced by the Lebesgue integral, then Fatou's lemma or the dominated convergence theorem shows that g does satisfy the fundamental theorem of calculus in that context.
5. In Examples 3 and 4, the sets of discontinuities of the functions g are dense only in a finite open interval ${\displaystyle (a,b).}$ However, these examples can be easily modified so as to have sets of discontinuities which are dense on the entire real line ${\displaystyle (-\infty ,\infty )}$. Let
${\displaystyle \lambda (x)={\frac {a+b}{2}}+{\frac {b-a}{\pi }}\tan ^{-1}x.}$
Then ${\displaystyle g(\lambda (x))\lambda '(x)}$ has a dense set of discontinuities on ${\displaystyle (-\infty ,\infty )}$ and has antiderivative ${\displaystyle G\circ \lambda .}$
6. Using a similar method as in Example 5, one can modify g in Example 4 so as to vanish at all rational numbers. If one uses a naive version of the Riemann integral defined as the limit of left-hand or right-hand Riemann sums over regular partitions, one will obtain that the integral of such a function g over an interval ${\displaystyle [a,b]}$ is 0 whenever a and b are both rational, instead of ${\displaystyle G(b)-G(a)}$. Thus the fundamental theorem of calculus will fail spectacularly.
7. A function which has an antiderivative may still fail to be Riemann integrable. The derivative of Volterra's function is an example.
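Example 1 above can also be checked numerically: integrating the bounded, discontinuous f(t) = 2t sin(1/t) − cos(1/t) from 0 to x approximates x^2 sin(1/x) (a rough Python sketch; the rapid oscillations near 0 are only partially resolved by the midpoint rule, so the tolerance is deliberately loose):

```python
# Example 1 numerically: f(t) = 2*t*sin(1/t) - cos(1/t), with f(0) = 0,
# is bounded but discontinuous at 0, yet F(x) = integral of f from 0 to x
# recovers x**2 * sin(1/x). Midpoint rule; the oscillations near 0 are
# only partially resolved, hence the loose tolerance.
import math

def f(t):
    if t == 0.0:
        return 0.0
    return 2 * t * math.sin(1 / t) - math.cos(1 / t)

def F_numeric(x, n=200_000):
    dx = x / n
    return sum(f((i + 0.5) * dx) for i in range(n)) * dx

x = 0.5
exact = x**2 * math.sin(1 / x)  # the antiderivative F(x) = x**2 * sin(1/x)
assert abs(F_numeric(x) - exact) < 1e-2
```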

## Notes

1. Antiderivatives are also called general integrals, and sometimes integrals. The latter term is generic, and refers not only to indefinite integrals (antiderivatives), but also to definite integrals. When the word integral is used without additional specification, the reader is supposed to deduce from the context whether it refers to a definite or indefinite integral. Some authors define the indefinite integral of a function as the set of its infinitely many possible antiderivatives. Others define it as an arbitrarily selected element of that set. This article adopts the latter approach. In English A-Level mathematics textbooks one can find the term complete primitive; L. Bostock and S. Chandler (1978), Pure Mathematics 1, state: "The solution of a differential equation including the arbitrary constant is called the general solution (or sometimes the complete primitive)."


## References

1. Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks/Cole. ISBN 0-495-01166-5.
2. Larson, Ron; Edwards, Bruce H. (2009). Calculus (9th ed.). Brooks/Cole. ISBN 0-547-16702-4.
3. "Compendium of Mathematical Symbols". Math Vault. 2020-03-01. Retrieved 2020-08-18.
4. "4.9: Antiderivatives". Mathematics LibreTexts. 2017-04-27. Retrieved 2020-08-18.
5. "Antiderivative and Indefinite Integration | Brilliant Math & Science Wiki". brilliant.org. Retrieved 2020-08-18.