
In mathematics, an **integral** assigns numbers to functions in a way that describes displacement, area, volume, and other concepts that arise by combining infinitesimal data. The process of finding integrals is called **integration**. Along with differentiation, integration is a fundamental, essential operation of calculus,^{ [lower-alpha 1] } and serves as a tool to solve problems in mathematics and physics involving the area of an arbitrary shape, the length of a curve, and the volume of a solid, among others.


The integrals discussed here are those termed **definite integrals**, which can be interpreted formally as the signed area of the region in the plane that is bounded by the graph of a given function between two points on the real line. Conventionally, areas above the horizontal axis of the plane are positive while areas below are negative. The term *integral* may also refer to the notion of the antiderivative, a function whose derivative is the given function; in this case, it is called an **indefinite integral**. The fundamental theorem of calculus relates definite integrals with differentiation and provides a method to compute the definite integral of a function when its antiderivative is known.

Although methods of calculating areas and volumes dated from ancient Greek mathematics, the principles of integration were formulated independently by Isaac Newton and Gottfried Wilhelm Leibniz in the late 17th century, who thought of the area under a curve as an infinite sum of rectangles of infinitesimal width. Bernhard Riemann later gave a rigorous definition of integrals, which is based on a limiting procedure that approximates the area of a curvilinear region by breaking the region into thin vertical slabs.

Integrals may be generalized depending on the type of the function as well as the domain over which the integration is performed. For example, a line integral is defined for functions of two or more variables, and the interval of integration is replaced by a curve connecting the two endpoints of the interval. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space.

The first documented systematic technique capable of determining integrals is the method of exhaustion of the ancient Greek astronomer Eudoxus (*ca.* 370 BC), which sought to find areas and volumes by breaking them up into an infinite number of divisions for which the area or volume was known.^{ [1] } This method was further developed and employed by Archimedes in the 3rd century BC and used to calculate the area of a circle, the surface area and volume of a sphere, area of an ellipse, the area under a parabola, the volume of a segment of a paraboloid of revolution, the volume of a segment of a hyperboloid of revolution, and the area of a spiral.^{ [2] }

A similar method was independently developed in China around the 3rd century AD by Liu Hui, who used it to find the area of the circle. This method was later used in the 5th century by Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere.^{ [3] }

In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040 AD) derived a formula for the sum of fourth powers.^{ [4] } He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid.^{ [5] }

The next significant advances in integral calculus did not begin to appear until the 17th century. At this time, the work of Cavalieri with his method of Indivisibles, and work by Fermat, began to lay the foundations of modern calculus,^{ [6] } with Cavalieri computing the integrals of *x*^{n} up to degree *n* = 9 in Cavalieri's quadrature formula.^{ [7] } Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus.^{ [8] } Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers and fractional powers.^{ [9] }

The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Leibniz and Newton.^{ [10] } The theorem demonstrates a connection between integration and differentiation. This connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Leibniz and Newton developed. Given the name infinitesimal calculus, it allowed for precise analysis of functions within continuous domains. This framework eventually became modern calculus, whose notation for integrals is drawn directly from the work of Leibniz.

While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of rigour. Bishop Berkeley memorably attacked the vanishing increments used by Newton, calling them "ghosts of departed quantities".^{ [11] } Calculus acquired a firmer footing with the development of limits. Integration was first rigorously formalized, using limits, by Riemann.^{ [12] } Although all bounded piecewise continuous functions are Riemann-integrable on a bounded interval, subsequently more general functions were considered—particularly in the context of Fourier analysis—to which Riemann's definition does not apply, and Lebesgue formulated a different definition of integral, founded in measure theory (a subfield of real analysis). Other definitions of integral, extending Riemann's and Lebesgue's approaches, were proposed. These approaches based on the real number system are the ones most common today, but alternative approaches exist, such as a definition of integral as the standard part of an infinite Riemann sum, based on the hyperreal number system.

The notation for the indefinite integral was introduced by Gottfried Wilhelm Leibniz in 1675.^{ [13] } He adapted the integral symbol, **∫**, from the letter *ſ* (long s), standing for *summa* (written as *ſumma*; Latin for "sum" or "total"). The modern notation for the definite integral, with limits above and below the integral sign, was first used by Joseph Fourier in *Mémoires* of the French Academy around 1819–20, reprinted in his book of 1822.^{ [14] }

Isaac Newton used a small vertical bar above a variable to indicate integration, or placed the variable inside a box. The vertical bar was easily confused with *ẋ* or *x*′, which are used to indicate differentiation, and the box notation was difficult for printers to reproduce, so these notations were not widely adopted.^{ [15] }

The term was first printed in Latin by Jacob Bernoulli in 1690: "Ergo et horum Integralia aequantur".^{ [16] }

In general, the integral of a real-valued function *f*(*x*) with respect to a real variable *x* on an interval [*a*, *b*] is written as

$$\int_a^b f(x)\,dx.$$

The integral sign ∫ represents integration. (In modern Arabic mathematical notation, a reflected integral symbol is used.^{ [17] }) The symbol *dx*, called the differential of the variable *x*, indicates that the variable of integration is *x*. The function *f*(*x*) is called the integrand, the points *a* and *b* are called the limits of integration, and the integral is said to be over the interval [*a*, *b*], called the interval of integration.^{ [18] } A function is said to be *integrable* if its integral over its domain is finite, and when limits are specified, the integral is called a definite integral.

When the limits are omitted, as in

$$\int f(x)\,dx,$$

the integral is called an indefinite integral, which represents a class of functions (the antiderivative) whose derivative is the integrand.^{ [19] } The fundamental theorem of calculus relates the evaluation of definite integrals to indefinite integrals. There are several extensions of the notation for integrals to encompass integration on unbounded domains and/or in multiple dimensions (see later sections of this article).

In advanced settings, it is not uncommon to leave out *dx* when only the simple Riemann integral is being used, or the exact type of integral is immaterial. For instance, one might write $\int (c_1 f + c_2 g) = c_1 \int f + c_2 \int g$ to express the linearity of the integral, a property shared by the Riemann integral and all generalizations thereof.^{ [20] }

Integrals appear in many practical situations. For instance, from the length, width and depth of a swimming pool which is rectangular with a flat bottom, one can determine the volume of water it can contain, the area of its surface, and the length of its edge. But if it is oval with a rounded bottom, integrals are required to find exact and rigorous values for these quantities. In each case, one may divide the sought quantity into infinitely many infinitesimal pieces, then sum the pieces to achieve an accurate approximation.

For example, to find the area of the region bounded by the graph of the function *f*(*x*) = √*x* between *x* = 0 and *x* = 1, one can divide the interval into five steps (0, 1/5, 2/5, ..., 1), then construct a rectangle over each piece using its right-end height (thus √(1/5), √(2/5), ..., √1) and sum their areas to get the approximation

$$\sqrt{\tfrac{1}{5}}\left(\tfrac{1}{5}-0\right)+\sqrt{\tfrac{2}{5}}\left(\tfrac{2}{5}-\tfrac{1}{5}\right)+\cdots+\sqrt{\tfrac{5}{5}}\left(1-\tfrac{4}{5}\right)\approx 0.7497,$$

which is larger than the exact value. Alternatively, when these subintervals are replaced by ones using the left-end height of each piece, the approximation is too low: with twelve such subintervals the approximated area is only 0.6203. However, as the number of pieces increases to infinity, the sums approach a limit that is the exact value of the area sought (in this case, 2/3). One writes

$$\int_0^1 \sqrt{x}\,dx = \frac{2}{3},$$

which means 2/3 is the result of a weighted sum of function values, √*x*, multiplied by the infinitesimal step widths, denoted by *dx*, on the interval [0, 1].
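The convergence described above can be checked numerically. The sketch below (the helper name `riemann_sum` is illustrative) forms the right- and left-endpoint sums for *f*(*x*) = √*x* and shows them approaching 2/3 as the number of subintervals grows:

```python
import math

def riemann_sum(f, a, b, n, rule="right"):
    """Approximate the integral of f over [a, b] with n equal subintervals,
    sampling each subinterval at its left or right endpoint."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 1) * dx if rule == "right" else a + i * dx
        total += f(x) * dx
    return total

coarse_over = riemann_sum(math.sqrt, 0, 1, 5, "right")    # ≈ 0.7497, an overestimate
coarse_under = riemann_sum(math.sqrt, 0, 1, 12, "left")   # ≈ 0.6203, an underestimate
fine = riemann_sum(math.sqrt, 0, 1, 100000)               # converges toward 2/3
```

With only 5 right-endpoint rectangles the sum overshoots, with 12 left-endpoint rectangles it undershoots at 0.6203 as stated above, and with 100,000 rectangles the sum is within a fraction of a percent of 2/3.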

There are many ways of formally defining an integral, not all of which are equivalent. The differences exist mostly to deal with differing special cases which may not be integrable under other definitions, but also occasionally for pedagogical reasons. The most commonly used definitions are Riemann integrals and Lebesgue integrals.

The Riemann integral is defined in terms of Riemann sums of functions with respect to *tagged partitions* of an interval.^{ [21] } A tagged partition of a closed interval [*a*, *b*] on the real line is a finite sequence

$$a = x_0 \le t_1 \le x_1 \le t_2 \le x_2 \le \cdots \le x_{n-1} \le t_n \le x_n = b.$$

This partitions the interval [*a*, *b*] into *n* sub-intervals [*x*_{i−1}, *x*_{i}] indexed by *i*, each of which is "tagged" with a distinguished point *t*_{i} ∈ [*x*_{i−1}, *x*_{i}]. A *Riemann sum* of a function *f* with respect to such a tagged partition is defined as

$$\sum_{i=1}^{n} f(t_i)\,\Delta_i;$$

thus each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval, and width the same as the width of sub-interval, Δ_{i} = *x*_{i}−*x*_{i−1}. The *mesh* of such a tagged partition is the width of the largest sub-interval formed by the partition, max_{i=1...n} Δ_{i}. The *Riemann integral* of a function f over the interval [*a*, *b*] is equal to S if:^{ [22] }

- For all *ε* > 0 there exists *δ* > 0 such that, for any tagged partition with mesh less than *δ*,
$$\left|S - \sum_{i=1}^{n} f(t_i)\,\Delta_i\right| < \varepsilon.$$

When the chosen tags give the maximum (respectively, minimum) value of each interval, the Riemann sum becomes an upper (respectively, lower) Darboux sum, suggesting the close connection between the Riemann integral and the Darboux integral.
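As a concrete sketch of this definition (the helper name and test function are illustrative), a short program can form the Riemann sum for an arbitrary tagged partition and confirm that a fine mesh forces the sum close to the integral, regardless of how the tags are chosen:

```python
import random

def tagged_riemann_sum(f, partition, tags):
    """Riemann sum for a tagged partition: partition = [x0, ..., xn],
    with tags[i] lying in the sub-interval [x_{i}, x_{i+1}]."""
    return sum(f(t) * (x1 - x0)
               for x0, x1, t in zip(partition, partition[1:], tags))

# Partition [0, 1] into n equal sub-intervals (mesh 1/n) and tag each
# sub-interval with a randomly chosen point inside it.
n = 10000
xs = [i / n for i in range(n + 1)]
tags = [random.uniform(x0, x1) for x0, x1 in zip(xs, xs[1:])]

# For f(x) = x², any tagging of this fine partition yields a sum
# within max|f'| * mesh = 2/n of the integral 1/3.
s = tagged_riemann_sum(lambda x: x * x, xs, tags)
```

Because the error of any tagged sum is bounded by the mesh times the total variation of *f*, the random choice of tags does not matter once the mesh is small.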

It is often of interest, both in theory and applications, to be able to pass to the limit under the integral. For instance, a sequence of functions can frequently be constructed that approximate, in a suitable sense, the solution to a problem. Then the integral of the solution function should be the limit of the integrals of the approximations. However, many functions that can be obtained as limits are not Riemann-integrable, and so such limit theorems do not hold with the Riemann integral. Therefore, it is of great importance to have a definition of the integral that allows a wider class of functions to be integrated.^{ [23] }

Such an integral is the Lebesgue integral, which exploits the following fact to enlarge the class of integrable functions: if the values of a function are rearranged over the domain, the integral of the function should remain the same. Thus Henri Lebesgue introduced the integral bearing his name, explaining this integral thus in a letter to Paul Montel:^{ [24] }

I have to pay a certain sum, which I have collected in my pocket. I take the bills and coins out of my pocket and give them to the creditor in the order I find them until I have reached the total sum. This is the Riemann integral. But I can proceed differently. After I have taken all the money out of my pocket I order the bills and coins according to identical values and then I pay the several heaps one after the other to the creditor. This is my integral.

As Folland puts it, "To compute the Riemann integral of f, one partitions the domain [*a*, *b*] into subintervals", while in the Lebesgue integral, "one is in effect partitioning the range of f ".^{ [25] } The definition of the Lebesgue integral thus begins with a measure, μ. In the simplest case, the Lebesgue measure *μ*(*A*) of an interval *A* = [*a*, *b*] is its width, *b* − *a*, so that the Lebesgue integral agrees with the (proper) Riemann integral when both exist.^{ [26] } In more complicated cases, the sets being measured can be highly fragmented, with no continuity and no resemblance to intervals.

Using the "partitioning the range of f " philosophy, the integral of a non-negative function *f* : **R** → **R** should be the sum over *t* of the areas of the thin horizontal strips between *y* = *t* and *y* = *t* + *dt*. This area is just *μ*{ *x* : *f*(*x*) > *t*} *dt*. Let *f*^{∗}(*t*) = *μ*{ *x* : *f*(*x*) > *t* }. The Lebesgue integral of *f* is then defined by

$$\int f = \int_0^\infty f^*(t)\,dt,$$

where the integral on the right is an ordinary improper Riemann integral (*f*^{∗} is a monotonically decreasing non-negative function, and therefore has a well-defined improper Riemann integral).^{ [27] } For a suitable class of functions (the measurable functions) this defines the Lebesgue integral.
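This "layer cake" formula can be illustrated numerically. The sketch below (function and helper names are illustrative) compares a direct Riemann-style integral of a triangle-shaped function with the integral of *f*^{∗}(*t*) = *μ*{ *x* : *f*(*x*) > *t* } over *t*, approximating the measure of each level set on a grid:

```python
def f(x):
    """A simple non-negative function: a triangle of height 1 on [-1, 1]."""
    return max(0.0, 1.0 - abs(x))

def measure_above(t, lo=-2.0, hi=2.0, n=4000):
    """Approximate the Lebesgue measure of {x : f(x) > t} on a grid
    (the support of f lies inside [lo, hi])."""
    dx = (hi - lo) / n
    return sum(dx for i in range(n) if f(lo + (i + 0.5) * dx) > t)

# Direct (Riemann-style) integral of f, for comparison; the triangle's area is 1.
direct = sum(f(-2.0 + (i + 0.5) * 0.001) for i in range(4000)) * 0.001

# Lebesgue-style "layer cake": integrate f*(t) = mu{x : f(x) > t} over t >= 0.
m = 200
dt = 1.0 / m          # f never exceeds 1, so t ranges over [0, 1]
layer_cake = sum(measure_above((j + 0.5) * dt) * dt for j in range(m))
```

Both computations approach the same value, the area 1 of the triangle, showing that slicing horizontally (by range) agrees with slicing vertically (by domain).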

A general measurable function *f* is Lebesgue-integrable if the sum of the absolute values of the areas of the regions between the graph of *f* and the *x*-axis is finite:^{ [28] }

$$\int_E |f|\,d\mu < \infty.$$

In that case, the integral is, as in the Riemannian case, the difference between the area above the *x*-axis and the area below the *x*-axis:^{ [29] }

$$\int_E f\,d\mu = \int_E f^+\,d\mu - \int_E f^-\,d\mu,$$

where

$$f^+(x) = \max\{f(x), 0\}, \qquad f^-(x) = \max\{-f(x), 0\}.$$

Although the Riemann and Lebesgue integrals are the most widely used definitions of the integral, a number of others exist, including:

- The Darboux integral, which is defined by Darboux sums (restricted Riemann sums) yet is equivalent to the Riemann integral: a function is Darboux-integrable if and only if it is Riemann-integrable. Darboux integrals have the advantage of being easier to define than Riemann integrals.
- The Riemann–Stieltjes integral, an extension of the Riemann integral which integrates with respect to a function as opposed to a variable.
- The Lebesgue–Stieltjes integral, further developed by Johann Radon, which generalizes both the Riemann–Stieltjes and Lebesgue integrals.
- The Daniell integral, which subsumes the Lebesgue integral and Lebesgue–Stieltjes integral without depending on measures.
- The Haar integral, used for integration on locally compact topological groups, introduced by Alfréd Haar in 1933.
- The Henstock–Kurzweil integral, variously defined by Arnaud Denjoy, Oskar Perron, and (most elegantly, as the gauge integral) Jaroslav Kurzweil, and developed by Ralph Henstock.
- The Itô integral and Stratonovich integral, which define integration with respect to semimartingales such as Brownian motion.
- The Young integral, which is a kind of Riemann–Stieltjes integral with respect to certain functions of unbounded variation.
- The rough path integral, which is defined for functions equipped with some additional "rough path" structure and generalizes stochastic integration against both semimartingales and processes such as the fractional Brownian motion.
- The Choquet integral, a subadditive or superadditive integral created by the French mathematician Gustave Choquet in 1953.

The collection of Riemann-integrable functions on a closed interval [*a*, *b*] forms a vector space under the operations of pointwise addition and multiplication by a scalar, and the operation of integration

$$f \mapsto \int_a^b f(x)\,dx$$

is a linear functional on this vector space. Thus, the collection of integrable functions is closed under taking linear combinations, and the integral of a linear combination is the linear combination of the integrals:^{ [30] }

$$\int_a^b (\alpha f + \beta g)(x)\,dx = \alpha \int_a^b f(x)\,dx + \beta \int_a^b g(x)\,dx.$$

Similarly, the set of real-valued Lebesgue-integrable functions on a given measure space *E* with measure *μ* is closed under taking linear combinations and hence forms a vector space, and the Lebesgue integral

$$f \mapsto \int_E f\,d\mu$$

is a linear functional on this vector space, so that:^{ [29] }

$$\int_E (\alpha f + \beta g)\,d\mu = \alpha \int_E f\,d\mu + \beta \int_E g\,d\mu.$$

More generally, consider the vector space of all measurable functions on a measure space (*E*, *μ*), taking values in a locally compact complete topological vector space *V* over a locally compact topological field *K*, *f* : *E* → *V*. Then one may define an abstract integration map assigning to each function *f* an element of *V* or the symbol *∞*,

$$f \mapsto \int_E f\,d\mu,$$

that is compatible with linear combinations.^{ [31] } In this situation, the linearity holds for the subspace of functions whose integral is an element of V (i.e. "finite"). The most important special cases arise when K is **R**, **C**, or a finite extension of the field **Q**_{p} of p-adic numbers, and V is a finite-dimensional vector space over K, and when *K* = **C** and V is a complex Hilbert space.

Linearity, together with some natural continuity properties and normalization for a certain class of "simple" functions, may be used to give an alternative definition of the integral. This is the approach of Daniell for the case of real-valued functions on a set X, generalized by Nicolas Bourbaki to functions with values in a locally compact topological vector space. See Hildebrandt 1953 for an axiomatic characterization of the integral.

A number of general inequalities hold for Riemann-integrable functions defined on a closed and bounded interval [*a*, *b*] and can be generalized to other notions of integral (Lebesgue and Daniell).

- *Upper and lower bounds.* An integrable function *f* on [*a*, *b*] is necessarily bounded on that interval. Thus there are real numbers *m* and *M* so that *m* ≤ *f*(*x*) ≤ *M* for all *x* in [*a*, *b*]. Since the lower and upper sums of *f* over [*a*, *b*] are therefore bounded by, respectively, *m*(*b* − *a*) and *M*(*b* − *a*), it follows that
$$m(b-a) \le \int_a^b f(x)\,dx \le M(b-a).$$
- *Inequalities between functions.*^{ [32] } If *f*(*x*) ≤ *g*(*x*) for each *x* in [*a*, *b*], then each of the upper and lower sums of *f* is bounded above by the upper and lower sums, respectively, of *g*. Thus
$$\int_a^b f(x)\,dx \le \int_a^b g(x)\,dx.$$
This is a generalization of the above inequalities, as *M*(*b* − *a*) is the integral of the constant function with value *M* over [*a*, *b*]. In addition, if the inequality between functions is strict, then the inequality between integrals is also strict. That is, if *f*(*x*) < *g*(*x*) for each *x* in [*a*, *b*], then
$$\int_a^b f(x)\,dx < \int_a^b g(x)\,dx.$$
- *Subintervals.* If [*c*, *d*] is a subinterval of [*a*, *b*] and *f*(*x*) is non-negative for all *x*, then
$$\int_c^d f(x)\,dx \le \int_a^b f(x)\,dx.$$
- *Products and absolute values of functions.* If *f* and *g* are two functions, then we may consider their pointwise products and powers, and absolute values:
$$(fg)(x) = f(x)g(x), \qquad f^2(x) = (f(x))^2, \qquad |f|(x) = |f(x)|.$$
If *f* is Riemann-integrable on [*a*, *b*] then the same is true for |*f*|, and
$$\left|\int_a^b f(x)\,dx\right| \le \int_a^b |f(x)|\,dx.$$
Moreover, if *f* and *g* are both Riemann-integrable then *fg* is also Riemann-integrable, and
$$\left(\int_a^b (fg)(x)\,dx\right)^2 \le \left(\int_a^b f(x)^2\,dx\right)\left(\int_a^b g(x)^2\,dx\right).$$
This inequality, known as the Cauchy–Schwarz inequality, plays a prominent role in Hilbert space theory, where the left hand side is interpreted as the inner product of two square-integrable functions *f* and *g* on the interval [*a*, *b*].
- *Hölder's inequality*.^{ [33] } Suppose that *p* and *q* are two real numbers, 1 ≤ *p*, *q* ≤ ∞ with 1/*p* + 1/*q* = 1, and *f* and *g* are two Riemann-integrable functions. Then the functions |*f*|^{p} and |*g*|^{q} are also integrable and the following Hölder's inequality holds:
$$\left|\int f(x)g(x)\,dx\right| \le \left(\int |f(x)|^p\,dx\right)^{1/p}\left(\int |g(x)|^q\,dx\right)^{1/q}.$$
For *p* = *q* = 2, Hölder's inequality becomes the Cauchy–Schwarz inequality.
- *Minkowski inequality*.^{ [33] } Suppose that *p* ≥ 1 is a real number and *f* and *g* are Riemann-integrable functions. Then |*f*|^{p}, |*g*|^{p} and |*f* + *g*|^{p} are also Riemann-integrable and the following Minkowski inequality holds:
$$\left(\int |f(x)+g(x)|^p\,dx\right)^{1/p} \le \left(\int |f(x)|^p\,dx\right)^{1/p} + \left(\int |g(x)|^p\,dx\right)^{1/p}.$$
An analogue of this inequality for the Lebesgue integral is used in the construction of L^{p} spaces.
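Inequalities like these are easy to observe numerically. The sketch below (the `integral` helper is illustrative, a simple midpoint rule) checks the absolute-value inequality and the Cauchy–Schwarz inequality for *f* = sin and *g* = exp on [0, 1]:

```python
import math

def integral(f, a, b, n=20000):
    """Midpoint-rule approximation of the Riemann integral of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

a, b = 0.0, 1.0
f, g = math.sin, math.exp

# |∫ f| ≤ ∫ |f|
lhs_abs = abs(integral(f, a, b))
rhs_abs = integral(lambda x: abs(f(x)), a, b)

# Cauchy–Schwarz: (∫ fg)² ≤ (∫ f²)(∫ g²)
cs_lhs = integral(lambda x: f(x) * g(x), a, b) ** 2
cs_rhs = integral(lambda x: f(x) ** 2, a, b) * integral(lambda x: g(x) ** 2, a, b)
```

Since sin and exp are not proportional on [0, 1], the Cauchy–Schwarz inequality holds strictly here.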

In this section, *f* is a real-valued Riemann-integrable function. The integral

$$\int_a^b f(x)\,dx$$

over an interval [*a*, *b*] is defined if *a* < *b*. This means that the upper and lower sums of the function *f* are evaluated on a partition *a* = *x*_{0} ≤ *x*_{1} ≤ . . . ≤ *x*_{n} = *b* whose values *x*_{i} are increasing. Geometrically, this signifies that integration takes place "left to right", evaluating *f* within intervals [*x*_{ i} , *x*_{ i +1}] where an interval with a higher index lies to the right of one with a lower index. The values *a* and *b*, the end-points of the interval, are called the limits of integration of *f*. Integrals can also be defined if *a* > *b*:^{ [18] }

$$\int_a^b f(x)\,dx = -\int_b^a f(x)\,dx.$$

With *a* = *b*, this implies:

$$\int_a^a f(x)\,dx = 0.$$

The first convention is necessary in consideration of taking integrals over subintervals of [*a*, *b*]; the second says that an integral taken over a degenerate interval, or a point, should be zero. One reason for the first convention is that the integrability of *f* on an interval [*a*, *b*] implies that *f* is integrable on any subinterval [*c*, *d*], but in particular integrals have the property that if *c* is any element of [*a*, *b*], then:^{ [30] }

$$\int_a^c f(x)\,dx + \int_c^b f(x)\,dx = \int_a^b f(x)\,dx.$$

With the first convention, the resulting relation

$$\int_a^c f(x)\,dx = \int_a^b f(x)\,dx - \int_c^b f(x)\,dx$$

is then well-defined for any cyclic permutation of a, b, and c.
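These conventions fall out naturally of a midpoint-rule approximation, because the step *dx* is negative when the limits are reversed. The sketch below (helper names are illustrative) checks the sign-reversal, degenerate-interval, and additivity conventions for *f*(*x*) = *x*²:

```python
def integral(f, a, b, n=10000):
    """Midpoint rule; dx is negative when a > b, which yields the
    orientation convention ∫_a^b = -∫_b^a automatically."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

square = lambda x: x ** 2
forward = integral(square, 0.0, 3.0)      # ≈ 9, since x³/3 at 3 is 9
reversed_ = integral(square, 3.0, 0.0)    # ≈ -9: limits reversed, sign flips
point = integral(square, 1.0, 1.0)        # 0: a degenerate interval
additive = integral(square, 0.0, 1.0) + integral(square, 1.0, 3.0)
```

The additivity check mirrors the subinterval property quoted above: the integral over [0, 1] plus the integral over [1, 3] equals the integral over [0, 3].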

The *fundamental theorem of calculus* is the statement that differentiation and integration are inverse operations: if a continuous function is first integrated and then differentiated, the original function is retrieved.^{ [34] } An important consequence, sometimes called the *second fundamental theorem of calculus*, allows one to compute integrals by using an antiderivative of the function to be integrated.^{ [35] }

Let *f* be a continuous real-valued function defined on a closed interval [*a*, *b*]. Let *F* be the function defined, for all *x* in [*a*, *b*], by

$$F(x) = \int_a^x f(t)\,dt.$$

Then, *F* is continuous on [*a*, *b*], differentiable on the open interval (*a*, *b*), and

$$F'(x) = f(x)$$

for all x in (*a*, *b*).

Let *f* be a real-valued function defined on a closed interval [*a*, *b*] that admits an antiderivative *F* on [*a*, *b*]. That is, *f* and *F* are functions such that for all *x* in [*a*, *b*],

$$F'(x) = f(x).$$

If *f* is integrable on [*a*, *b*] then

$$\int_a^b f(x)\,dx = F(b) - F(a).$$
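The second theorem can be illustrated numerically: a Riemann-style approximation of the integral of cos over [0, π/2] should match the antiderivative evaluation sin(π/2) − sin(0) = 1. A minimal sketch (the `integral` helper is illustrative):

```python
import math

def integral(f, a, b, n=100000):
    """Midpoint-rule approximation of ∫_a^b f(x) dx."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# f = cos has antiderivative F = sin, so by the second fundamental theorem
# ∫_0^{π/2} cos x dx = sin(π/2) − sin(0) = 1.
a, b = 0.0, math.pi / 2
numeric = integral(math.cos, a, b)
via_antiderivative = math.sin(b) - math.sin(a)
```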

A "proper" Riemann integral assumes the integrand is defined and finite on a closed and bounded interval, bracketed by the limits of integration. An improper integral occurs when one or more of these conditions is not satisfied. In some cases such integrals may be defined by considering the limit of a sequence of proper Riemann integrals on progressively larger intervals.

If the interval is unbounded, for instance at its upper end, then the improper integral is the limit as that endpoint goes to infinity:^{ [36] }

$$\int_a^\infty f(x)\,dx = \lim_{b\to\infty} \int_a^b f(x)\,dx.$$

If the integrand is only defined or finite on a half-open interval, for instance (*a*, *b*], then again a limit may provide a finite result:^{ [37] }

$$\int_a^b f(x)\,dx = \lim_{\varepsilon\to 0^+} \int_{a+\varepsilon}^b f(x)\,dx.$$

That is, the improper integral is the limit of proper integrals as one endpoint of the interval of integration approaches either a specified real number, or ∞, or −∞. In more complicated cases, limits are required at both endpoints, or at interior points.
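The limiting process can be sketched numerically for a standard example, ∫₁^∞ *x*^{−2} *dx* = 1: proper integrals over growing intervals [1, *b*] approach the improper integral's value (the `integral` helper is illustrative):

```python
def integral(f, a, b, n=200000):
    """Midpoint-rule approximation of a proper Riemann integral."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# ∫_1^b x^{-2} dx = 1 - 1/b, so the proper integrals increase toward 1
# as the upper endpoint b grows without bound.
vals = [integral(lambda x: x ** -2, 1.0, b) for b in (10.0, 100.0, 1000.0)]
```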

Just as the definite integral of a positive function of one variable represents the area of the region between the graph of the function and the *x*-axis, the *double integral* of a positive function of two variables represents the volume of the region between the surface defined by the function and the plane that contains its domain.^{ [38] } For example, a function in two dimensions depends on two real variables, *x* and *y*, and the integral of a function *f* over the rectangle *R* given as the Cartesian product of two intervals $R = [a,b] \times [c,d]$ can be written

$$\int_R f(x,y)\,dA,$$

where the differential *dA* indicates that integration is taken with respect to area. This double integral can be defined using Riemann sums, and represents the (signed) volume under the graph of *z* = *f*(*x*, *y*) over the domain *R*.^{ [39] } Under suitable conditions (e.g., if *f* is continuous), Fubini's theorem states that this integral can be expressed as an equivalent iterated integral:^{ [40] }

$$\int_R f(x,y)\,dA = \int_a^b\!\left(\int_c^d f(x,y)\,dy\right)dx.$$

This reduces the problem of computing a double integral to computing one-dimensional integrals. Because of this, another notation for the integral over *R* uses a double integral sign:^{ [39] }

$$\iint_R f(x,y)\,dA.$$
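The reduction to one-dimensional integrals can be sketched directly: nest a one-dimensional rule inside itself, integrating first in *y* and then in *x* (helper names and the test function are illustrative):

```python
def integral(f, a, b, n=500):
    """Midpoint-rule approximation of a one-dimensional integral."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x, y: x * y

# Iterated integral over R = [0, 1] x [0, 2]: inner integral in y, outer in x.
double = integral(lambda x: integral(lambda y: f(x, y), 0.0, 2.0), 0.0, 1.0)

# Since f separates as x * y, Fubini's theorem gives the product form
# (∫_0^1 x dx)(∫_0^2 y dy) = (1/2)(2) = 1 for comparison.
```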

Integration over more general domains is possible. The integral of a function *f*, with respect to volume, over an *n*-dimensional region *D* of **R**^{n} is denoted by symbols such as:

$$\int_D f(\mathbf{x})\,d^n\mathbf{x} = \int_D f\,dV.$$

The concept of an integral can be extended to more general domains of integration, such as curved lines and surfaces inside higher-dimensional spaces. Such integrals are known as line integrals and surface integrals respectively. These have important applications in physics, as when dealing with vector fields.

A *line integral* (sometimes called a *path integral*) is an integral where the function to be integrated is evaluated along a curve.^{ [41] } Various different line integrals are in use. In the case of a closed curve it is also called a *contour integral*.

The function to be integrated may be a scalar field or a vector field. The value of the line integral is the sum of values of the field at all points on the curve, weighted by some scalar function on the curve (commonly arc length or, for a vector field, the scalar product of the vector field with a differential vector in the curve).^{ [42] } This weighting distinguishes the line integral from simpler integrals defined on intervals. Many simple formulas in physics have natural continuous analogs in terms of line integrals; for example, the fact that work is equal to force, **F**, multiplied by displacement, **s**, may be expressed (in terms of vector quantities) as:^{ [43] }

$$W = \mathbf{F} \cdot \mathbf{s}.$$

For an object moving along a path *C* in a vector field **F** such as an electric field or gravitational field, the total work done by the field on the object is obtained by summing up the differential work done in moving from **s** to **s** + *d***s**. This gives the line integral^{ [44] }

$$W = \int_C \mathbf{F} \cdot d\mathbf{s}.$$
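A work integral of this kind can be sketched by parameterizing the path and summing **F** · *d***s** along it. In the example below (helper names, field, and path are illustrative), the field **F**(*x*, *y*) = (*y*, *x*) is conservative with potential *xy*, so the work from (0, 0) to (1, 1) should be 1 regardless of the path taken:

```python
def work(F, r, dr, t0, t1, n=100000):
    """W = ∫_C F · ds along the path r(t), t in [t0, t1], where dr = r'(t)."""
    dt = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * dt
        Fx, Fy = F(*r(t))
        rx, ry = dr(t)
        total += (Fx * rx + Fy * ry) * dt   # F(r(t)) · r'(t) dt
    return total

F = lambda x, y: (y, x)        # conservative field: F = ∇(xy)
r = lambda t: (t, t * t)       # parabolic path from (0, 0) to (1, 1)
dr = lambda t: (1.0, 2.0 * t)  # derivative of the parameterization
W = work(F, r, dr, 0.0, 1.0)   # should equal xy at (1,1) minus xy at (0,0) = 1
```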

A *surface integral* generalizes double integrals to integration over a surface (which may be a curved set in space); it can be thought of as the double integral analog of the line integral. The function to be integrated may be a scalar field or a vector field. The value of the surface integral is the sum of the field at all points on the surface. This can be achieved by splitting the surface into surface elements, which provide the partitioning for Riemann sums.^{ [45] }

For an example of applications of surface integrals, consider a vector field **v** on a surface *S*; that is, for each point *x* in *S*, **v**(*x*) is a vector. Imagine that a fluid flows through *S*, such that **v**(*x*) determines the velocity of the fluid at *x*. The flux is defined as the quantity of fluid flowing through *S* in unit amount of time. To find the flux, one needs to take the dot product of **v** with the unit surface normal **n** to *S* at each point, which gives a scalar field that is integrated over the surface:^{ [46] }

$$\int_S \mathbf{v} \cdot \mathbf{n}\,dS.$$

The fluid flux in this example may be from a physical fluid such as water or air, or from electrical or magnetic flux. Thus surface integrals have applications in physics, particularly with the classical theory of electromagnetism.
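A flux integral can be sketched by splitting the surface into elements, as described above. The example below (function names are illustrative) computes the flux of **v**(*x*, *y*, *z*) = (*x*, *y*, *z*) through the unit sphere using spherical coordinates, where the surface element is *dS* = sin *θ* *dθ* *dφ* and the outward unit normal at (*x*, *y*, *z*) is (*x*, *y*, *z*) itself; since **v** · **n** = 1 everywhere, the flux equals the sphere's area 4π:

```python
import math

def flux_through_unit_sphere(v, n_theta=400, n_phi=400):
    """Approximate ∫_S v · n dS over the unit sphere by summing surface
    elements dS = sin(θ) dθ dφ; the outward unit normal is (x, y, z)."""
    dth = math.pi / n_theta
    dph = 2.0 * math.pi / n_phi
    total = 0.0
    for i in range(n_theta):
        th = (i + 0.5) * dth
        for j in range(n_phi):
            ph = (j + 0.5) * dph
            x = math.sin(th) * math.cos(ph)
            y = math.sin(th) * math.sin(ph)
            z = math.cos(th)
            vx, vy, vz = v(x, y, z)
            total += (vx * x + vy * y + vz * z) * math.sin(th) * dth * dph
    return total

total_flux = flux_through_unit_sphere(lambda x, y, z: (x, y, z))   # ≈ 4π
```

This agrees with the divergence theorem mentioned later: div **v** = 3 over the unit ball of volume 4π/3 also gives 4π.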

In complex analysis, the integrand is a complex-valued function of a complex variable *z* instead of a real function of a real variable *x*. When a complex function is integrated along a curve *γ* in the complex plane, the integral is denoted as follows:

$$\int_\gamma f(z)\,dz.$$

This is known as a contour integral.

A differential form is a mathematical concept in the fields of multivariable calculus, differential topology, and tensors. Differential forms are organized by degree. For example, a one-form is a weighted sum of the differentials of the coordinates, such as:

$$E\,dx + F\,dy + G\,dz,$$

where *E*, *F*, *G* are functions in three dimensions. A differential one-form can be integrated over an oriented path, and the resulting integral is just another way of writing a line integral. Here the basic differentials *dx*, *dy*, *dz* measure infinitesimal oriented lengths parallel to the three coordinate axes.

A differential two-form is a sum of the form

$$G\,dx \wedge dy + E\,dy \wedge dz + F\,dz \wedge dx.$$

Here the basic two-forms $dx \wedge dy$, $dy \wedge dz$, $dz \wedge dx$ measure oriented areas parallel to the coordinate two-planes. The symbol ∧ denotes the wedge product, which is similar to the cross product in the sense that the wedge product of two forms representing oriented lengths represents an oriented area. A two-form can be integrated over an oriented surface, and the resulting integral is equivalent to the surface integral giving the flux of $E\mathbf{i} + F\mathbf{j} + G\mathbf{k}$.

Unlike the cross product and three-dimensional vector calculus, the wedge product and the calculus of differential forms make sense in arbitrary dimension and on more general manifolds (curves, surfaces, and their higher-dimensional analogs). The exterior derivative plays the role of the gradient and curl of vector calculus, and Stokes' theorem simultaneously generalizes the three theorems of vector calculus: the divergence theorem, Green's theorem, and the Kelvin–Stokes theorem.

The discrete equivalent of integration is summation. Summations and integrals can be put on the same foundations using the theory of Lebesgue integrals or time scale calculus.

Integrals are used extensively in many areas. For example, in probability theory, integrals are used to determine the probability of some random variable falling within a certain range.^{ [47] } Moreover, the integral of an entire probability density function must equal 1, which provides a test of whether a function with no negative values could be a density function or not.^{ [48] }

Integrals can be used for computing the area of a two-dimensional region that has a curved boundary, as well as computing the volume of a three-dimensional object that has a curved boundary. The area of a two-dimensional region can be calculated using the aforementioned definite integral.^{ [49] } The volume of a three-dimensional object such as a disc or washer can be computed by disc integration using the equation for the volume of a cylinder, π*r*^{2}*h*, where *r* is the radius. In the case of a simple disc created by rotating a curve about the *x*-axis, the radius is given by *f*(*x*), and its height is the differential *dx*. Using an integral with bounds *a* and *b*, the volume of the disc is equal to:^{ [50] }

$$V = \pi \int_a^b f(x)^2\,dx.$$
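Disc integration can be sketched numerically (the helper name is illustrative). Rotating *f*(*x*) = √(1 − *x*²) over [−1, 1] about the *x*-axis sweeps out the unit ball, so the computed volume should be 4π/3:

```python
import math

def disc_volume(f, a, b, n=100000):
    """Volume of the solid obtained by rotating y = f(x), a ≤ x ≤ b,
    about the x-axis, via disc integration: V = π ∫_a^b f(x)² dx."""
    dx = (b - a) / n
    return math.pi * sum(f(a + (i + 0.5) * dx) ** 2 for i in range(n)) * dx

# Rotating the semicircle f(x) = √(1 − x²) over [−1, 1] gives the unit ball.
V = disc_volume(lambda x: math.sqrt(1.0 - x * x), -1.0, 1.0)   # ≈ 4π/3
```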

Integrals are also used in physics, in areas like kinematics to find quantities like displacement, time, and velocity. For example, in rectilinear motion, the displacement of an object over the time interval [*a*, *b*] is given by:

$$x(b) - x(a) = \int_a^b v(t)\,dt,$$

where *v*(*t*) is the velocity expressed as a function of time.^{ [51] } The work done by a force *F*(*x*) (given as a function of position) from an initial position *A* to a final position *B* is:^{ [52] }

$$W_{A \to B} = \int_A^B F(x)\,dx.$$
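Both kinds of physical integral can be sketched with a simple numerical rule (helper name, velocity, and force functions are illustrative):

```python
def integral(f, a, b, n=100000):
    """Midpoint-rule approximation of ∫_a^b f(x) dx."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# Displacement from velocity: with v(t) = 3t², the displacement over [0, 2]
# is ∫_0^2 3t² dt = t³ evaluated from 0 to 2, i.e. 8.
displacement = integral(lambda t: 3.0 * t * t, 0.0, 2.0)

# Work from a position-dependent force: for a Hooke's-law spring F(x) = kx
# with k = 2, the work from x = 0 to x = 1 is ∫_0^1 2x dx = 1.
work_done = integral(lambda x: 2.0 * x, 0.0, 1.0)
```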

Integrals are also used in thermodynamics, where thermodynamic integration is used to calculate the difference in free energy between two given states.

The most basic technique for computing definite integrals of one real variable is based on the fundamental theorem of calculus. Let *f*(*x*) be the function of *x* to be integrated over a given interval [*a*, *b*]. Then, find an antiderivative of *f*; that is, a function *F* such that *F*′ = *f* on the interval. Provided the integrand and integral have no singularities on the path of integration, the fundamental theorem of calculus gives

$$\int_a^b f(x) \, dx = F(b) - F(a).$$
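A minimal sketch of this procedure in code (the integrand sin x and its antiderivative −cos x are an illustrative choice):

```python
import math

# Fundamental theorem of calculus: if F' = f on [a, b], then the definite
# integral of f over [a, b] equals F(b) - F(a).
f = math.sin
F = lambda x: -math.cos(x)

a, b = 0.0, math.pi
exact = F(b) - F(a)
print(exact)  # 2.0

# Cross-check against a crude left Riemann sum:
n = 100_000
h = (b - a) / n
riemann = sum(f(a + i * h) for i in range(n)) * h
print(abs(riemann - exact))  # small
```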

Sometimes it is necessary to use one of the many techniques that have been developed to evaluate integrals. Most of these techniques rewrite one integral as a different one which is hopefully more tractable. Techniques include integration by substitution, integration by parts, integration by trigonometric substitution, and integration by partial fractions.

Alternative methods exist to compute more complex integrals. Many nonelementary integrals can be expanded in a Taylor series and integrated term by term. Occasionally, the resulting infinite series can be summed analytically. The method of convolution using Meijer G-functions can also be used, assuming that the integrand can be written as a product of Meijer G-functions. There are also many less common ways of calculating definite integrals; for instance, Parseval's identity can be used to transform an integral over a rectangular region into an infinite sum. Occasionally, an integral can be evaluated by a trick; for an example of this, see Gaussian integral.
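A small worked instance of term-by-term integration, using the Gaussian integrand e^{−x²}, whose antiderivative is not elementary; the 20-term cutoff is an arbitrary choice for this sketch:

```python
import math

# Term-by-term integration of a Taylor series: e^{-x^2} = sum of (-1)^n x^{2n} / n!,
# so the integral from 0 to 1 is the sum of (-1)^n / (n! * (2n + 1)).
series = sum((-1) ** n / (math.factorial(n) * (2 * n + 1)) for n in range(20))
print(series)  # converges rapidly

# Compare with the closed form in terms of the error function:
reference = math.sqrt(math.pi) / 2 * math.erf(1.0)
print(reference)
```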

Computations of volumes of solids of revolution can usually be done with disk integration or shell integration.

Specific results which have been worked out by various techniques are collected in the list of integrals.

Many problems in mathematics, physics, and engineering involve integration where an explicit formula for the integral is desired. Extensive tables of integrals have been compiled and published over the years for this purpose. With the spread of computers, many professionals, educators, and students have turned to computer algebra systems that are specifically designed to perform difficult or tedious tasks, including integration. Symbolic integration has been one of the motivations for the development of the first such systems, like Macsyma and Maple.

A major mathematical difficulty in symbolic integration is that in many cases, a relatively simple function does not have an integral that can be expressed in closed form involving only elementary functions, which include rational and exponential functions, logarithms, trigonometric functions and inverse trigonometric functions, and the operations of multiplication and composition. The Risch algorithm provides a general criterion to determine whether the antiderivative of an elementary function is elementary, and to compute the antiderivative if it is. However, functions with closed-form antiderivatives are the exception, and consequently, computer algebra systems have no hope of finding an antiderivative for a randomly constructed elementary function. On the positive side, if the 'building blocks' for antiderivatives are fixed in advance, it may still be possible to decide whether the antiderivative of a given function can be expressed using these blocks and the operations of multiplication and composition, and to find the symbolic answer whenever it exists. The Risch algorithm, implemented in Mathematica, Maple and other computer algebra systems, does just that for functions and antiderivatives built from rational functions, radicals, logarithms, and exponential functions.

Some special integrands occur often enough to warrant special study. In particular, it may be useful to have, in the set of antiderivatives, the special functions (like the Legendre functions, the hypergeometric function, the gamma function, the incomplete gamma function and so on). Extending the Risch algorithm to include such functions is possible but challenging and has been an active research subject.

More recently a new approach has emerged, using *D*-finite functions, which are the solutions of linear differential equations with polynomial coefficients. Most of the elementary and special functions are *D*-finite, and the integral of a *D*-finite function is also a *D*-finite function. This provides an algorithm to express the antiderivative of a *D*-finite function as the solution of a differential equation. This theory also allows one to compute the definite integral of a *D*-finite function as the sum of a series given by the first coefficients, and provides an algorithm to compute any coefficient.

Definite integrals may be approximated using several methods of numerical integration. The rectangle method divides the region under the function into a series of rectangles corresponding to function values, multiplying each value by the step width and summing. A better approach, the trapezoidal rule, replaces the rectangles used in a Riemann sum with trapezoids: it weights the first and last values by one half, then multiplies by the step width to obtain a better approximation.^{ [53] } The idea behind the trapezoidal rule, that more accurate approximations to the function yield better approximations to the integral, can be carried further: Simpson's rule approximates the integrand by a piecewise quadratic function.^{ [54] }
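A minimal sketch comparing the three rules on the test integral ∫₀¹ eˣ dx = e − 1 (the integrand and the subinterval count are arbitrary choices for illustration):

```python
import math

# Compare the left-rectangle, trapezoidal, and Simpson's rules with the
# same number of subintervals; the error shrinks from rule to rule.
def left_riemann(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

def trapezoidal(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    h = (b - a) / n  # n must be even
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

exact = math.e - 1
for rule in (left_riemann, trapezoidal, simpson):
    approx = rule(math.exp, 0.0, 1.0, 100)
    print(rule.__name__, abs(approx - exact))
```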

Riemann sums, the trapezoidal rule, and Simpson's rule are examples of a family of quadrature rules called the Newton–Cotes formulas. The degree *n* Newton–Cotes quadrature rule approximates the function on each subinterval by a degree *n* polynomial, chosen to interpolate the values of the function on the interval.^{ [55] } Higher degree Newton–Cotes approximations can be more accurate, but they require more function evaluations, and they can suffer from numerical inaccuracy due to Runge's phenomenon. One solution to this problem is Clenshaw–Curtis quadrature, in which the integrand is approximated by expanding it in terms of Chebyshev polynomials.

Romberg's method halves the step widths incrementally, giving trapezoid approximations denoted by *T*(*h*_{0}), *T*(*h*_{1}), and so on, where *h*_{k+1} is half of *h*_{k}. For each new step size, only half the new function values need to be computed; the others carry over from the previous size. The method then interpolates a polynomial through the approximations and extrapolates to *T*(0). Gaussian quadrature evaluates the function at the roots of a set of orthogonal polynomials.^{ [56] } An *n*-point Gaussian method is exact for polynomials of degree up to 2*n* − 1.
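A compact sketch of the Romberg tableau described above (the `romberg` helper and its fixed number of levels are illustrative choices, not a library routine):

```python
import math

# Romberg integration: halve the step width to get trapezoid estimates
# T(h_0), T(h_1), ..., then apply Richardson extrapolation toward T(0).
def romberg(f, a, b, levels=5):
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h /= 2
        # Only the new midpoints need fresh function evaluations.
        new = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * new
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

approx = romberg(math.sin, 0.0, math.pi)
print(approx)  # close to the exact value 2
```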

The computation of higher-dimensional integrals (for example, volume calculations) makes important use of such alternatives as Monte Carlo integration.^{ [57] }
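A minimal Monte Carlo sketch for a two-dimensional integral; the integrand x² + y², the sample count, and the seed are arbitrary choices for illustration:

```python
import random

# Monte Carlo estimate of the volume under z = x^2 + y^2 over the unit
# square; the exact value of the double integral is 2/3.
random.seed(12345)
n = 200_000
total = 0.0
for _ in range(n):
    x, y = random.random(), random.random()
    total += x * x + y * y
estimate = total / n  # mean value times the region's area (here 1)
print(estimate)  # near 2/3, with O(1/sqrt(n)) statistical error
```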

The area of an arbitrary two-dimensional shape can be determined using a measuring instrument called a planimeter. The volume of irregular objects can be measured with precision by the fluid displaced as the object is submerged.

Area can sometimes be found via geometrical compass-and-straightedge constructions of an equivalent square.

- ↑ Integral calculus is a very well established mathematical discipline for which there are many sources. See Apostol 1967 and Anton, Bivens & Davis 2016, for example.

In calculus, an **antiderivative**, **inverse derivative**, **primitive function**, **primitive integral** or **indefinite integral** of a function *f* is a differentiable function *F* whose derivative is equal to the original function *f*. This can be stated symbolically as *F*′ = *f*. The process of solving for antiderivatives is called **antidifferentiation**, and its opposite operation is called *differentiation*, which is the process of finding a derivative. Antiderivatives are often denoted by capital Roman letters such as *F* and *G*.

**Calculus**, originally called **infinitesimal calculus** or "the calculus of infinitesimals", is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations.

In the branch of mathematics known as real analysis, the **Riemann integral**, created by Bernhard Riemann, was the first rigorous definition of the integral of a function on an interval. It was presented to the faculty at the University of Göttingen in 1854, but not published in a journal until 1868. For many functions and practical applications, the Riemann integral can be evaluated by the fundamental theorem of calculus or approximated by numerical integration.

In mathematics, an infinite series of numbers is said to **converge absolutely** if the sum of the absolute values of the summands is finite. More precisely, a real or complex series $\sum_{n=0}^{\infty} a_n$ is said to **converge absolutely** if $\sum_{n=0}^{\infty} |a_n| = L$ for some real number $L$. Similarly, an improper integral of a function, $\int f(x)\,dx$, is said to converge absolutely if the integral of the absolute value of the integrand is finite—that is, if $\int |f(x)|\,dx$ is a finite number.
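A quick numerical illustration with the alternating harmonic series, which converges (to ln 2) but not absolutely:

```python
import math

# The alternating harmonic series sum of (-1)^{n+1}/n converges to ln 2,
# but the sums of the absolute values |a_n| = 1/n grow without bound.
N = 100_000
signed = sum((-1) ** (n + 1) / n for n in range(1, N + 1))
absolute = sum(1.0 / n for n in range(1, N + 1))

print(signed)    # near ln 2 = 0.6931...
print(absolute)  # roughly ln N + 0.5772..., and still growing with N
```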

In calculus, and more generally in mathematical analysis, **integration by parts** or **partial integration** is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation.
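Stated as a formula, with a standard worked example (taking u = x and dv = eˣ dx):

```latex
\int u \, dv = uv - \int v \, du
% Example: with u = x and dv = e^x \, dx (so du = dx and v = e^x):
\int x e^x \, dx = x e^x - \int e^x \, dx = (x - 1) e^x + C
```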

In mathematics, a **Riemann sum** is a certain kind of approximation of an integral by a finite sum. It is named after nineteenth century German mathematician Bernhard Riemann. One very common application is approximating the area of functions or lines on a graph, but also the length of curves and other approximations.

In the mathematical fields of differential geometry and tensor calculus, **differential forms** are an approach to multivariable calculus that is independent of coordinates. Differential forms provide a unified approach to define integrands over curves, surfaces, solids, and higher-dimensional manifolds. The modern notion of differential forms was pioneered by Élie Cartan. It has many applications, especially in geometry, topology and physics.

In mathematics, the **Riemann–Stieltjes integral** is a generalization of the Riemann integral, named after Bernhard Riemann and Thomas Joannes Stieltjes. The definition of this integral was first published in 1894 by Stieltjes. It serves as an instructive and useful precursor of the Lebesgue integral, and an invaluable tool in unifying equivalent forms of statistical theorems that apply to discrete and continuous probability.

In mathematical analysis, an **improper integral** is the limit of a definite integral as an endpoint of the interval(s) of integration approaches either a specified real number, $\infty$, or $-\infty$, or in some instances as both endpoints approach limits. Such an integral is often written symbolically just like a standard definite integral, in some cases with *infinity* as a limit of integration.

In measure theory, Lebesgue's **dominated convergence theorem** provides sufficient conditions under which almost everywhere convergence of a sequence of functions implies convergence in the *L*^{1} norm. Its power and utility are two of the primary theoretical advantages of Lebesgue integration over Riemann integration.

In measure-theoretic analysis and related branches of mathematics, **Lebesgue–Stieltjes integration** generalizes Riemann–Stieltjes and Lebesgue integration, preserving the many advantages of the former in a more general measure-theoretic framework. The Lebesgue–Stieltjes integral is the ordinary Lebesgue integral with respect to a measure known as the Lebesgue–Stieltjes measure, which may be associated to any function of bounded variation on the real line. The Lebesgue–Stieltjes measure is a regular Borel measure, and conversely every regular Borel measure on the real line is of this kind.

In mathematics, the **Henstock–Kurzweil integral** or **generalized Riemann integral** or **gauge integral** – also known as the (narrow) **Denjoy integral**, **Luzin integral** or **Perron integral**, but not to be confused with the more general wide Denjoy integral – is one of a number of definitions of the integral of a function. It is a generalization of the Riemann integral, and in some situations is more general than the Lebesgue integral. In particular, a function is Lebesgue integrable if and only if the function and its absolute value are Henstock–Kurzweil integrable.

In mathematics, the **Riemann–Liouville integral** associates with a real function another function I^{α} f of the same kind for each value of the parameter α > 0. The integral is a manner of generalization of the repeated antiderivative of f in the sense that for positive integer values of α, I^{α} f is an iterated antiderivative of f of order α. The Riemann–Liouville integral is named for Bernhard Riemann and Joseph Liouville, the latter of whom was the first to consider the possibility of fractional calculus in 1832. The operator agrees with the **Euler transform**, after Leonhard Euler, when applied to analytic functions. It was generalized to arbitrary dimensions by Marcel Riesz, who introduced the Riesz potential.
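The standard defining formula (stated here for a base point a, as commonly given, rather than quoted from this article's sources) is:

```latex
% Riemann–Liouville integral of order \alpha > 0 with base point a:
(I^{\alpha} f)(x) = \frac{1}{\Gamma(\alpha)} \int_a^x (x - t)^{\alpha - 1} f(t) \, dt
% For \alpha = 1 this reduces to the ordinary antiderivative \int_a^x f(t) \, dt,
% and the semigroup law I^{\alpha} I^{\beta} = I^{\alpha + \beta} expresses iterated integration.
```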

In real analysis, a branch of mathematics, the **Darboux integral** is constructed using **Darboux sums** and is one possible definition of the integral of a function. Darboux integrals are equivalent to Riemann integrals, meaning that a function is Darboux-integrable if and only if it is Riemann-integrable, and the values of the two integrals, if they exist, are equal. The definition of the Darboux integral has the advantage of being easier to apply in computations or proofs than that of the Riemann integral. Consequently, introductory textbooks on calculus and real analysis often develop Riemann integration using the Darboux integral, rather than the true Riemann integral. Moreover, the definition is readily extended to defining Riemann–Stieltjes integration. Darboux integrals are named after their inventor, Gaston Darboux.

In mathematics, the **Riemann–Lebesgue lemma**, named after Bernhard Riemann and Henri Lebesgue, states that the Fourier transform or Laplace transform of an *L*^{1} function vanishes at infinity. It is of importance in harmonic analysis and asymptotic analysis.

In mathematics, the **regulated integral** is a definition of integration for regulated functions, which are defined to be uniform limits of step functions. The use of the regulated integral instead of the Riemann integral has been advocated by Nicolas Bourbaki and Jean Dieudonné.

A **product integral** is any product-based counterpart of the usual sum-based integral of calculus. The first product integral was developed by the mathematician Vito Volterra in 1887 to solve systems of linear differential equations. Other examples of product integrals are the geometric integral, the bigeometric integral, and some other integrals of non-Newtonian calculus.

The **fundamental theorem of calculus** is a theorem that links the concept of differentiating a function with the concept of integrating a function.

In mathematics, the integral of a non-negative function of a single variable can be regarded, in the simplest case, as the area between the graph of that function and the *x*-axis. The **Lebesgue integral** extends the integral to a larger class of functions. It also extends the domains on which these functions can be defined.


- ↑ Burton 2011, p. 117.
- ↑ Heath 2002.
- ↑ Katz 2009, pp. 201–204.
- ↑ Katz 2009, pp. 284–285.
- ↑ Katz 2009, pp. 305–306.
- ↑ Katz 2009, pp. 516–517.
- ↑ Struik 1986, pp. 215–216.
- ↑ Katz 2009, pp. 536–537.
- ↑ Burton 2011, pp. 385–386.
- ↑ Stillwell 1989, p. 131.
- ↑ Katz 2009, pp. 628–629.
- ↑ Katz 2009, p. 785.
- ↑ Burton 2011, p. 414; Leibniz 1899, p. 154.
- ↑ Cajori 1929, pp. 249–250; Fourier 1822, §231.
- ↑ Cajori 1929, p. 246.
- ↑ Cajori 1929, p. 182.
- ↑ W3C 2006.
- ↑ Apostol 1967, p. 74.
- ↑ Anton, Bivens & Davis 2016, p. 259.
- ↑ Apostol 1967, p. 69.
- ↑ Anton, Bivens & Davis 2016, pp. 286–287.
- ↑ Krantz 1991, p. 173.
- ↑ Rudin 1987, p. 5.
- ↑ Siegmund-Schultze 2008, p. 796.
- ↑ Folland 1999, pp. 57–58.
- ↑ Bourbaki 2004, p. IV.43.
- ↑ Lieb & Loss 2001, p. 14.
- ↑ Folland 1999, p. 53.
- ↑ Rudin 1987, p. 25.
- ↑ Apostol 1967, p. 80.
- ↑ Rudin 1987, p. 54.
- ↑ Apostol 1967, p. 81.
- ↑ Rudin 1987, p. 63.
- ↑ Apostol 1967, p. 202.
- ↑ Apostol 1967, p. 205.
- ↑ Apostol 1967, p. 416.
- ↑ Apostol 1967, p. 418.
- ↑ Anton, Bivens & Davis 2016, p. 895.
- ↑ Anton, Bivens & Davis 2016, p. 896.
- ↑ Anton, Bivens & Davis 2016, p. 897.
- ↑ Anton, Bivens & Davis 2016, p. 980.
- ↑ Anton, Bivens & Davis 2016, p. 981.
- ↑ Anton, Bivens & Davis 2016, p. 697.
- ↑ Anton, Bivens & Davis 2016, p. 991.
- ↑ Anton, Bivens & Davis 2016, p. 1014.
- ↑ Anton, Bivens & Davis 2016, p. 1024.
- ↑ Feller 1966, p. 1.
- ↑ Feller 1966, p. 3.
- ↑ Apostol 1967, pp. 88–89.
- ↑ Apostol 1967, pp. 111–114.
- ↑ Anton, Bivens & Davis 2016, p. 306.
- ↑ Apostol 1967, p. 116.
- ↑ Dahlquist & Björck 2008, pp. 519–520.
- ↑ Dahlquist & Björck 2008, pp. 522–524.
- ↑ Kahaner, Moler & Nash 1989, p. 144.
- ↑ Kahaner, Moler & Nash 1989, p. 147.
- ↑ Kahaner, Moler & Nash 1989, pp. 139–140.

- Anton, Howard; Bivens, Irl C.; Davis, Stephen (2016), *Calculus: Early Transcendentals* (11th ed.), John Wiley & Sons, ISBN 978-1-118-88382-2
- Apostol, Tom M. (1967), *Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra* (2nd ed.), Wiley, ISBN 978-0-471-00005-1
- Bourbaki, Nicolas (2004), *Integration I*, Springer-Verlag, ISBN 3-540-41129-1. In particular chapters III and IV.
- Burton, David M. (2011), *The History of Mathematics: An Introduction* (7th ed.), McGraw-Hill, ISBN 978-0-07-338315-6
- Cajori, Florian (1929), *A History of Mathematical Notations Volume II*, Open Court Publishing, ISBN 978-0-486-67766-8
- Dahlquist, Germund; Björck, Åke (2008), "Chapter 5: Numerical Integration", *Numerical Methods in Scientific Computing, Volume I*, Philadelphia: SIAM, archived from the original on 2007-06-15
- Feller, William (1966), *An Introduction to Probability Theory and Its Applications*, John Wiley & Sons
- Folland, Gerald B. (1999), *Real Analysis: Modern Techniques and Their Applications* (2nd ed.), John Wiley & Sons, ISBN 0-471-31716-0
- Fourier, Jean Baptiste Joseph (1822), *Théorie analytique de la chaleur*, Chez Firmin Didot, père et fils, §231. Available in translation as Fourier, Joseph (1878), *The Analytical Theory of Heat*, Freeman, Alexander (trans.), Cambridge University Press, pp. 200–201
- Heath, T. L., ed. (2002), *The Works of Archimedes*, Dover, ISBN 978-0-486-42084-4. (Originally published by Cambridge University Press, 1897, based on J. L. Heiberg's Greek version.)
- Hildebrandt, T. H. (1953), "Integration in abstract spaces", *Bulletin of the American Mathematical Society*, **59** (2): 111–139, doi:10.1090/S0002-9904-1953-09694-X, ISSN 0273-0979
- Kahaner, David; Moler, Cleve; Nash, Stephen (1989), "Chapter 5: Numerical Quadrature", *Numerical Methods and Software*, Prentice Hall, ISBN 978-0-13-627258-8
- Kallio, Bruce Victor (1966), *A History of the Definite Integral* (PDF) (M.A. thesis), University of British Columbia, archived from the original (PDF) on 2014-03-05, retrieved 2014-02-28
- Katz, Victor J. (2009), *A History of Mathematics: An Introduction*, Addison-Wesley, ISBN 0-321-38700-7
- Krantz, Steven G. (1991), *Real Analysis and Foundations*, CRC Press, ISBN 0-8493-7156-2
- Leibniz, Gottfried Wilhelm (1899), Gerhardt, Karl Immanuel (ed.), *Der Briefwechsel von Gottfried Wilhelm Leibniz mit Mathematikern. Erster Band*, Berlin: Mayer & Müller
- Lieb, Elliott; Loss, Michael (2001), *Analysis*, Graduate Studies in Mathematics, **14** (2nd ed.), American Mathematical Society, ISBN 978-0821827833
- Rudin, Walter (1987), "Chapter 1: Abstract Integration", *Real and Complex Analysis* (International ed.), McGraw-Hill, ISBN 978-0-07-100276-9
- Saks, Stanisław (1964), *Theory of the Integral* (English translation by L. C. Young, with two additional notes by Stefan Banach; 2nd revised ed.), New York: Dover
- Siegmund-Schultze, Reinhard (2008), "Henri Lebesgue", in Timothy Gowers; June Barrow-Green; Imre Leader (eds.), *Princeton Companion to Mathematics*, Princeton University Press, ISBN 978-0-691-11880-2
- Stillwell, John (1989), *Mathematics and Its History*, Springer, ISBN 0-387-96981-0
- Stoer, Josef; Bulirsch, Roland (2002), "Topics in Integration", *Introduction to Numerical Analysis* (3rd ed.), Springer, ISBN 978-0-387-95452-3
- Struik, Dirk Jan, ed. (1986), *A Source Book in Mathematics, 1200–1800*, Princeton, New Jersey: Princeton University Press, ISBN 0-691-08404-1
- W3C (2006), *Arabic mathematical notation*

Wikibooks has a book on the topic of: *Calculus*

- "Integral", *Encyclopedia of Mathematics*, EMS Press, 2001 [1994]
- Online Integral Calculator, Wolfram Alpha
- Keisler, H. Jerome, *Elementary Calculus: An Approach Using Infinitesimals*, University of Wisconsin
- Stroyan, K. D., *A Brief Introduction to Infinitesimal Calculus*, University of Iowa
- Mauch, Sean, *Sean's Applied Math Book*, CIT, an online textbook that includes a complete introduction to calculus
- Crowell, Benjamin, *Calculus*, Fullerton College, an online textbook
- Garrett, Paul, *Notes on First-Year Calculus*
- Hussain, Faraz, *Understanding Calculus*, an online textbook
- Johnson, William Woolsey (1909), *Elementary Treatise on Integral Calculus*, link from HathiTrust
- Kowalk, W. P., *Integration Theory*, University of Oldenburg. A new concept to an old problem. Online textbook
- Sloughter, Dan, *Difference Equations to Differential Equations*, an introduction to calculus
- Numerical Methods of Integration at *Holistic Numerical Methods Institute*
- P. S. Wang, *Evaluation of Definite Integrals by Symbolic Manipulation* (1972), a cookbook of definite integral techniques

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.