In calculus, interchange of the order of integration is a methodology that transforms iterated integrals (or multiple integrals through the use of Fubini's theorem) of functions into other, hopefully simpler, integrals by changing the order in which the integrations are performed. In some cases, the order of integration can be validly interchanged; in others it cannot.
The problem for examination is evaluation of an integral of the form
$$\iint_D f(x,y)\,dx\,dy,$$
where D is some two-dimensional area in the xy–plane. For some functions f straightforward integration is feasible, but where that is not true, the integral can sometimes be reduced to simpler form by changing the order of integration. The difficulty with this interchange is determining the change in description of the domain D.
The method is also applicable to other multiple integrals. [1] [2]
Sometimes, even though a full evaluation is difficult, or perhaps requires a numerical integration, a double integral can be reduced to a single integration, as illustrated next. Reduction to a single integration makes a numerical evaluation much easier and more efficient.
Consider the iterated integral
$$\int_a^z \left(\int_a^x h(y)\,dy\right) dx.$$
In this expression, the second integral is calculated first with respect to y while x is held constant: a strip of width dx is integrated over the y-direction, adding up infinitely many rectangles of width dy along the y-axis. This forms a three-dimensional slice dx wide along the x-axis, extending from y = a to y = x along the y-axis, with height z = h(y). Because the thickness dx is infinitesimal, x varies only infinitesimally across the slice and can be treated as constant. [3] This integration is as shown in the left panel of Figure 1, but it is inconvenient, especially when the function h(y) is not easily integrated. The integral can be reduced to a single integration by reversing the order of integration as shown in the right panel of the figure. To accomplish this interchange, the strip of width dy is first integrated from the line x = y to the limit x = z, and then the result is integrated from y = a to y = z, resulting in:
$$\int_a^z \left(\int_a^x h(y)\,dy\right) dx = \int_a^z h(y)\left(\int_y^z dx\right) dy = \int_a^z (z-y)\,h(y)\,dy.$$
This result can be seen to be an example of the formula for integration by parts, as stated below: [4]
$$\int_a^z f(x)\,g'(x)\,dx = \Big[f(x)\,g(x)\Big]_a^z - \int_a^z f'(x)\,g(x)\,dx$$
Substitute:
$$f(x) = \int_a^x h(y)\,dy \qquad \text{and} \qquad g(x) = x - z,$$
so that f'(x) = h(x) and g'(x) = 1. The boundary term vanishes because f(a) = 0 and g(z) = 0, which gives the result.
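The reduction derived above can be checked symbolically. The following minimal SymPy sketch (the sample integrand h(y) = sin y is an illustrative choice, not taken from the text) compares the double integral with the single integral obtained above:

```python
import sympy as sp

# Symbols for the limits and integration variables.
a, z, x, y = sp.symbols('a z x y', real=True)

# Illustrative integrand; any h(y) that SymPy can integrate would do.
h = sp.sin(y)

# Original order: inner integral over y from a to x, outer over x from a to z.
double_form = sp.integrate(sp.integrate(h, (y, a, x)), (x, a, z))

# Reversed order collapses to a single integral of (z - y) * h(y).
single_form = sp.integrate((z - y) * h, (y, a, z))

# The difference simplifies to zero, confirming the interchange for this h.
print(sp.simplify(double_form - single_form))  # expected: 0
```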
For application to principal-value integrals, see Whittaker and Watson, [5] Gakhov, [6] Lu, [7] or Zwillinger. [8] See also the discussion of the Poincaré-Bertrand transformation in Obolashvili. [9] An example where the order of integration cannot be exchanged is given by Kanwal: [10]
while:
The second form is evaluated using a partial fraction expansion and an evaluation using the Sokhotski–Plemelj formula: [11]
The notation indicates a Cauchy principal value. See Kanwal. [10]
A discussion of the basis for reversing the order of integration is found in the book Fourier Analysis by T.W. Körner. [12] He introduces his discussion with an example where interchange of integration leads to two different answers because the conditions of Theorem II below are not satisfied. Here is the example:
$$\int_1^\infty \frac{x^2-y^2}{(x^2+y^2)^2}\,dy = \left[\frac{y}{x^2+y^2}\right]_{y=1}^{y=\infty} = -\frac{1}{1+x^2} \;\Rightarrow\; \int_1^\infty \left(\int_1^\infty \frac{x^2-y^2}{(x^2+y^2)^2}\,dy\right) dx = -\frac{\pi}{4},$$
$$\int_1^\infty \frac{x^2-y^2}{(x^2+y^2)^2}\,dx = \left[\frac{-x}{x^2+y^2}\right]_{x=1}^{x=\infty} = \frac{1}{1+y^2} \;\Rightarrow\; \int_1^\infty \left(\int_1^\infty \frac{x^2-y^2}{(x^2+y^2)^2}\,dx\right) dy = \frac{\pi}{4}.$$
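A quick symbolic check of this asymmetry, as a minimal SymPy sketch using the integrand and limits shown above:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = (x**2 - y**2) / (x**2 + y**2)**2

# Integrate over y first, then over x.
dy_first = sp.integrate(sp.integrate(f, (y, 1, sp.oo)), (x, 1, sp.oo))

# Integrate over x first, then over y.
dx_first = sp.integrate(sp.integrate(f, (x, 1, sp.oo)), (y, 1, sp.oo))

# The two orders give different finite answers.
print(dy_first, dx_first)  # expected: -pi/4  pi/4
```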
Two basic theorems governing admissibility of the interchange are quoted below from Chaudhry and Zubair: [13]
Theorem I — Let f(x, y) be a continuous function of constant sign defined for a ≤ x < ∞, c ≤ y < ∞, and let the integrals
$$J(y) := \int_a^\infty f(x,y)\,dx \qquad \text{and} \qquad J^*(x) := \int_c^\infty f(x,y)\,dy,$$
regarded as functions of the corresponding parameter, be, respectively, continuous for c ≤ y < ∞ and a ≤ x < ∞. Then if at least one of the iterated integrals
$$\int_c^\infty J(y)\,dy \qquad \text{and} \qquad \int_a^\infty J^*(x)\,dx$$
converges, the other integral also converges and their values coincide.
Theorem II — Let f(x, y) be continuous for a ≤ x < ∞, c ≤ y < ∞, and let the integrals
$$J(y) := \int_a^\infty f(x,y)\,dx \qquad \text{and} \qquad J^*(x) := \int_c^\infty f(x,y)\,dy$$
be, respectively, uniformly convergent on every finite interval c ≤ y < C and on every finite interval a ≤ x < A. Then if at least one of the iterated integrals
$$\int_c^\infty \int_a^\infty |f(x,y)|\,dx\,dy \qquad \text{and} \qquad \int_a^\infty \int_c^\infty |f(x,y)|\,dy\,dx$$
converges, the iterated integrals
$$\int_c^\infty \int_a^\infty f(x,y)\,dx\,dy \qquad \text{and} \qquad \int_a^\infty \int_c^\infty f(x,y)\,dy\,dx$$
also converge and their values are equal.
The most important theorem for the applications is quoted from Protter and Morrey: [14]
Theorem — Suppose F is a region given by $F = \{(x, y)\colon a \le x \le b,\; p(x) \le y \le q(x)\}$ where p and q are continuous and p(x) ≤ q(x) for a ≤ x ≤ b. Suppose that f(x, y) is continuous on F. Then
$$\iint_F f(x,y)\,dA = \int_a^b \int_{p(x)}^{q(x)} f(x,y)\,dy\,dx.$$
The corresponding result holds if F has the representation $F = \{(x, y)\colon c \le y \le d,\; r(y) \le x \le s(y)\}$ with r and s continuous and r(y) ≤ s(y); in that case,
$$\iint_F f(x,y)\,dA = \int_c^d \int_{r(y)}^{s(y)} f(x,y)\,dx\,dy.$$
In other words, both iterated integrals, when computable, are equal to the double integral and therefore equal to each other.
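As a concrete illustration of the theorem (the region and integrand here are illustrative choices, not taken from the sources above), let F be the triangle 0 ≤ x ≤ 1, 0 ≤ y ≤ x and take f(x, y) = xy. Describing F first by vertical strips and then by horizontal strips gives the same value either way:
$$\int_0^1 \int_0^x xy\,dy\,dx = \int_0^1 \frac{x^3}{2}\,dx = \frac{1}{8}, \qquad \int_0^1 \int_y^1 xy\,dx\,dy = \int_0^1 \frac{y(1-y^2)}{2}\,dy = \frac{1}{8}.$$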
In mathematics, convolution is a mathematical operation on two functions that produces a third function that expresses how the shape of one is modified by the other. The term convolution refers to both the result function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The choice of which function is reflected and shifted before the integral does not change the integral result. The integral is evaluated for all values of shift, producing the convolution function.
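In symbols, with the standard definition on the real line, the convolution of f and g is
$$(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\,g(t-\tau)\,d\tau = \int_{-\infty}^{\infty} f(t-\tau)\,g(\tau)\,d\tau,$$
the second equality expressing the symmetry noted above.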
In mathematics, an integral is the continuous analog of a sum, which is used to calculate areas, volumes, and their generalizations. Integration, the process of computing an integral, is one of the two fundamental operations of calculus, the other being differentiation. Integration started as a method to solve problems in mathematics and physics, such as finding the area under a curve, or determining displacement from velocity. Today integration is used in a wide variety of scientific fields.
In mathematical analysis, the Dirac delta distribution, also known as the unit impulse, is a generalized function or distribution over the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one.
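Written as formal integral identities, these defining properties are
$$\delta(x) = 0 \ \text{ for } x \neq 0, \qquad \int_{-\infty}^{\infty} \delta(x)\,dx = 1,$$
from which the sifting property $\int_{-\infty}^{\infty} f(x)\,\delta(x-a)\,dx = f(a)$ follows for suitable test functions f.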
In physics, engineering and mathematics, the Fourier transform (FT) is an integral transform that converts a function into a form that describes the frequencies present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made the Fourier transform is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches.
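In one common convention (several normalizations are in use), the Fourier transform of an integrable function f is
$$\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\,e^{-2\pi i x \xi}\,dx,$$
where ξ is the frequency variable.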
In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions is the pointwise product of their Fourier transforms. More generally, convolution in one domain equals point-wise multiplication in the other domain. Other versions of the convolution theorem are applicable to various Fourier-related transforms.
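With the convention above, the statement reads
$$\widehat{f * g}(\xi) = \hat{f}(\xi)\,\hat{g}(\xi),$$
and its standard proof is itself an interchange of the order of integration, justified by Fubini's theorem when f and g are absolutely integrable.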
In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation.
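In its indefinite form the rule is usually written
$$\int u(x)\,v'(x)\,dx = u(x)\,v(x) - \int u'(x)\,v(x)\,dx,$$
and its definite version is the formula used in the derivation earlier in this article.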
In vector calculus, Green's theorem relates a line integral around a simple closed curve C to a double integral over the plane region D bounded by C. It is the two-dimensional special case of Stokes' theorem.
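In its usual form, with C positively oriented and L, M having continuous partial derivatives on an open region containing D,
$$\oint_C \bigl(L\,dx + M\,dy\bigr) = \iint_D \left(\frac{\partial M}{\partial x} - \frac{\partial L}{\partial y}\right) dx\,dy.$$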
In multivariable calculus, an iterated integral is the result of applying integrals to a function of more than one variable in such a way that each of the integrals considers some of the variables as given constants. For example, the function f(x, y), if y is considered a given parameter, can be integrated with respect to x, giving $\int f(x,y)\,dx$. The result is a function of y and therefore its integral can be considered. If this is done, the result is the iterated integral
$$\int \left(\int f(x,y)\,dx\right) dy.$$
In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.
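With the Fourier transform convention given earlier, the formula can be written, for sufficiently well-behaved f, as
$$\sum_{n=-\infty}^{\infty} f(n) = \sum_{k=-\infty}^{\infty} \hat{f}(k).$$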
The Gaussian integral, also known as the Euler–Poisson integral, is the integral of the Gaussian function over the entire real line. Named after the German mathematician Carl Friedrich Gauss, the integral is
$$\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.$$
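Its classical evaluation is itself an exercise in multiple integration: squaring the integral, passing to a double integral over the plane, and changing to polar coordinates gives
$$\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^{2} = \iint_{\mathbb{R}^2} e^{-(x^2+y^2)}\,dx\,dy = \int_0^{2\pi}\int_0^{\infty} e^{-r^2}\,r\,dr\,d\theta = \pi.$$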
In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function u(t) of a real variable and produces another function of a real variable H(u)(t). The Hilbert transform is given by the Cauchy principal value of the convolution with the function 1/(πt). The Hilbert transform has a particularly simple representation in the frequency domain: it imparts a phase shift of ±90° (π/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency. The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal u(t). The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions.
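Explicitly, in the most common sign convention, and with p.v. denoting the Cauchy principal value discussed above,
$$H(u)(t) = \frac{1}{\pi}\,\mathrm{p.v.}\int_{-\infty}^{\infty} \frac{u(\tau)}{t-\tau}\,d\tau.$$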
In mathematics, Laplace's method, named after Pierre-Simon Laplace, is a technique used to approximate integrals of the form
$$\int_a^b e^{M f(x)}\,dx,$$
where f is a twice-differentiable function and M is a large number.
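Under the usual assumptions that f attains a unique maximum at an interior point x₀ with f''(x₀) < 0, the leading-order approximation is
$$\int_a^b e^{M f(x)}\,dx \;\approx\; \sqrt{\frac{2\pi}{M\,|f''(x_0)|}}\; e^{M f(x_0)} \qquad \text{as } M \to \infty.$$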
In the mathematical field of complex analysis, contour integration is a method of evaluating certain integrals along paths in the complex plane.
In mathematics, there are several integrals known as the Dirichlet integral, after the German mathematician Peter Gustav Lejeune Dirichlet, one of which is the improper integral of the sinc function over the positive real line:
$$\int_0^{\infty} \frac{\sin x}{x}\,dx = \frac{\pi}{2}.$$
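One classical, if formal, evaluation of this integral proceeds precisely by interchanging an order of integration: writing $1/x = \int_0^\infty e^{-xt}\,dt$ and exchanging the x- and t-integrations gives
$$\int_0^{\infty} \frac{\sin x}{x}\,dx = \int_0^{\infty}\!\int_0^{\infty} e^{-xt}\sin x\,dt\,dx = \int_0^{\infty}\!\int_0^{\infty} e^{-xt}\sin x\,dx\,dt = \int_0^{\infty} \frac{dt}{1+t^2} = \frac{\pi}{2}.$$
The interchange requires care to justify rigorously, since the integrand is not absolutely integrable; the usual rigorous treatments first insert a convergence factor.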
In mathematics (specifically multivariable calculus), a multiple integral is a definite integral of a function of several real variables, for instance, f(x, y) or f(x, y, z). Physically, such integrals arise as integrals over a surface S, a volume V, and so on, with the variables representing quantities such as time and position.
In mathematics, in the area of complex analysis, Nachbin's theorem is commonly used to establish a bound on the growth rates for an analytic function. This article provides a brief review of growth rates, including the idea of a function of exponential type. Classification of growth rates based on type help provide a finer tool than big O or Landau notation, since a number of theorems about the analytic structure of the bounded function and its integral transforms can be stated. In particular, Nachbin's theorem may be used to give the domain of convergence of the generalized Borel transform, given below.
In mathematical analysis, the Schur test, named after German mathematician Issai Schur, is a bound on the operator norm of an integral operator in terms of its Schwartz kernel.
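In one common elementary form (conventions vary), the test states that if the kernel K satisfies
$$\int |K(x,y)|\,dy \le A \ \text{ for almost every } x \qquad \text{and} \qquad \int |K(x,y)|\,dx \le B \ \text{ for almost every } y,$$
then the operator $Tf(x) = \int K(x,y)\,f(y)\,dy$ is bounded on L2 with operator norm at most $\sqrt{AB}$.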
In mathematics, singular integrals are central to harmonic analysis and are intimately connected with the study of partial differential equations. Broadly speaking a singular integral is an integral operator
$$T(f)(x) = \int K(x,y)\,f(y)\,dy,$$
whose kernel function K : Rn × Rn → R is singular along the diagonal x = y.
In mathematics, the class of Muckenhoupt weights Ap consists of those weights ω for which the Hardy–Littlewood maximal operator is bounded on Lp(dω). Specifically, we consider functions f on Rn and their associated maximal functions M(f) defined as
$$M(f)(x) = \sup_{r>0} \frac{1}{|B_r(x)|} \int_{B_r(x)} |f(y)|\,dy,$$
where B_r(x) is the ball of radius r centred at x and |B_r(x)| denotes its Lebesgue measure.
In mathematics, singular integral operators of convolution type are the singular integral operators that arise on Rn and Tn through convolution by distributions; equivalently they are the singular integral operators that commute with translations. The classical examples in harmonic analysis are the harmonic conjugation operator on the circle, the Hilbert transform on the circle and the real line, the Beurling transform in the complex plane and the Riesz transforms in Euclidean space. The continuity of these operators on L2 is evident because the Fourier transform converts them into multiplication operators. Continuity on Lp spaces was first established by Marcel Riesz. The classical techniques include the use of Poisson integrals, interpolation theory and the Hardy–Littlewood maximal function. For more general operators, fundamental new techniques, introduced by Alberto Calderón and Antoni Zygmund in 1952, were developed by a number of authors to give general criteria for continuity on Lp spaces. This article explains the theory for the classical operators and sketches the subsequent general theory.