Functional integration is a collection of results in mathematics and physics where the domain of an integral is no longer a region of space, but a space of functions. Functional integrals arise in probability, in the study of partial differential equations, and in the path integral approach to the quantum mechanics of particles and fields.
In an ordinary integral (in the sense of Lebesgue integration) there is a function to be integrated (the integrand) and a region of space over which to integrate the function (the domain of integration). The process of integration consists of adding up the values of the integrand for each point of the domain of integration. Making this procedure rigorous requires a limiting procedure, where the domain of integration is divided into smaller and smaller regions. For each small region, the value of the integrand cannot vary much, so it may be replaced by a single value. In a functional integral the domain of integration is a space of functions. For each function, the integrand returns a value to add up. Making this procedure rigorous poses challenges that continue to be topics of current research.
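To make the limiting procedure concrete for an ordinary integral, here is a minimal Python sketch (the integrand sin and the interval [0, π] are arbitrary illustrative choices, not from the text): the domain is divided into small regions, the integrand is replaced by a single value on each region, and the contributions are summed; refining the division makes the sums converge.

```python
import math

def riemann_sum(f, a, b, n):
    """Split [a, b] into n small regions, replace f by a single value per
    region (here the midpoint value), and add up the contributions."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * h for i in range(n))

# Example: integrate sin over [0, pi]; the exact value is 2.
for n in (4, 16, 64, 256):
    print(n, riemann_sum(math.sin, 0.0, math.pi, n))
# The approximations approach 2 as the regions get smaller.
```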
Functional integration was developed by Percy John Daniell in an article of 1919 [1] and Norbert Wiener in a series of studies culminating in his articles of 1921 on Brownian motion. They developed a rigorous method (now known as the Wiener measure) for assigning a probability to a particle's random path. Richard Feynman developed another functional integral, the path integral, useful for computing the quantum properties of systems. In Feynman's path integral, the classical notion of a unique trajectory for a particle is replaced by an infinite sum of classical paths, each weighted differently according to its classical properties.
Functional integration is central to quantization techniques in theoretical physics. The algebraic properties of functional integrals are used to develop series used to calculate properties in quantum electrodynamics and the standard model of particle physics.
Whereas standard Riemann integration sums a function f(x) over a continuous range of values of x, functional integration sums a functional G[f], which can be thought of as a "function of a function", over a continuous range (or space) of functions f. Most functional integrals cannot be evaluated exactly but must be evaluated using perturbation methods. The formal definition of a functional integral is

\[
\int G[f]\,[\mathcal{D}f] \equiv \int_{-\infty}^{\infty}\!\cdots\!\int_{-\infty}^{\infty} G[f]\,\prod_x df(x).
\]
However, in most cases the functions f(x) can be written in terms of an infinite series of orthogonal functions, such as \( f(x) = \sum_n f_n H_n(x) \), and then the definition becomes

\[
\int G[f]\,[\mathcal{D}f] \equiv \int_{-\infty}^{\infty}\!\cdots\!\int_{-\infty}^{\infty} G(f_1, f_2, \ldots)\,\prod_n df_n,
\]
which is slightly more understandable. The integral is shown to be a functional integral with a capital \( \mathcal{D} \). Sometimes the argument is written in square brackets, \( \mathcal{D}[f] \), to indicate the functional dependence of the function in the functional integration measure.
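As a rough numerical illustration of the definition above, one can truncate the orthogonal expansion at finitely many modes, so that the functional integral becomes an ordinary integral over the coefficients \( f_n \). The sketch below is only a hypothetical discretization: the sine basis on [0, 1], the Gaussian weight \( \exp(-\tfrac{1}{2}\sum_n f_n^2) \), the functional \( G[f] = f(x_0)^2 \), and the mode count are all illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                     # number of retained orthogonal modes (truncation)
x0 = 0.3                   # point where the illustrative functional G[f] = f(x0)**2 is evaluated
n = np.arange(1, N + 1)
basis_at_x0 = np.sqrt(2.0) * np.sin(n * np.pi * x0)   # orthonormal sine basis on [0, 1]

# Under the normalized Gaussian weight exp(-1/2 sum_n f_n^2), the coefficients f_n
# are independent standard normals, so the functional integral over f reduces to an
# ordinary Monte Carlo average over the N coefficients.
coeffs = rng.standard_normal((200_000, N))
f_at_x0 = coeffs @ basis_at_x0

estimate = np.mean(f_at_x0**2)        # Monte Carlo average over the coefficients
exact = np.sum(basis_at_x0**2)        # closed form for the truncated expansion
print(estimate, exact)                # the two values agree to Monte Carlo accuracy
```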
Most functional integrals are actually infinite, but often the limit of the quotient of two related functional integrals can still be finite. The functional integrals that can be evaluated exactly usually start with the following Gaussian integral:

\[
\frac{\displaystyle\int \exp\left\{-\tfrac{1}{2}\int f(x)\,K(x,y)\,f(y)\,dx\,dy + \int J(x)\,f(x)\,dx\right\}[\mathcal{D}f]}
{\displaystyle\int \exp\left\{-\tfrac{1}{2}\int f(x)\,K(x,y)\,f(y)\,dx\,dy\right\}[\mathcal{D}f]}
= \exp\left\{\tfrac{1}{2}\int J(x)\,K^{-1}(x,y)\,J(y)\,dx\,dy\right\},
\]
in which \( K(x,y) = K(y,x) \). By functionally differentiating this with respect to J(x) and then setting J to 0, this becomes an exponential multiplied by a monomial in f. To see this, let's use the following notation:

\[
G[f, J] = -\tfrac{1}{2}\int f(x)\,K(x,y)\,f(y)\,dx\,dy + \int J(x)\,f(x)\,dx,
\qquad
W[J] \equiv \frac{\int \exp\{G[f,J]\}\,[\mathcal{D}f]}{\int \exp\{G[f,0]\}\,[\mathcal{D}f]}.
\]
With this notation the first equation can be written as:

\[
W[J] = \exp\left\{\tfrac{1}{2}\int J(x)\,K^{-1}(x,y)\,J(y)\,dx\,dy\right\}.
\]
Now, taking functional derivatives of the definition of W[J] and then evaluating at J = 0, one obtains:

\[
\left.\frac{\delta^n W[J]}{\delta J(x_1)\cdots\delta J(x_n)}\right|_{J=0}
= \frac{\int f(x_1)\cdots f(x_n)\,\exp\{G[f,0]\}\,[\mathcal{D}f]}{\int \exp\{G[f,0]\}\,[\mathcal{D}f]},
\]
which is the anticipated result. Moreover, by using the first equation one arrives at the useful result:

\[
\left.\frac{\delta^2 W[J]}{\delta J(x_1)\,\delta J(x_2)}\right|_{J=0} = K^{-1}(x_1, x_2).
\]
Putting these results together and going back to the original notation, we have:

\[
\frac{\int f(x_1)\,f(x_2)\,\exp\left\{-\tfrac{1}{2}\int f(x)\,K(x,y)\,f(y)\,dx\,dy\right\}[\mathcal{D}f]}
{\int \exp\left\{-\tfrac{1}{2}\int f(x)\,K(x,y)\,f(y)\,dx\,dy\right\}[\mathcal{D}f]}
= K^{-1}(x_1, x_2).
\]
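After discretization, the kernel \( K(x,y) \) becomes a symmetric positive-definite matrix and the ratio of functional integrals above becomes an ordinary Gaussian integral, so the two-point result can be checked directly. The following sketch (with an arbitrary 2×2 matrix standing in for K and a brute-force quadrature grid, both illustrative choices) compares \( \langle f_i f_j \rangle \) with \( (K^{-1})_{ij} \).

```python
import numpy as np

# Discretized check of  <f_i f_j> = (K^{-1})_{ij}  on a two-point "grid",
# where the ratio of functional integrals becomes an ordinary double integral.
K = np.array([[2.0, 0.6],
              [0.6, 1.5]])           # symmetric positive-definite stand-in for K(x, y)

L, n = 8.0, 801                      # integration box [-L, L]^2 and grid resolution
t = np.linspace(-L, L, n)
df = t[1] - t[0]
F1, F2 = np.meshgrid(t, t, indexing="ij")

# Gaussian weight exp(-1/2 f^T K f) evaluated on the grid.
weight = np.exp(-0.5 * (K[0, 0] * F1**2 + 2 * K[0, 1] * F1 * F2 + K[1, 1] * F2**2))

Z = weight.sum() * df**2             # denominator: the normalizing Gaussian integral

def moment(g):
    """Quadrature estimate of the ratio  int g * weight / int weight."""
    return (g * weight).sum() * df**2 / Z

two_point = np.array([[moment(F1 * F1), moment(F1 * F2)],
                      [moment(F2 * F1), moment(F2 * F2)]])

print(two_point)
print(np.linalg.inv(K))              # agrees with the quadrature result
```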
Another useful integral is the functional delta function:

\[
\int \exp\left\{ i\int f(x)\,g(x)\,dx \right\}[\mathcal{D}f] = \delta[g] = \prod_x \delta\big(g(x)\big),
\]
which is useful to specify constraints. Functional integrals can also be done over Grassmann-valued functions \( \psi(x) \), for which \( \psi(x)\psi(y) = -\psi(y)\psi(x) \); this is useful in quantum electrodynamics for calculations involving fermions.
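A minimal sketch of how Grassmann-valued integration works in practice, built only from the standard Berezin rules \( \int d\theta\,1 = 0 \) and \( \int d\theta\,\theta = 1 \) (the helper functions below are hypothetical, not from any library); it verifies the familiar fermionic Gaussian result \( \int d\bar\theta\,d\theta\,e^{-a\bar\theta\theta} = a \).

```python
from itertools import product

def gmul(a, b):
    """Multiply two Grassmann-algebra elements. An element is a dict mapping a
    sorted tuple of generator indices (a monomial, e.g. (0, 1) for theta_0 theta_1)
    to its coefficient."""
    out = {}
    for (ma, ca), (mb, cb) in product(a.items(), b.items()):
        if set(ma) & set(mb):
            continue                      # repeated generator: theta**2 = 0
        seq = ma + mb
        # Sign from sorting the generators back into canonical order.
        inversions = sum(1 for i in range(len(seq))
                           for j in range(i + 1, len(seq)) if seq[i] > seq[j])
        key = tuple(sorted(seq))
        out[key] = out.get(key, 0.0) + (-1) ** inversions * ca * cb
    return out

def gadd(a, b):
    out = dict(a)
    for m, c in b.items():
        out[m] = out.get(m, 0.0) + c
    return out

def gexp(a, n_gen):
    """exp of a Grassmann element via its power series; the series terminates
    because every generator is nilpotent."""
    result, term = {(): 1.0}, {(): 1.0}
    for k in range(1, n_gen + 1):
        term = {m: c / k for m, c in gmul(term, a).items()}
        result = gadd(result, term)
    return result

def berezin(a, gen):
    """Berezin integral over one generator: int d(theta_gen) theta_gen = 1,
    int d(theta_gen) 1 = 0; the sign comes from moving theta_gen to the front."""
    out = {}
    for m, c in a.items():
        if gen in m:
            pos = m.index(gen)
            rest = tuple(g for g in m if g != gen)
            out[rest] = out.get(rest, 0.0) + (-1) ** pos * c
    return out

# Check  int dtbar dt exp(-a tbar t) = a,  with generators t -> 0 and tbar -> 1.
a = 0.7
exponent = {(0, 1): a}            # -a * tbar * t  ==  +a * t * tbar in canonical order
integrand = gexp(exponent, n_gen=2)
print(berezin(berezin(integrand, 0), 1))   # {(): 0.7}
```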
Functional integrals where the space of integration consists of paths (ν = 1) can be defined in many different ways. The definitions fall into two different classes: the constructions derived from Wiener's theory yield an integral based on a measure, whereas the constructions following Feynman's path integral do not. Even within these two broad divisions, the integrals are not identical, that is, they are defined differently for different classes of functions.
In the Wiener integral, a probability is assigned to a class of Brownian motion paths. The class consists of the paths w that are known to go through a small region of space at a given time. The passage through different regions of space is assumed independent of each other, and the distance between any two points of the Brownian path is assumed to be Gaussian-distributed with a variance that depends on the time t and on a diffusion constant D:

\[
\Pr\big(w(s+t)\in A \mid w(s)=x\big) = \int_A \frac{1}{\sqrt{2\pi D t}}\,\exp\!\left(-\frac{(x-y)^2}{2Dt}\right)dy.
\]
The probability for the class of paths can be found by multiplying the probabilities of starting in one region and then being at the next. The Wiener measure can be developed by considering the limit of many small regions.
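A short simulation makes the construction concrete: Brownian paths are assembled from independent Gaussian displacements over small time slices, and the displacement over a time t then comes out Gaussian with variance Dt, matching the transition density above (the diffusion constant, horizon, and sample counts below are arbitrary illustrative values).

```python
import numpy as np

rng = np.random.default_rng(2)
D, T, n_steps, n_paths = 0.5, 1.0, 200, 20_000   # illustrative values
dt = T / n_steps

# Independent Gaussian increments over each small time slice, variance D*dt,
# composed into Brownian paths w(t) starting from w(0) = 0.
increments = rng.normal(0.0, np.sqrt(D * dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)

# The displacement over the full time T should be Gaussian with variance D*T.
print(paths[:, -1].var(), D * T)

# The probability of the class of paths ending in a region, e.g. w(T) in [0, 1],
# is estimated by the fraction of sampled paths that do so.
print(np.mean((paths[:, -1] >= 0.0) & (paths[:, -1] <= 1.0)))
```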
See also: Dirac delta function; harmonic function; Fokker–Planck equation; Green's theorem; Gaussian function; Euler–Lagrange equations; partition function (quantum field theory); path integral formulation; Radon transform; propagator (quantum field theory); Euclidean quantum gravity; Dolbeault cohomology; isothermal–isobaric ensemble; background field method; product integral; Berezin integral; line integral; static forces and virtual-particle exchange; information field theory; stochastic Gronwall inequality.