In the calculus of variations, a field of mathematical analysis, the functional derivative (or variational derivative) relates a change in a functional (a functional in this sense is a function that acts on functions) to a change in a function on which the functional depends.
In the calculus of variations, functionals are usually expressed in terms of an integral of functions, their arguments, and their derivatives. In an integral L of a functional, if a function f is varied by adding to it another function δf that is arbitrarily small, and the resulting integrand is expanded in powers of δf, the coefficient of δf in the first order term is called the functional derivative.
For example, consider the functional

J[f] = ∫_a^b L(x, f(x), f′(x)) dx,

where f′(x) ≡ df/dx. If f is varied by adding to it a function δf, and the resulting integrand L(x, f + δf, f′ + δf′) is expanded in powers of δf, then the change in the value of J to first order in δf can be expressed as follows:

δJ = ∫_a^b ((∂L/∂f) δf(x) + (∂L/∂f′) δf′(x)) dx
   = ∫_a^b (∂L/∂f − d/dx ∂L/∂f′) δf(x) dx + (∂L/∂f′)(b) δf(b) − (∂L/∂f′)(a) δf(a),

where the variation in the derivative, δf′, was rewritten as the derivative of the variation, (δf)′, and integration by parts was used. The coefficient of δf(x) in the first integral, ∂L/∂f − d/dx ∂L/∂f′, is the functional derivative δJ/δf(x).
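This first-order expansion can be checked numerically. Below is a minimal sketch (not from the article; the choices L = f′², f = sin(πx), and the variation δf = x(1 − x) are illustrative assumptions): the difference quotient (J[f + ε δf] − J[f])/ε is compared with the integral of the Euler–Lagrange expression ∂L/∂f − d/dx ∂L/∂f′ = −2f″ against δf.

```python
# Numerical check of the first variation for J[f] = \int_0^1 f'(x)^2 dx.
# For this L, dL/df - d/dx dL/df' = -2 f''; with a variation phi vanishing
# at the endpoints, (J[f + eps*phi] - J[f])/eps should approach
# \int -2 f''(x) phi(x) dx.
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

f = np.sin(np.pi * x)      # illustrative trial function
phi = x * (1.0 - x)        # illustrative variation, zero on the boundary
eps = 1e-6

def J(g):
    # Riemann-sum approximation of \int g'(x)^2 dx
    return np.sum(np.gradient(g, dx) ** 2) * dx

lhs = (J(f + eps * phi) - J(f)) / eps        # first-order change of J
fpp = np.gradient(np.gradient(f, dx), dx)    # numerical f''
rhs = np.sum(-2.0 * fpp * phi) * dx          # functional derivative paired with phi

print(lhs, rhs)
```

For this particular choice both numbers approximate 8/π ≈ 2.546, up to discretization error.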
In this section, the functional derivative is defined. Then the functional differential is defined in terms of the functional derivative.
Given a manifold M representing (continuous/smooth) functions ρ (with certain boundary conditions etc.), and a functional F defined as

F : M → ℝ or F : M → ℂ,

the functional derivative of F[ρ], denoted δF/δρ, is defined through

∫ (δF/δρ)(x) ϕ(x) dx = lim_{ε→0} (F[ρ + εϕ] − F[ρ])/ε = [d/dε F[ρ + εϕ]]_{ε=0},

where ϕ is an arbitrary function. The quantity εϕ is called the variation of ρ.
In other words, the map

ϕ ↦ [d/dε F[ρ + εϕ]]_{ε=0}
is a linear functional, so one may apply the Riesz–Markov–Kakutani representation theorem to represent this functional as integration against some measure. Then δF/δρ is defined to be the Radon–Nikodym derivative of this measure.
One thinks of the function δF/δρ as the gradient of F at the point ρ, and of

∫ (δF/δρ)(x) ϕ(x) dx

as the directional derivative at the point ρ in the direction of ϕ. Then, by analogy with vector calculus, the inner product with the gradient gives the directional derivative.
The differential (or variation or first variation) of the functional F[ρ] is

δF[ρ; ϕ] = ∫ (δF/δρ)(x) ϕ(x) dx.
Heuristically, ϕ is the change in ρ, so we 'formally' have ϕ = δρ, and then this is similar in form to the total differential of a function F(ρ₁, ρ₂, …, ρₙ),

dF = Σ_{i=1}^n (∂F/∂ρᵢ) dρᵢ,

where ρ₁, ρ₂, …, ρₙ are independent variables. Comparing the last two equations, the functional derivative δF/δρ(x) has a role similar to that of the partial derivative ∂F/∂ρᵢ, where the variable of integration x is like a continuous version of the summation index i.
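The summation-index analogy can be made concrete by discretizing ρ on a grid. In this sketch (an illustration with the assumed example functional F[ρ] = ∫ ρ³ dx, not from the article), the ordinary partial derivative with respect to one sample ρᵢ, divided by the grid spacing, approximates the functional derivative 3ρ² evaluated at xᵢ.

```python
# Discrete analogy: dF/drho_i ≈ dx * (deltaF/deltarho)(x_i), so the grid
# index i plays the role of the continuous variable x.
import numpy as np

x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
rho = np.exp(-x)                # illustrative sample function

def F(r):
    # Riemann-sum approximation of F[rho] = \int rho(x)^3 dx
    return np.sum(r ** 3) * dx

i, h = 40, 1e-6
e = np.zeros_like(rho)
e[i] = 1.0
partial = (F(rho + h * e) - F(rho)) / h   # ordinary partial derivative dF/drho_i

functional = 3.0 * rho[i] ** 2            # analytic deltaF/deltarho = 3 rho^2 at x_i
print(partial / dx, functional)
```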
Like the derivative of a function, the functional derivative satisfies the following properties, where F[ρ] and G[ρ] are functionals:

Linearity:

δ(λF + μG)/δρ(x) = λ δF/δρ(x) + μ δG/δρ(x),

where λ, μ are constants.

Product rule:

δ(FG)/δρ(x) = (δF/δρ(x)) G[ρ] + F[ρ] (δG/δρ(x)).
Functional derivatives can be determined by a single formula for a common class of functionals, namely those that can be written as the integral of a function of ρ, its derivatives, and the integration variable. This is a generalization of the Euler–Lagrange equation: indeed, the functional derivative was introduced in physics within the derivation of the Lagrange equation of the second kind from the principle of least action in Lagrangian mechanics (18th century). The first three examples below are taken from density functional theory (20th century), the fourth from statistical mechanics (19th century).
Given a functional

F[ρ] = ∫ f(r, ρ(r), ∇ρ(r)) dr

and a function ϕ(r) that vanishes on the boundary of the region of integration, from the previous section Definition,

∫ (δF/δρ)(r) ϕ(r) dr = [d/dε ∫ f(r, ρ + εϕ, ∇ρ + ε∇ϕ) dr]_{ε=0}
 = ∫ ((∂f/∂ρ) ϕ + (∂f/∂∇ρ) · ∇ϕ) dr
 = ∫ [(∂f/∂ρ) ϕ + ∇ · ((∂f/∂∇ρ) ϕ) − (∇ · ∂f/∂∇ρ) ϕ] dr
 = ∫ (∂f/∂ρ − ∇ · ∂f/∂∇ρ) ϕ(r) dr.

The second line is obtained using the total derivative, where ∂f/∂∇ρ is a derivative of a scalar with respect to a vector. The third line was obtained by use of a product rule for divergence. The fourth line was obtained using the divergence theorem and the condition that ϕ = 0 on the boundary of the region of integration. Since ϕ is also an arbitrary function, applying the fundamental lemma of calculus of variations to the last line, the functional derivative is

δF/δρ = ∂f/∂ρ − ∇ · (∂f/∂∇ρ),
where ρ = ρ(r) and f = f (r, ρ, ∇ρ). This formula is for the case of the functional form given by F[ρ] at the beginning of this section. For other functional forms, the definition of the functional derivative can be used as the starting point for its determination. (See the example Coulomb potential energy functional.)
The above equation for the functional derivative can be generalized to the case that includes higher dimensions and higher order derivatives. The functional would be,

F[ρ] = ∫ f(r, ρ(r), ∇ρ(r), ∇^(2)ρ(r), …, ∇^(N)ρ(r)) dr,
where the vector r ∈ ℝⁿ, and ∇^(i) is a tensor whose n^i components are partial derivative operators of order i,

[∇^(i)]_{α₁α₂⋯αᵢ} = ∂^i/(∂r_{α₁} ∂r_{α₂} ⋯ ∂r_{αᵢ}), where α₁, α₂, ⋯, αᵢ = 1, 2, ⋯, n.
An analogous application of the definition of the functional derivative yields

δF/δρ = ∂f/∂ρ − ∇ · ∂f/∂(∇ρ) + ∇^(2) · ∂f/∂(∇^(2)ρ) − ⋯ + (−1)^N ∇^(N) · ∂f/∂(∇^(N)ρ)
      = ∂f/∂ρ + Σ_{i=1}^N (−1)^i ∇^(i) · ∂f/∂(∇^(i)ρ).
In the last two equations, the n^i components of the tensor ∂f/∂(∇^(i)ρ) are partial derivatives of f with respect to partial derivatives of ρ,

[∂f/∂(∇^(i)ρ)]_{α₁α₂⋯αᵢ} = ∂f/∂ρ_{α₁α₂⋯αᵢ}, where ρ_{α₁α₂⋯αᵢ} ≡ ∂^i ρ/(∂r_{α₁} ∂r_{α₂} ⋯ ∂r_{αᵢ}),
and the tensor scalar product is,

∇^(i) · ∂f/∂(∇^(i)ρ) = Σ_{α₁,α₂,⋯,αᵢ=1}^n ∂^i/(∂r_{α₁} ∂r_{α₂} ⋯ ∂r_{αᵢ}) [∂f/∂ρ_{α₁α₂⋯αᵢ}].
The Thomas–Fermi model of 1927 used a kinetic energy functional for a noninteracting uniform electron gas in a first attempt of a density-functional theory of electronic structure:

T_TF[ρ] = C_F ∫ ρ(r)^{5/3} dr.

Since the integrand of T_TF[ρ] does not involve derivatives of ρ(r), the functional derivative of T_TF[ρ] is,

δT_TF/δρ(r) = C_F (5/3) ρ(r)^{2/3}.
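The Thomas–Fermi result can be sanity-checked numerically. The sketch below is an assumed one-dimensional example with C_F set to 1, not part of the theory itself: it compares the Gateaux derivative of T[ρ] = ∫ ρ^{5/3} dx in a direction ϕ with ∫ (5/3) ρ^{2/3} ϕ dx.

```python
# Check that (T[rho + eps*phi] - T[rho])/eps ≈ \int (5/3) rho^{2/3} phi dx
# for T[rho] = \int rho^{5/3} dx (C_F = 1 for simplicity).
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
rho = 1.0 + 0.5 * np.cos(2 * np.pi * x)   # illustrative positive density
phi = np.cos(2 * np.pi * x)               # illustrative direction
eps = 1e-7

T = lambda r: np.sum(r ** (5.0 / 3.0)) * dx          # Riemann sum for T[rho]
gateaux = (T(rho + eps * phi) - T(rho)) / eps        # directional derivative
predicted = np.sum((5.0 / 3.0) * rho ** (2.0 / 3.0) * phi) * dx
print(gateaux, predicted)
```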
For the electron-nucleus potential, Thomas and Fermi employed the Coulomb potential energy functional

V[ρ] = ∫ ρ(r)/|r| dr.

Applying the definition of the functional derivative,

∫ (δV/δρ)(r) ϕ(r) dr = [d/dε ∫ (ρ(r) + εϕ(r))/|r| dr]_{ε=0} = ∫ (1/|r|) ϕ(r) dr.

So,

δV/δρ(r) = 1/|r|.
For the classical part of the electron-electron interaction, Thomas and Fermi employed the Coulomb potential energy functional

J[ρ] = (1/2) ∫∫ ρ(r) ρ(r′)/|r − r′| dr dr′.

From the definition of the functional derivative,

∫ (δJ/δρ)(r) ϕ(r) dr = [d/dε J[ρ + εϕ]]_{ε=0}
 = (1/2) ∫∫ ϕ(r) ρ(r′)/|r − r′| dr dr′ + (1/2) ∫∫ ρ(r) ϕ(r′)/|r − r′| dr dr′.
The first and second terms on the right hand side of the last equation are equal, since r and r′ in the second term can be interchanged without changing the value of the integral. Therefore,

∫ (δJ/δρ)(r) ϕ(r) dr = ∫ (∫ ρ(r′)/|r − r′| dr′) ϕ(r) dr,
and the functional derivative of the electron-electron Coulomb potential energy functional J[ρ] is,

δJ/δρ(r) = ∫ ρ(r′)/|r − r′| dr′.
The second functional derivative is

δ²J/δρ(r′)δρ(r) = ∂/∂ρ(r′) (ρ(r′)/|r − r′|) = 1/|r − r′|.
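A discrete analogue makes both results transparent. In the sketch below (an illustration, not the article's derivation) the double integral becomes a quadratic form J(ρ) = ½ ρᵀKρ with a symmetric matrix K standing in for the kernel 1/|r − r′|; the gradient is then Kρ (the discrete δJ/δρ) and the Hessian is K itself (the discrete second functional derivative).

```python
# For J(rho) = 0.5 * rho^T K rho with K symmetric, grad J = K rho and
# Hess J = K, mirroring deltaJ/deltarho(r) = \int rho(r')/|r-r'| dr' and
# delta^2 J / deltarho(r') deltarho(r) = 1/|r-r'|.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
K = A + A.T                     # symmetric stand-in for the Coulomb kernel
rho = rng.standard_normal(n)

J = lambda r: 0.5 * r @ K @ r
grad = K @ rho                  # analytic gradient (discrete deltaJ/deltarho)

# central-difference check of one gradient component
h = 1e-6
e0 = np.zeros(n)
e0[0] = 1.0
fd = (J(rho + h * e0) - J(rho - h * e0)) / (2 * h)
print(fd, grad[0])
```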
In 1935 von Weizsäcker proposed to add a gradient correction to the Thomas–Fermi kinetic energy functional to make it suit a molecular electron cloud better:

T_W[ρ] = (1/8) ∫ (∇ρ(r) · ∇ρ(r))/ρ(r) dr = ∫ t_W dr,

where t_W ≡ (1/8) (∇ρ · ∇ρ)/ρ. Using a previously derived formula for the functional derivative,

δT_W/δρ(r) = ∂t_W/∂ρ − ∇ · (∂t_W/∂∇ρ) = −(1/8) (∇ρ · ∇ρ)/ρ² − ∇ · (∇ρ/(4ρ)),

and the result is,

δT_W/δρ(r) = (1/8) (∇ρ · ∇ρ)/ρ² − (1/4) ∇²ρ/ρ.
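The Weizsäcker functional derivative can also be verified numerically in one dimension. The sketch below uses assumed illustrative choices (ρ = 2 + sin(2πx) on [0, 1] and a variation vanishing at the endpoints; not from the article): it compares the Gateaux derivative of T_W with the integral of (1/8) ρ′²/ρ² − (1/4) ρ″/ρ against the variation.

```python
# Check (T_W[rho + eps*phi] - T_W[rho])/eps ≈
#   \int ((1/8) rho'^2/rho^2 - (1/4) rho''/rho) phi dx
# for T_W[rho] = (1/8) \int rho'^2 / rho dx.
import numpy as np

x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
rho = 2.0 + np.sin(2 * np.pi * x)   # illustrative positive density
phi = x * (1.0 - x)                 # variation, zero on the boundary
eps = 1e-7

d = lambda g: np.gradient(g, dx)                    # numerical derivative
TW = lambda r: 0.125 * np.sum(d(r) ** 2 / r) * dx   # Riemann sum for T_W

gateaux = (TW(rho + eps * phi) - TW(rho)) / eps
fder = 0.125 * d(rho) ** 2 / rho ** 2 - 0.25 * d(d(rho)) / rho
predicted = np.sum(fder * phi) * dx
print(gateaux, predicted)
```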
The entropy of a discrete random variable is a functional of the probability mass function,

H[p(x)] = −Σ_x p(x) log p(x).

Applying the definition of the functional derivative with a test function ϕ,

δH/δp(x) = −1 − log p(x).
Consider the exponential functional

F[ϕ(x)] = e^{∫ ϕ(x) g(x) dx}.

Using the delta function as a test function,

δF/δϕ(y) = lim_{ε→0} (F[ϕ(x) + εδ(x − y)] − F[ϕ(x)])/ε = g(y) e^{∫ ϕ(x) g(x) dx}.

Thus, δF/δϕ(y) = g(y) F[ϕ]. This is particularly useful in calculating the correlation functions from the partition function in quantum field theory.
A function can be written in the form of an integral like a functional. For example,

ρ(r) = ∫ ρ(r′) δ(r − r′) dr′.

Since the integrand does not depend on derivatives of ρ, the functional derivative of ρ(r) is,

δρ(r)/δρ(r′) = δ(r − r′).
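In a discretized picture this statement has a simple counterpart. The sketch below (an assumed illustration) computes the Jacobian of the identity map on the grid samples of ρ; it is the identity matrix, the Kronecker-delta analogue of δ(r − r′).

```python
# The Jacobian of rho -> rho (the identity map on grid samples) is the
# identity matrix: the discrete counterpart of
# deltarho(r)/deltarho(r') = delta(r - r').
import numpy as np

n = 5
rho = np.arange(1.0, n + 1.0)     # arbitrary sample values
identity = lambda r: r.copy()

h = 1e-6
jac = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n)
    e[j] = 1.0
    # column j: finite-difference derivative with respect to sample j
    jac[:, j] = (identity(rho + h * e) - identity(rho)) / h

print(np.allclose(jac, np.eye(n)))
```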
The functional derivative of the iterated function f(f(x)) is given by:

δf(f(x))/δf(y) = f′(f(x)) δ(x − y) + δ(f(x) − y),

and in general, for the N-fold iterate f^N,

δf^N(x)/δf(y) = f′(f^{N−1}(x)) δf^{N−1}(x)/δf(y) + δ(f^{N−1}(x) − y).
Putting in N = 0 gives the functional derivative of the inverse function:

δf^{−1}(x)/δf(y) = −δ(f^{−1}(x) − y)/f′(f^{−1}(x)).
In physics, it is common to use the Dirac delta function δ(x − y) in place of a generic test function ϕ(x), for yielding the functional derivative at the point y (this is a value of the whole functional derivative, just as a partial derivative is a component of the gradient):

δF[ρ(x)]/δρ(y) = lim_{ε→0} (F[ρ(x) + εδ(x − y)] − F[ρ(x)])/ε.

This works in cases when F[ρ(x) + εf(x)] formally can be expanded as a series (or at least up to first order) in ε. The formula is however not mathematically rigorous, since F[ρ(x) + εδ(x − y)] is usually not even defined.
The definition given in a previous section is based on a relationship that holds for all test functions ϕ, so one might think that it should hold also when ϕ is chosen to be a specific function such as the delta function. However, the latter is not a valid test function (it is not even a proper function).
In the definition, the functional derivative describes how the functional F[ρ(x)] changes as a result of a small change in the entire function ρ(x). The particular form of the change in ρ(x) is not specified, but it should stretch over the whole interval on which x is defined. Employing the particular form of the perturbation given by the delta function has the meaning that ρ(x) is varied only in the point y. Except for this point, there is no variation in ρ(x).
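On a grid, the physicists' recipe corresponds to perturbing ρ at a single grid point with weight 1/Δx. The sketch below (assumed example F[ρ] = ∫ ρ² dx, so that δF/δρ = 2ρ; not from the article) shows the difference quotient with a discrete delta reproducing the functional derivative at that point.

```python
# Perturbing one grid sample by eps/dx approximates adding eps*delta(x - x_k);
# the difference quotient then approximates (deltaF/deltarho)(x_k) = 2 rho(x_k).
import numpy as np

x = np.linspace(0.0, 1.0, 501)
dx = x[1] - x[0]
rho = 1.0 + x ** 2                 # illustrative function

F = lambda r: np.sum(r ** 2) * dx  # Riemann sum for F[rho] = \int rho^2 dx

k = 250                            # interior grid point (x_k = 0.5)
bump = np.zeros_like(rho)
bump[k] = 1.0 / dx                 # discrete delta centered at x_k
eps = 1e-8
fd = (F(rho + eps * bump) - F(rho)) / eps
print(fd, 2.0 * rho[k])
```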
See also: Dirac delta function; Navier–Stokes equations; Laplace operator; infinitesimal strain theory; Poisson's equation; linear elasticity; potential gradient; directional derivative; Hamilton–Jacobi equation; classical field theory; finite strain theory; Leibniz integral rule; inhomogeneous electromagnetic wave equation; Newman–Penrose formalism; mathematical descriptions of the electromagnetic field; Cauchy momentum equation; tensor derivatives (continuum mechanics); Luke's variational principle; compatibility (mechanics); Lagrangian field theory.