In physics and mathematics, the Helmholtz decomposition theorem or the fundamental theorem of vector calculus [1] [2] [3] [4] [5] [6] [7] states that certain differentiable vector fields can be resolved into the sum of an irrotational (curl-free) vector field and a solenoidal (divergence-free) vector field. In physics, often only the decomposition of sufficiently smooth, rapidly decaying vector fields in three dimensions is discussed. It is named after Hermann von Helmholtz.
For a vector field $\mathbf F$ defined on a domain $V \subseteq \mathbb R^n$, a Helmholtz decomposition is a pair of vector fields $\mathbf G$ and $\mathbf R$ such that:
$$\mathbf F(\mathbf r) = \mathbf G(\mathbf r) + \mathbf R(\mathbf r), \qquad \mathbf G(\mathbf r) = -\nabla\Phi(\mathbf r), \qquad \nabla\cdot\mathbf R(\mathbf r) = 0.$$
Here, $\Phi$ is a scalar potential, $\nabla\Phi$ is its gradient, and $\nabla\cdot\mathbf R$ is the divergence of the vector field $\mathbf R$. The irrotational vector field $\mathbf G$ is called a gradient field and $\mathbf R$ is called a solenoidal field or rotation field. This decomposition does not exist for all vector fields and is not unique. [8]
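As a concrete illustration, the defining conditions can be checked numerically with finite differences. The field below is a hypothetical example chosen for this sketch (not from the article): the scalar potential $\Phi(x,y,z) = x^2 + y^2$ gives the gradient field $\mathbf G = -\nabla\Phi$, and $\mathbf R = (-y, x, 0)$ is a divergence-free rotation field. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Hypothetical example field F = G + R with
#   Phi(x, y, z) = x^2 + y^2   =>  G = -grad Phi = (-2x, -2y, 0)
#   R(x, y, z)   = (-y, x, 0)  =>  div R = 0
def G(p):
    x, y, z = p
    return np.array([-2 * x, -2 * y, 0.0])

def R(p):
    x, y, z = p
    return np.array([-y, x, 0.0])

def F(p):
    return G(p) + R(p)

H = 1e-5

def jacobian(f, p):
    """J[i, j] = dF_j/dx_i by central differences."""
    J = np.zeros((3, 3))
    for i in range(3):
        e = np.zeros(3)
        e[i] = H
        J[i] = (f(p + e) - f(p - e)) / (2 * H)
    return J

def divergence(f, p):
    return np.trace(jacobian(f, p))

def curl(f, p):
    J = jacobian(f, p)
    return np.array([J[1, 2] - J[2, 1], J[2, 0] - J[0, 2], J[0, 1] - J[1, 0]])

p = np.array([0.3, -0.7, 1.2])
print(curl(G, p))            # ~ (0, 0, 0): the gradient field is irrotational
print(divergence(R, p))      # ~ 0: the rotation field is solenoidal
print(F(p) - (G(p) + R(p)))  # exactly 0 by construction
```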
The Helmholtz decomposition in three dimensions was first described in 1849 [9] by George Gabriel Stokes for a theory of diffraction. Hermann von Helmholtz published his paper on some basic equations of hydrodynamics in 1858, [10] [11] as part of his research on Helmholtz's theorems describing the motion of fluid in the vicinity of vortex lines. [11] Their derivation required the vector fields to decay sufficiently fast at infinity. Later, this condition could be relaxed, and the Helmholtz decomposition could be extended to higher dimensions. [8] [12] [13] For Riemannian manifolds, the Helmholtz-Hodge decomposition was derived using differential geometry and tensor calculus. [8] [11] [14] [15]
The decomposition has become an important tool for many problems in theoretical physics, [11] [14] and has also found applications in animation, computer vision, and robotics. [15]
Many physics textbooks restrict the Helmholtz decomposition to the three-dimensional space and limit its application to vector fields that decay sufficiently fast at infinity or to bump functions that are defined on a bounded domain. Then, a vector potential $\mathbf A$ can be defined, such that the rotation field is given by $\mathbf R = \nabla\times\mathbf A$, using the curl of a vector field. [16]
Let $\mathbf F$ be a vector field on a bounded domain $V \subseteq \mathbb R^3$, which is twice continuously differentiable inside $V$, and let $S$ be the surface that encloses the domain $V$ with outward surface normal $\hat{\mathbf n}'$. Then $\mathbf F$ can be decomposed into a curl-free component and a divergence-free component as follows: [17]
$$\mathbf F = -\nabla\Phi + \nabla\times\mathbf A,$$
where
$$\Phi(\mathbf r) = \frac{1}{4\pi}\int_V \frac{\nabla'\cdot\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dV' - \frac{1}{4\pi}\oint_S \hat{\mathbf n}'\cdot\frac{\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dS'$$
$$\mathbf A(\mathbf r) = \frac{1}{4\pi}\int_V \frac{\nabla'\times\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dV' - \frac{1}{4\pi}\oint_S \hat{\mathbf n}'\times\frac{\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dS'$$
and $\nabla'$ is the nabla operator with respect to $\mathbf r'$, not $\mathbf r$.

If $V = \mathbb R^3$ and is therefore unbounded, and $\mathbf F$ vanishes faster than $1/r$ as $r \to \infty$, then the surface integrals vanish, and one has [18]
$$\Phi(\mathbf r) = \frac{1}{4\pi}\int_{\mathbb R^3} \frac{\nabla'\cdot\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dV', \qquad \mathbf A(\mathbf r) = \frac{1}{4\pi}\int_{\mathbb R^3} \frac{\nabla'\times\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dV'.$$
This holds in particular if $\mathbf F$ is twice continuously differentiable in $\mathbb R^3$ and of bounded support.
Suppose we have a vector function $\mathbf F(\mathbf r)$ of which we know the curl, $\nabla\times\mathbf F$, and the divergence, $\nabla\cdot\mathbf F$, in the domain and the fields on the boundary. Writing the function using the delta function in the form
$$\delta^3(\mathbf r - \mathbf r') = -\frac{1}{4\pi}\nabla^2\frac{1}{|\mathbf r - \mathbf r'|},$$
where $\nabla^2$ is the Laplacian operator, we have
$$\mathbf F(\mathbf r) = \int_V \mathbf F(\mathbf r')\,\delta^3(\mathbf r - \mathbf r')\,\mathrm dV' = -\frac{1}{4\pi}\int_V \mathbf F(\mathbf r')\,\nabla^2\frac{1}{|\mathbf r - \mathbf r'|}\,\mathrm dV'.$$
Now, changing the meaning of $\nabla^2$ to the vector Laplacian operator (we have the right to do so because this Laplacian is taken with respect to $\mathbf r$ and therefore sees the vector field $\mathbf F(\mathbf r')$ as a constant), we can move $\mathbf F(\mathbf r')$ to the right of the $\nabla^2$ operator:
$$\mathbf F(\mathbf r) = -\frac{1}{4\pi}\nabla^2\int_V \frac{\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dV' = \frac{1}{4\pi}\left[\nabla\int_V \mathbf F(\mathbf r')\cdot\nabla'\frac{1}{|\mathbf r - \mathbf r'|}\,\mathrm dV' + \nabla\times\int_V \mathbf F(\mathbf r')\times\nabla'\frac{1}{|\mathbf r - \mathbf r'|}\,\mathrm dV'\right],$$
where we have used the vector Laplacian identity:
$$\nabla^2\mathbf a = \nabla(\nabla\cdot\mathbf a) - \nabla\times(\nabla\times\mathbf a),$$
differentiation/integration with respect to $\mathbf r'$ by $\nabla'$, and, in the last line, linearity of function arguments:
$$\nabla\frac{1}{|\mathbf r - \mathbf r'|} = -\nabla'\frac{1}{|\mathbf r - \mathbf r'|}.$$
Then using the vectorial identities
$$\mathbf F(\mathbf r')\cdot\nabla'\frac{1}{|\mathbf r - \mathbf r'|} = \nabla'\cdot\frac{\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|} - \frac{\nabla'\cdot\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}$$
$$\mathbf F(\mathbf r')\times\nabla'\frac{1}{|\mathbf r - \mathbf r'|} = \frac{\nabla'\times\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|} - \nabla'\times\frac{\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|},$$
we get
$$\mathbf F(\mathbf r) = \frac{1}{4\pi}\left[-\nabla\int_V \frac{\nabla'\cdot\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dV' + \nabla\int_V \nabla'\cdot\frac{\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dV' + \nabla\times\int_V \frac{\nabla'\times\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dV' - \nabla\times\int_V \nabla'\times\frac{\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dV'\right].$$
Thanks to the divergence theorem the equation can be rewritten as
$$\mathbf F(\mathbf r) = \frac{1}{4\pi}\left[-\nabla\int_V \frac{\nabla'\cdot\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dV' + \nabla\oint_S \hat{\mathbf n}'\cdot\frac{\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dS' + \nabla\times\int_V \frac{\nabla'\times\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dV' - \nabla\times\oint_S \hat{\mathbf n}'\times\frac{\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dS'\right]$$
with outward surface normal $\hat{\mathbf n}'$.
Defining
$$\Phi(\mathbf r) = \frac{1}{4\pi}\int_V \frac{\nabla'\cdot\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dV' - \frac{1}{4\pi}\oint_S \hat{\mathbf n}'\cdot\frac{\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dS'$$
$$\mathbf A(\mathbf r) = \frac{1}{4\pi}\int_V \frac{\nabla'\times\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dV' - \frac{1}{4\pi}\oint_S \hat{\mathbf n}'\times\frac{\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dS'$$
we finally obtain
$$\mathbf F = -\nabla\Phi + \nabla\times\mathbf A.$$
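The vector Laplacian identity used in this derivation can be spot-checked numerically. The sketch below (assuming NumPy; the test field $\mathbf a$ is an arbitrary polynomial chosen for illustration) compares $\nabla^2\mathbf a$ with $\nabla(\nabla\cdot\mathbf a) - \nabla\times(\nabla\times\mathbf a)$ via nested central differences:

```python
import numpy as np

H = 1e-3

def a(p):
    # arbitrary smooth test field
    x, y, z = p
    return np.array([x**2 * y, y * z**2, x**3])

def partial(f, p, i):
    # central-difference partial derivative in direction i
    e = np.zeros(3)
    e[i] = H
    return (f(p + e) - f(p - e)) / (2 * H)

def div(f, p):
    return sum(partial(f, p, i)[i] for i in range(3))

def curl(f, p):
    J = np.array([partial(f, p, i) for i in range(3)])  # J[i, j] = dF_j/dx_i
    return np.array([J[1, 2] - J[2, 1], J[2, 0] - J[0, 2], J[0, 1] - J[1, 0]])

def vector_laplacian(f, p):
    # componentwise second central differences
    out = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = H
        out += (f(p + e) - 2 * f(p) + f(p - e)) / H**2
    return out

p = np.array([0.4, -0.2, 0.9])
lhs = vector_laplacian(a, p)
grad_div = np.array([partial(lambda q: div(a, q), p, i) for i in range(3)])
curl_curl = curl(lambda q: curl(a, q), p)
rhs = grad_div - curl_curl
print(np.max(np.abs(lhs - rhs)))  # small: identity holds to discretization error
```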
If $\mathbf F = \mathbf G + \mathbf R$ with $\mathbf G = -\nabla\Phi$ is a Helmholtz decomposition of $\mathbf F$, then $\mathbf F = \mathbf G' + \mathbf R'$ with $\mathbf G' = -\nabla\Phi'$ is another decomposition if, and only if,
$$\mathbf G' = \mathbf G - \nabla\lambda \quad\text{and}\quad \mathbf R' = \mathbf R + \nabla\lambda,$$
where $\lambda = \Phi' - \Phi$ is a harmonic scalar field.

Proof: Set $\lambda = \Phi' - \Phi$ and $\mathbf B = \mathbf R' - \mathbf R$. According to the definition of the Helmholtz decomposition, the condition is equivalent to
$$\mathbf B = \nabla\lambda.$$
Taking the divergence of each member of this equation yields $\nabla^2\lambda = \nabla\cdot\mathbf B = 0$, hence $\lambda$ is harmonic.

Conversely, given any harmonic function $\lambda$, $\nabla\lambda$ is solenoidal since
$$\nabla\cdot(\nabla\lambda) = \nabla^2\lambda = 0.$$
Thus, according to the above section, there exists a vector field $\mathbf Z$ such that $\nabla\lambda = \nabla\times\mathbf Z$.
If $\mathbf Z'$ is another such vector field, then $\mathbf Z - \mathbf Z'$ fulfills $\nabla\times(\mathbf Z - \mathbf Z') = \mathbf 0$, hence $\mathbf Z - \mathbf Z' = \nabla\varphi$ for some scalar field $\varphi$.
The term "Helmholtz theorem" can also refer to the following. Let $\mathbf C$ be a solenoidal vector field and $d$ a scalar field on $\mathbb R^3$ which are sufficiently smooth and which vanish faster than $1/r^2$ at infinity. Then there exists a vector field $\mathbf F$ such that
$$\nabla\cdot\mathbf F = d \quad\text{and}\quad \nabla\times\mathbf F = \mathbf C;$$
if additionally the vector field $\mathbf F$ vanishes as $r \to \infty$, then $\mathbf F$ is unique. [18]

In other words, a vector field can be constructed with both a specified divergence and a specified curl, and if it also vanishes at infinity, it is uniquely specified by its divergence and curl. This theorem is of great importance in electrostatics, since Maxwell's equations for the electric and magnetic fields in the static case are of exactly this type. [18] The proof is by a construction generalizing the one given above: we set
$$\mathbf F = \nabla(\mathcal G(d)) - \nabla\times(\mathcal G(\mathbf C)),$$
where $\mathcal G$ represents the Newtonian potential operator. (When acting on a vector field, such as $\nabla\times\mathbf F$, it is defined to act on each component.)
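This construction can be illustrated numerically. On a periodic grid the Newtonian potential operator (the convolution with the kernel of the Laplacian, so that $\nabla^2\mathcal G(f) = f$) is diagonal in Fourier space. The following sketch, assuming NumPy, uses arbitrary illustrative choices for the prescribed divergence $d$ and solenoidal field $\mathbf C$; the periodic setting stands in for the decay-at-infinity assumption of the theorem:

```python
import numpy as np

# Periodic 3-D grid and wave numbers
N, L = 32, 2 * np.pi
s = np.arange(N) * L / N
X, Y, Z = np.meshgrid(s, s, s, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
K2 = KX**2 + KY**2 + KZ**2
K2[0, 0, 0] = 1.0  # dummy value; the mean (k = 0) mode is zeroed below

def deriv(f, K):
    return np.real(np.fft.ifftn(1j * K * np.fft.fftn(f)))

def grad(f):
    return [deriv(f, KX), deriv(f, KY), deriv(f, KZ)]

def div(F):
    return deriv(F[0], KX) + deriv(F[1], KY) + deriv(F[2], KZ)

def curl(F):
    return [deriv(F[2], KY) - deriv(F[1], KZ),
            deriv(F[0], KZ) - deriv(F[2], KX),
            deriv(F[1], KX) - deriv(F[0], KY)]

def newtonian(f):
    # solves laplacian(u) = f spectrally (zero-mean solution)
    fh = np.fft.fftn(f)
    fh[0, 0, 0] = 0.0
    return np.real(np.fft.ifftn(-fh / K2))

# prescribed divergence d (zero mean) and solenoidal curl C (a curl itself)
d = np.sin(X) * np.cos(Y)
C = curl([np.zeros_like(X), np.zeros_like(X), np.sin(X) * np.sin(Y)])

# F = grad(G(d)) - curl(G(C)), applied componentwise to C
F = [g - c for g, c in zip(grad(newtonian(d)), curl([newtonian(c) for c in C]))]

print(np.max(np.abs(div(F) - d)))                                 # ~ 0
print(max(np.max(np.abs(a - b)) for a, b in zip(curl(F), C)))     # ~ 0
```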
The Helmholtz decomposition can be generalized by reducing the regularity assumptions (the need for the existence of strong derivatives). Suppose $\Omega$ is a bounded, simply-connected, Lipschitz domain. Every square-integrable vector field $\mathbf u \in (L^2(\Omega))^3$ has an orthogonal decomposition: [19] [20] [21]
$$\mathbf u = \nabla\varphi + \nabla\times\mathbf A,$$
where φ is in the Sobolev space H1(Ω) of square-integrable functions on Ω whose partial derivatives defined in the distribution sense are square integrable, and A ∈ H(curl, Ω), the Sobolev space of vector fields consisting of square integrable vector fields with square integrable curl.
For a slightly smoother vector field $\mathbf u \in H(\operatorname{curl}, \Omega)$, a similar decomposition holds:
$$\mathbf u = \nabla\varphi + \mathbf v,$$
where $\varphi \in H^1(\Omega)$ and $\mathbf v \in (H^1(\Omega))^d$.
Note that in the theorem stated here, we have imposed the condition that if $\mathbf F$ is not defined on a bounded domain, then $\mathbf F$ shall decay faster than $1/r$. Thus, the Fourier transform of $\mathbf F$, denoted as $\mathbf G$, is guaranteed to exist. We apply the convention
$$\mathbf F(\mathbf r) = \iiint \mathbf G(\mathbf k)\, e^{i\mathbf k\cdot\mathbf r}\,\mathrm dV_k.$$
The Fourier transform of a scalar field is a scalar field, and the Fourier transform of a vector field is a vector field of the same dimension.

Now consider the following scalar and vector fields:
$$G_\Phi(\mathbf k) = i\,\frac{\mathbf k\cdot\mathbf G(\mathbf k)}{\|\mathbf k\|^2}, \qquad \mathbf G_{\mathbf A}(\mathbf k) = i\,\frac{\mathbf k\times\mathbf G(\mathbf k)}{\|\mathbf k\|^2},$$
$$\Phi(\mathbf r) = \iiint G_\Phi(\mathbf k)\, e^{i\mathbf k\cdot\mathbf r}\,\mathrm dV_k, \qquad \mathbf A(\mathbf r) = \iiint \mathbf G_{\mathbf A}(\mathbf k)\, e^{i\mathbf k\cdot\mathbf r}\,\mathrm dV_k.$$
Hence
$$\mathbf G(\mathbf k) = -i\,\mathbf k\,G_\Phi(\mathbf k) + i\,\mathbf k\times\mathbf G_{\mathbf A}(\mathbf k),$$
$$\mathbf F(\mathbf r) = -\nabla\Phi(\mathbf r) + \nabla\times\mathbf A(\mathbf r).$$
A terminology often used in physics refers to the curl-free component of a vector field as the longitudinal component and the divergence-free component as the transverse component. [22] This terminology comes from the following construction: Compute the three-dimensional Fourier transform $\mathbf G$ of the vector field $\mathbf F$. Then decompose this field, at each point $\mathbf k$, into two components, one of which points longitudinally, i.e. parallel to $\mathbf k$, the other of which points in the transverse direction, i.e. perpendicular to $\mathbf k$. So far, we have
$$\mathbf G(\mathbf k) = \mathbf G_l(\mathbf k) + \mathbf G_t(\mathbf k), \qquad \mathbf k\times\mathbf G_l(\mathbf k) = \mathbf 0, \qquad \mathbf k\cdot\mathbf G_t(\mathbf k) = 0.$$
Now we apply an inverse Fourier transform to each of these components. Using properties of Fourier transforms, we derive:
$$\mathbf F(\mathbf r) = \mathbf F_l(\mathbf r) + \mathbf F_t(\mathbf r), \qquad \nabla\times\mathbf F_l(\mathbf r) = \mathbf 0, \qquad \nabla\cdot\mathbf F_t(\mathbf r) = 0.$$
Since $\nabla\times(\nabla\Phi) = \mathbf 0$ and $\nabla\cdot(\nabla\times\mathbf A) = 0$, we can get
$$\mathbf F_l = -\nabla\Phi = -\frac{1}{4\pi}\nabla\int_V \frac{\nabla'\cdot\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dV', \qquad \mathbf F_t = \nabla\times\mathbf A = \frac{1}{4\pi}\nabla\times\int_V \frac{\nabla'\times\mathbf F(\mathbf r')}{|\mathbf r - \mathbf r'|}\,\mathrm dV',$$
so this is indeed the Helmholtz decomposition. [23]
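The Fourier-space projection described here is easy to carry out on a periodic grid: at each wave vector $\mathbf k$ the transform is split into the part parallel to $\mathbf k$ and the remainder. A sketch assuming NumPy, with an arbitrary smooth test field:

```python
import numpy as np

N, L = 32, 2 * np.pi
s = np.arange(N) * L / N
X, Y, Z = np.meshgrid(s, s, s, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
K = [KX, KY, KZ]
K2 = KX**2 + KY**2 + KZ**2
K2[0, 0, 0] = 1.0  # the k = 0 (mean) mode is left in the transverse part

# arbitrary smooth periodic test field
F = [np.sin(X) * np.cos(Y), np.sin(Y) * np.cos(Z), np.sin(Z) * np.cos(X)]
Fh = [np.fft.fftn(c) for c in F]
k_dot_F = KX * Fh[0] + KY * Fh[1] + KZ * Fh[2]

# longitudinal part: k (k . G) / |k|^2 ; transverse part: the rest
F_long = [np.real(np.fft.ifftn(Ki * k_dot_F / K2)) for Ki in K]
F_trans = [f - fl for f, fl in zip(F, F_long)]

def deriv(f, Ki):
    return np.real(np.fft.ifftn(1j * Ki * np.fft.fftn(f)))

div_trans = sum(deriv(c, Ki) for c, Ki in zip(F_trans, K))
curl_long = [deriv(F_long[2], KY) - deriv(F_long[1], KZ),
             deriv(F_long[0], KZ) - deriv(F_long[2], KX),
             deriv(F_long[1], KX) - deriv(F_long[0], KY)]

print(np.max(np.abs(div_trans)))                  # ~ 0: transverse part is solenoidal
print(max(np.max(np.abs(c)) for c in curl_long))  # ~ 0: longitudinal part is curl-free
```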
The generalization to $d$ dimensions cannot be done with a vector potential, since the rotation operator and the cross product are defined (as vectors) only in three dimensions.

Let $\mathbf F$ be a vector field on a bounded domain $V \subseteq \mathbb R^d$ which decays faster than $|\mathbf r|^{-\delta}$ for $|\mathbf r| \to \infty$ and $\delta > 2$.

The scalar potential is defined similarly to the three-dimensional case as:
$$\Phi(\mathbf r) = -\int_{\mathbb R^d} K(|\mathbf r - \mathbf r'|)\,\nabla'\cdot\mathbf F(\mathbf r')\,\mathrm dV',$$
where as the integration kernel $K(r)$ is again the fundamental solution of Laplace's equation, but in d-dimensional space:
$$K(r) = \begin{cases} \dfrac{1}{2\pi}\log r & d = 2, \\[4pt] \dfrac{r^{2-d}}{d(2-d)V_d} & d \neq 2, \end{cases}$$
with $V_d = \frac{\pi^{d/2}}{\Gamma(\frac d2 + 1)}$ the volume of the d-dimensional unit ball and $\Gamma$ the gamma function.

For $d = 3$, $K(r)$ is just equal to $-\frac{1}{4\pi r}$, yielding the same prefactor as above. The rotational potential $\mathbf A$ is an antisymmetric matrix with the elements:
$$A_{ij}(\mathbf r) = \int_{\mathbb R^d} K(|\mathbf r - \mathbf r'|)\left(\frac{\partial F_j}{\partial x_i'}(\mathbf r') - \frac{\partial F_i}{\partial x_j'}(\mathbf r')\right)\mathrm dV'.$$
Above the diagonal are $\binom{d}{2}$ entries, which occur again mirrored at the diagonal, but with a negative sign. In the three-dimensional case, the matrix elements just correspond to the components of the vector potential $\mathbf A = [A_1, A_2, A_3] = [A_{32}, A_{13}, A_{21}]$. However, such a matrix potential can be written as a vector only in the three-dimensional case, because $\binom{d}{2} = d$ is valid only for $d = 3$.

As in the three-dimensional case, the gradient field is defined as
$$\mathbf G(\mathbf r) = -\nabla\Phi(\mathbf r).$$
The rotational field, on the other hand, is defined in the general case as the row divergence of the matrix:
$$R_j(\mathbf r) = \sum_{i=1}^d \frac{\partial A_{ij}}{\partial x_i}(\mathbf r).$$
In three-dimensional space, this is equivalent to the rotation of the vector potential. [8] [24]
In a $d$-dimensional vector space with $d \neq 3$, $-\frac{1}{4\pi|\mathbf r - \mathbf r'|}$ can be replaced by the appropriate Green's function for the Laplacian, defined by
$$\partial_\mu \partial_\mu G(\mathbf r, \mathbf r') = \delta^d(\mathbf r - \mathbf r'),$$
where the Einstein summation convention is used for the index $\mu$. For example, $G(\mathbf r, \mathbf r') = \frac{1}{2\pi}\ln|\mathbf r - \mathbf r'|$ in 2D.

Following the same steps as above, we can write
$$F_k(\mathbf r) = \int F_k(\mathbf r')\,\delta^d(\mathbf r - \mathbf r')\,\mathrm dV' = \delta_{kj}\delta_{\mu\nu}\int F_j(\mathbf r')\,\partial_\mu\partial_\nu G(\mathbf r, \mathbf r')\,\mathrm dV',$$
where $\delta_{ij}$ is the Kronecker delta (and the summation convention is again used). In place of the definition of the vector Laplacian used above, we now make use of an identity for the Levi-Civita symbol $\varepsilon$,
$$\varepsilon_{k\mu\alpha}\,\varepsilon_{j\nu\alpha} = (d-2)!\left(\delta_{kj}\delta_{\mu\nu} - \delta_{k\nu}\delta_{\mu j}\right),$$
which is valid in $d \ge 2$ dimensions, where $\alpha = \alpha_1\cdots\alpha_{d-2}$ is a $(d-2)$-component multi-index. This gives
$$F_k(\mathbf r) = \delta_{k\nu}\delta_{\mu j}\int F_j\,\partial_\mu\partial_\nu G\,\mathrm dV' + \frac{1}{(d-2)!}\,\varepsilon_{k\mu\alpha}\,\partial_\mu\int \varepsilon_{j\nu\alpha}\,F_j\,\partial_\nu G\,\mathrm dV'.$$

We can therefore write
$$F_k(\mathbf r) = -\partial_k\Phi(\mathbf r) + \varepsilon_{k\mu\alpha}\,\partial_\mu A_\alpha(\mathbf r),$$
where
$$\Phi(\mathbf r) = -\int F_j(\mathbf r')\,\partial_j G(\mathbf r, \mathbf r')\,\mathrm dV', \qquad A_\alpha(\mathbf r) = \frac{1}{(d-2)!}\int \varepsilon_{j\nu\alpha}\,F_j(\mathbf r')\,\partial_\nu G(\mathbf r, \mathbf r')\,\mathrm dV'.$$
Note that the vector potential is replaced by a rank-$(d-2)$ tensor in $d$ dimensions.

Because $G(\mathbf r, \mathbf r')$ is a function of $\mathbf r - \mathbf r'$ only, one can replace $\partial_\mu \to -\partial'_\mu$, giving
$$\Phi(\mathbf r) = \int F_j(\mathbf r')\,\partial'_j G(\mathbf r, \mathbf r')\,\mathrm dV', \qquad A_\alpha(\mathbf r) = -\frac{1}{(d-2)!}\int \varepsilon_{j\nu\alpha}\,F_j(\mathbf r')\,\partial'_\nu G(\mathbf r, \mathbf r')\,\mathrm dV'.$$
Integration by parts can then be used to give
$$\Phi(\mathbf r) = -\int G(\mathbf r, \mathbf r')\,\nabla'\cdot\mathbf F(\mathbf r')\,\mathrm dV' + \oint_S G(\mathbf r, \mathbf r')\,\mathbf F(\mathbf r')\cdot\hat{\mathbf n}'\,\mathrm dS',$$
$$A_\alpha(\mathbf r) = \frac{1}{(d-2)!}\int \varepsilon_{j\nu\alpha}\,G(\mathbf r, \mathbf r')\,\partial'_\nu F_j(\mathbf r')\,\mathrm dV' - \frac{1}{(d-2)!}\oint_S \varepsilon_{j\nu\alpha}\,G(\mathbf r, \mathbf r')\,F_j(\mathbf r')\,\hat n'_\nu\,\mathrm dS',$$
where $S$ is the boundary of $V$. These expressions are analogous to those given above for three-dimensional space.
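The case $d = 2$ can be made concrete: the antisymmetric matrix potential then has a single independent entry $a = A_{12}$, and (with one common sign convention, assumed here) its row divergence is the rotation field $\mathbf R = (-\partial a/\partial y,\ \partial a/\partial x)$. The following sketch, assuming NumPy and a periodic grid with an arbitrary zero-mean test field, computes both potentials spectrally and verifies $\mathbf F = \mathbf G + \mathbf R$:

```python
import numpy as np

N, L = 64, 2 * np.pi
s = np.arange(N) * L / N
X, Y = np.meshgrid(s, s, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0  # dummy; the mean mode is zeroed in inv_laplacian

def deriv(f, Ki):
    return np.real(np.fft.ifftn(1j * Ki * np.fft.fftn(f)))

def inv_laplacian(f):
    # zero-mean solution of laplacian(u) = f on the periodic grid
    fh = np.fft.fftn(f)
    fh[0, 0] = 0.0
    return np.real(np.fft.ifftn(-fh / K2))

# zero-mean test field with both a gradient and a rotational part
F = [np.sin(X) * np.cos(Y) + np.cos(Y), np.cos(X) * np.sin(Y) + np.sin(X)]

phi = inv_laplacian(-(deriv(F[0], KX) + deriv(F[1], KY)))  # laplacian(phi) = -div F
a = inv_laplacian(deriv(F[1], KX) - deriv(F[0], KY))       # laplacian(a) = scalar curl of F
G = [-deriv(phi, KX), -deriv(phi, KY)]                     # gradient field G = -grad phi
R = [-deriv(a, KY), deriv(a, KX)]                          # rotation field, row divergence of [[0, a], [-a, 0]]

print(np.max(np.abs(F[0] - G[0] - R[0])))  # ~ 0
print(np.max(np.abs(F[1] - G[1] - R[1])))  # ~ 0
```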
For a further generalization to manifolds, see the discussion of Hodge decomposition below.
The Hodge decomposition is closely related to the Helmholtz decomposition, [25] generalizing from vector fields on R3 to differential forms on a Riemannian manifold M. Most formulations of the Hodge decomposition require M to be compact. [26] Since this is not true of R3, the Hodge decomposition theorem is not strictly a generalization of the Helmholtz theorem. However, the compactness restriction in the usual formulation of the Hodge decomposition can be replaced by suitable decay assumptions at infinity on the differential forms involved, giving a proper generalization of the Helmholtz theorem.
Most textbooks only deal with vector fields decaying faster than $|\mathbf r|^{-\delta}$ with $\delta > 1$ at infinity. [16] [13] [27] However, Otto Blumenthal showed in 1905 that an adapted integration kernel can be used to integrate fields decaying faster than $|\mathbf r|^{-\delta}$ with $\delta > 0$, which is substantially less strict. To achieve this, the kernel $K(\mathbf r - \mathbf r')$ in the convolution integrals has to be replaced by $K'(\mathbf r, \mathbf r') = K(\mathbf r - \mathbf r') - K(-\mathbf r')$. [28] With even more complex integration kernels, solutions can be found even for divergent functions that need not grow faster than polynomial. [12] [13] [24] [29]
For all analytic vector fields that need not go to zero even at infinity, methods based on partial integration and the Cauchy formula for repeated integration [30] can be used to compute closed-form solutions of the rotation and scalar potentials, as in the case of multivariate polynomial, sine, cosine, and exponential functions. [8]
In general, the Helmholtz decomposition is not uniquely defined. A harmonic function $\lambda(\mathbf r)$ is a function that satisfies $\nabla^2\lambda(\mathbf r) = 0$. By adding $\lambda(\mathbf r)$ to the scalar potential $\Phi(\mathbf r)$, a different Helmholtz decomposition can be obtained:
$$\mathbf G'(\mathbf r) = \mathbf G(\mathbf r) - \nabla\lambda(\mathbf r), \qquad \mathbf R'(\mathbf r) = \mathbf R(\mathbf r) + \nabla\lambda(\mathbf r).$$
For vector fields $\mathbf F$ decaying at infinity, it is a plausible choice that the scalar and rotation potentials also decay at infinity. Because $\lambda = 0$ is the only harmonic function with this property, which follows from Liouville's theorem, this guarantees the uniqueness of the gradient and rotation fields. [31]
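This gauge freedom can be demonstrated with a small numerical example, assuming NumPy. Adding the harmonic function $\lambda = xy$ (which satisfies $\nabla^2\lambda = 0$) to the scalar potential of a simple decomposition shifts the gradient and rotation fields in opposite directions, but leaves their sum and the defining properties intact:

```python
import numpy as np

def grad_lam(p):
    # gradient of the harmonic function lambda = x*y
    x, y, z = p
    return np.array([y, x, 0.0])

def G(p):
    # gradient field for Phi = x^2 + y^2 + z^2, i.e. G = -grad Phi
    return -2.0 * p

def R(p):
    # solenoidal part of the example field
    x, y, z = p
    return np.array([-y, x, 0.0])

def G2(p):  # shifted gradient field G' = G - grad(lambda)
    return G(p) - grad_lam(p)

def R2(p):  # shifted rotation field R' = R + grad(lambda)
    return R(p) + grad_lam(p)

H = 1e-5

def divergence(f, p):
    out = 0.0
    for i in range(3):
        e = np.zeros(3)
        e[i] = H
        out += (f(p + e)[i] - f(p - e)[i]) / (2 * H)
    return out

p = np.array([0.8, -0.3, 0.5])
print(G(p) + R(p) - (G2(p) + R2(p)))  # exactly 0: the decomposed field is unchanged
print(divergence(R2, p))              # ~ 0: R' is still solenoidal
```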
This uniqueness does not apply to the potentials: In the three-dimensional case, the scalar and vector potential jointly have four components, whereas the vector field has only three. The vector field is invariant to gauge transformations and the choice of appropriate potentials known as gauge fixing is the subject of gauge theory. Important examples from physics are the Lorenz gauge condition and the Coulomb gauge. An alternative is to use the poloidal–toroidal decomposition.
The Helmholtz theorem is of particular interest in electrodynamics, since it can be used to write Maxwell's equations in terms of potentials and solve them more easily. The Helmholtz decomposition can be used to prove that, given electric current density and charge density, the electric field and the magnetic flux density can be determined. They are unique if the densities vanish at infinity and one assumes the same for the potentials. [16]
In fluid dynamics, the Helmholtz projection plays an important role, especially for the solvability theory of the Navier–Stokes equations. If the Helmholtz projection $P$ is applied to the linearized incompressible Navier–Stokes equations, the Stokes equation is obtained. This depends only on the velocity of the particles in the flow, not on the static pressure, allowing the equation to be reduced to one unknown. Both equations, the Stokes and the linearized equations, are nevertheless equivalent. The operator $P\Delta$ is called the Stokes operator. [32]
In the theory of dynamical systems, Helmholtz decomposition can be used to determine "quasipotentials" as well as to compute Lyapunov functions in some cases. [33] [34] [35]
For some dynamical systems such as the Lorenz system (Edward N. Lorenz, 1963 [36] ), a simplified model for atmospheric convection, a closed-form expression of the Helmholtz decomposition can be obtained:
$$\dot{\mathbf x} = \mathbf F(\mathbf x) = \big(\sigma(x_2 - x_1),\; x_1(\rho - x_3) - x_2,\; x_1 x_2 - \beta x_3\big).$$
The Helmholtz decomposition of $\mathbf F(\mathbf x) = \mathbf G(\mathbf x) + \mathbf R(\mathbf x)$, with the scalar potential $\Phi(\mathbf x) = \tfrac{\sigma}{2}x_1^2 + \tfrac{1}{2}x_2^2 + \tfrac{\beta}{2}x_3^2$, is given as:
$$\mathbf G(\mathbf x) = -\nabla\Phi(\mathbf x) = \big(-\sigma x_1,\; -x_2,\; -\beta x_3\big), \qquad \mathbf R(\mathbf x) = \big(\sigma x_2,\; x_1(\rho - x_3),\; x_1 x_2\big).$$
The quadratic scalar potential provides motion in the direction of the coordinate origin, which is responsible for the stable fix point for some parameter range. For other parameters, the rotation field ensures that a strange attractor is created, causing the model to exhibit a butterfly effect. [8] [37]
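The closed-form decomposition of the Lorenz field is easy to verify directly. This sketch, assuming NumPy, the standard Lorenz parameters, and the quadratic potential $\Phi = \tfrac12(\sigma x_1^2 + x_2^2 + \beta x_3^2)$, checks $\mathbf F = -\nabla\Phi + \mathbf R$ at a sample point:

```python
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def F(p):
    # the Lorenz vector field
    x1, x2, x3 = p
    return np.array([sigma * (x2 - x1), x1 * (rho - x3) - x2, x1 * x2 - beta * x3])

def grad_Phi(p):
    # gradient of Phi = (sigma*x1^2 + x2^2 + beta*x3^2) / 2
    x1, x2, x3 = p
    return np.array([sigma * x1, x2, beta * x3])

def R(p):
    # rotation field; each component R_i is independent of x_i, so div R = 0
    x1, x2, x3 = p
    return np.array([sigma * x2, x1 * (rho - x3), x1 * x2])

p = np.array([1.5, -2.0, 3.0])
print(F(p) - (-grad_Phi(p) + R(p)))  # exactly (0, 0, 0)
```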
In magnetic resonance elastography, a variant of MR imaging where mechanical waves are used to probe the viscoelasticity of organs, the Helmholtz decomposition is sometimes used to separate the measured displacement fields into their shear component (divergence-free) and compression component (curl-free). [38] In this way, the complex shear modulus can be calculated without contributions from compression waves.
The Helmholtz decomposition is also used in the field of computer engineering. This includes robotics, image reconstruction but also computer animation, where the decomposition is used for realistic visualization of fluids or vector fields. [15] [39]