In vector calculus, divergence is a vector operator that operates on a vector field, producing a scalar field giving the quantity of the vector field's source at each point. More technically, the divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point.
As an example, consider air as it is heated or cooled. The velocity of the air at each point defines a vector field. While air is heated in a region, it expands in all directions, and thus the velocity field points outward from that region. The divergence of the velocity field in that region would thus have a positive value. While the air is cooled and thus contracting, the divergence of the velocity has a negative value.
In physical terms, the divergence of a vector field is the extent to which the vector field flux behaves like a source or a sink at a given point. It is a local measure of its "outgoingness" – the extent to which there are more of the field vectors exiting from an infinitesimal region of space than entering it. A point at which the flux is outgoing has positive divergence, and is often called a "source" of the field. A point at which the flux is directed inward has negative divergence, and is often called a "sink" of the field. The greater the flux of field through a small surface enclosing a given point, the greater the value of divergence at that point. A point at which there is zero flux through an enclosing surface has zero divergence.
The divergence of a vector field is often illustrated using the simple example of the velocity field of a fluid, a liquid or gas. A moving gas has a velocity, a speed and direction at each point, which can be represented by a vector, so the velocity of the gas forms a vector field. If a gas is heated, it will expand. This will cause a net motion of gas particles outward in all directions. Any closed surface in the gas will enclose gas which is expanding, so there will be an outward flux of gas through the surface. So the velocity field will have positive divergence everywhere. Similarly, if the gas is cooled, it will contract. There will be more room for gas particles in any volume, so the external pressure of the fluid will cause a net flow of gas volume inward through any closed surface. Therefore, the velocity field has negative divergence everywhere. In contrast, in a gas at a constant temperature and pressure, the net flux of gas out of any closed surface is zero. The gas may be moving, but the volume rate of gas flowing into any closed surface must equal the volume rate flowing out, so the net flux is zero. Thus the gas velocity has zero divergence everywhere. A field which has zero divergence everywhere is called solenoidal.
If the gas is heated only at one point or small region, or a small tube is introduced which supplies a source of additional gas at one point, the gas there will expand, pushing fluid particles around it outward in all directions. This will cause an outward velocity field throughout the gas, centered on the heated point. Any closed surface enclosing the heated point will have a flux of gas particles passing out of it, so there is positive divergence at that point. However any closed surface not enclosing the point will have a constant density of gas inside, so just as many fluid particles are entering as leaving the volume, thus the net flux out of the volume is zero. Therefore, the divergence at any other point is zero.
The divergence of a vector field F(x) at a point x0 is defined as the limit of the ratio of the surface integral of F out of the closed surface of a volume V enclosing x0 to the volume of V, as V shrinks to zero:

$$\operatorname{div}\mathbf{F}\big|_{\mathbf{x}_0} = \lim_{V \to 0} \frac{1}{|V|} \oint_{S(V)} \mathbf{F} \cdot \hat{\mathbf{n}} \, dS$$

where |V| is the volume of V, S(V) is the boundary of V, and $\hat{\mathbf{n}}$ is the outward unit normal to that surface. It can be shown that the above limit always converges to the same value for any sequence of volumes that contain x0 and approach zero volume. The result, div F, is a scalar function of x.
Since this definition is coordinate-free, it shows that the divergence is the same in any coordinate system. However it is not often used practically to calculate divergence; when the vector field is given in a coordinate system the coordinate definitions below are much simpler to use.
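As a quick illustration of the limit definition (a worked example added here for concreteness, with the field chosen only for illustration), take F(x) = x and let V be a ball of radius ε centered at the origin. On its boundary sphere, $\mathbf{F}\cdot\hat{\mathbf{n}} = \varepsilon$ everywhere, so

$$\frac{1}{|V|}\oint_{S(V)} \mathbf{F}\cdot\hat{\mathbf{n}}\,dS = \frac{\varepsilon \cdot 4\pi\varepsilon^2}{\tfrac{4}{3}\pi\varepsilon^3} = 3,$$

independent of ε, giving div F = 3 at the origin; the same argument centered at any other point gives the same value.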
A vector field with zero divergence everywhere is called solenoidal – in which case any closed surface has no net flux across it.
In three-dimensional Cartesian coordinates, the divergence of a continuously differentiable vector field $\mathbf{F} = F_x\,\mathbf{i} + F_y\,\mathbf{j} + F_z\,\mathbf{k}$ is defined as the scalar-valued function:

$$\operatorname{div}\mathbf{F} = \nabla\cdot\mathbf{F} = \left(\frac{\partial}{\partial x},\,\frac{\partial}{\partial y},\,\frac{\partial}{\partial z}\right)\cdot(F_x,\,F_y,\,F_z) = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}.$$
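For example (a field chosen here purely for illustration), for $\mathbf{F}(x,y,z) = (x^2,\,yz,\,z)$:

$$\nabla\cdot\mathbf{F} = \frac{\partial}{\partial x}(x^2) + \frac{\partial}{\partial y}(yz) + \frac{\partial}{\partial z}(z) = 2x + z + 1.$$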
Although expressed in terms of coordinates, the result is invariant under rotations, as the physical interpretation suggests. This is because the trace of the Jacobian matrix of an N-dimensional vector field F in N-dimensional space is invariant under any invertible linear transformation.
The common notation for the divergence ∇ · F is a convenient mnemonic, where the dot denotes an operation reminiscent of the dot product: take the components of the ∇ operator (see del), apply them to the corresponding components of F, and sum the results. Because applying an operator is different from multiplying the components, this is considered an abuse of notation.
For a vector expressed in local unit cylindrical coordinates as

$$\mathbf{F} = \mathbf{e}_r F_r + \mathbf{e}_\theta F_\theta + \mathbf{e}_z F_z,$$

where $\mathbf{e}_a$ is the unit vector in direction a, the divergence is [1]

$$\operatorname{div}\mathbf{F} = \nabla\cdot\mathbf{F} = \frac{1}{r}\frac{\partial}{\partial r}(r F_r) + \frac{1}{r}\frac{\partial F_\theta}{\partial\theta} + \frac{\partial F_z}{\partial z}.$$
The use of local coordinates is vital for the validity of the expression. If we consider x the position vector and the functions r(x), θ(x), and z(x), which assign the corresponding global cylindrical coordinate to a vector, in general $r(\mathbf{F}(\mathbf{x})) \neq F_r(\mathbf{x})$, $\theta(\mathbf{F}(\mathbf{x})) \neq F_\theta(\mathbf{x})$, and $z(\mathbf{F}(\mathbf{x})) \neq F_z(\mathbf{x})$. In particular, if we consider the identity function F(x) = x, we find that:

$$\theta(\mathbf{F}(\mathbf{x})) = \theta \neq F_\theta(\mathbf{x}) = 0.$$
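As an illustration (an example added for concreteness), consider the outward radial field with $F_r = r$ and $F_\theta = F_z = 0$, which in Cartesian coordinates is $(x, y, 0)$:

$$\nabla\cdot\mathbf{F} = \frac{1}{r}\frac{\partial}{\partial r}(r\cdot r) = \frac{2r}{r} = 2,$$

in agreement with the Cartesian computation $\partial x/\partial x + \partial y/\partial y = 2$.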
In spherical coordinates, with θ the angle with the z axis and φ the rotation around the z axis, and F again written in local unit coordinates, the divergence is [2]

$$\operatorname{div}\mathbf{F} = \nabla\cdot\mathbf{F} = \frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2 F_r\right) + \frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}(\sin\theta\, F_\theta) + \frac{1}{r\sin\theta}\frac{\partial F_\varphi}{\partial\varphi}.$$
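For example (again purely as an illustration), the radial field with $F_r = r$ and $F_\theta = F_\varphi = 0$, which in Cartesian coordinates is $(x, y, z)$, gives

$$\nabla\cdot\mathbf{F} = \frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2\cdot r\right) = \frac{3r^2}{r^2} = 3,$$

matching $\partial x/\partial x + \partial y/\partial y + \partial z/\partial z = 3$.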
Let A be a continuously differentiable second-order tensor field defined as follows:

$$\mathbf{A} = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix}$$
the divergence in the Cartesian coordinate system is a first-order tensor field [3] and can be defined in two ways: [4]

$$\operatorname{div}(\mathbf{A}) = \frac{\partial A_{ik}}{\partial x_k}\,\mathbf{e}_i = A_{ik,k}\,\mathbf{e}_i$$

and

$$\nabla\cdot\mathbf{A} = \frac{\partial A_{ki}}{\partial x_k}\,\mathbf{e}_i = A_{ki,k}\,\mathbf{e}_i.$$
We have

$$\operatorname{div}\!\left(\mathbf{A}^{\mathsf{T}}\right) = \nabla\cdot\mathbf{A}.$$
If the tensor is symmetric, $A_{ij} = A_{ji}$, then $\operatorname{div}(\mathbf{A}) = \nabla\cdot\mathbf{A}$. Because of this, often in the literature the two definitions (and symbols div and $\nabla\cdot$) are used interchangeably (especially in mechanics equations where tensor symmetry is assumed).
Expressions of $\nabla\cdot\mathbf{A}$ in cylindrical and spherical coordinates are given in the article del in cylindrical and spherical coordinates.
Using Einstein notation we can consider the divergence in general coordinates, which we write as $x^1, \ldots, x^i, \ldots, x^n$, where n is the number of dimensions of the domain. Here, the upper index refers to the number of the coordinate or component, so $x^2$ refers to the second component, and not the quantity x squared. The index variable i is used to refer to an arbitrary component, such as $x^i$. The divergence can then be written via the Voss–Weyl formula, [8] as:

$$\operatorname{div}(\mathbf{F}) = \frac{1}{\rho}\frac{\partial\!\left(\rho\, F^i\right)}{\partial x^i},$$
where ρ is the local coefficient of the volume element and $F^i$ are the components of $\mathbf{F} = F^i \mathbf{e}_i$ with respect to the local unnormalized covariant basis (sometimes written as $\mathbf{e}_i = \partial\mathbf{x}/\partial x^i$). The Einstein notation implies summation over i, since it appears as both an upper and lower index.
The volume coefficient ρ is a function of position which depends on the coordinate system. In Cartesian, cylindrical and spherical coordinates, using the same conventions as before, we have ρ = 1, ρ = r and ρ = r² sin θ, respectively. The volume can also be expressed as $\rho = \sqrt{\left|\det g_{ab}\right|}$, where $g_{ab}$ is the metric tensor. The determinant appears because it provides the appropriate invariant definition of the volume, given a set of vectors. Since the determinant is a scalar quantity which doesn't depend on the indices, these can be suppressed, writing $\rho = \sqrt{\left|\det g\right|}$. The absolute value is taken in order to handle the general case where the determinant might be negative, such as in pseudo-Riemannian spaces. The reason for the square-root is a bit subtle: it effectively avoids double-counting as one goes from curved to Cartesian coordinates, and back. The volume (the determinant) can also be understood as the Jacobian of the transformation from Cartesian to curvilinear coordinates, which for n = 3 gives $\rho = \left|\frac{\partial(x, y, z)}{\partial(x^1, x^2, x^3)}\right|$.
Some conventions expect all local basis elements to be normalized to unit length, as was done in the previous sections. If we write $\hat{\mathbf{e}}_i$ for the normalized basis, and $\hat{F}^i$ for the components of F with respect to it, we have that

$$\mathbf{F} = F^i \mathbf{e}_i = F^i \lVert\mathbf{e}_i\rVert \frac{\mathbf{e}_i}{\lVert\mathbf{e}_i\rVert} = F^i \sqrt{g_{ii}}\,\hat{\mathbf{e}}_i = \hat{F}^i \hat{\mathbf{e}}_i,$$

using one of the properties of the metric tensor. By dotting both sides of the last equality with the contravariant element $\hat{\mathbf{e}}^i$, we can conclude that $F^i = \hat{F}^i / \sqrt{g_{ii}}$. After substituting, the formula becomes:

$$\operatorname{div}(\mathbf{F}) = \frac{1}{\rho}\frac{\partial\!\left(\frac{\rho}{\sqrt{g_{ii}}}\,\hat{F}^i\right)}{\partial x^i} = \frac{1}{\sqrt{\det g}}\frac{\partial\!\left(\sqrt{\frac{\det g}{g_{ii}}}\,\hat{F}^i\right)}{\partial x^i}.$$
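As a consistency check (a worked example added here), take spherical coordinates with ρ = r² sin θ and $g_{rr} = 1$, $g_{\theta\theta} = r^2$, $g_{\varphi\varphi} = r^2\sin^2\theta$; the last formula then reproduces the spherical expression given above:

$$\operatorname{div}(\mathbf{F}) = \frac{1}{r^2\sin\theta}\left[\frac{\partial}{\partial r}\!\left(r^2\sin\theta\,\hat{F}^r\right) + \frac{\partial}{\partial\theta}\!\left(r\sin\theta\,\hat{F}^\theta\right) + \frac{\partial}{\partial\varphi}\!\left(r\,\hat{F}^\varphi\right)\right] = \frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2 \hat{F}^r\right) + \frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}\!\left(\sin\theta\,\hat{F}^\theta\right) + \frac{1}{r\sin\theta}\frac{\partial\hat{F}^\varphi}{\partial\varphi}.$$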
See § In curvilinear coordinates for further discussion.
The following properties can all be derived from the ordinary differentiation rules of calculus. Most importantly, the divergence is a linear operator, i.e.,

$$\operatorname{div}(a\mathbf{F} + b\mathbf{G}) = a\,\operatorname{div}\mathbf{F} + b\,\operatorname{div}\mathbf{G}$$
for all vector fields F and G and all real numbers a and b.
There is a product rule of the following type: if φ is a scalar-valued function and F is a vector field, then

$$\operatorname{div}(\varphi\mathbf{F}) = \operatorname{grad}\varphi\cdot\mathbf{F} + \varphi\,\operatorname{div}\mathbf{F},$$

or in more suggestive notation

$$\nabla\cdot(\varphi\mathbf{F}) = (\nabla\varphi)\cdot\mathbf{F} + \varphi\,(\nabla\cdot\mathbf{F}).$$
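For instance (a worked check, with the field chosen only for illustration), take φ = x and F = (y, z, x) in Cartesian coordinates, so that $\nabla\cdot\mathbf{F} = 0$ and $\nabla\varphi = (1,0,0)$:

$$\nabla\cdot(\varphi\mathbf{F}) = \frac{\partial(xy)}{\partial x} + \frac{\partial(xz)}{\partial y} + \frac{\partial(x^2)}{\partial z} = y, \qquad (\nabla\varphi)\cdot\mathbf{F} + \varphi\,(\nabla\cdot\mathbf{F}) = (1,0,0)\cdot(y,z,x) + x\cdot 0 = y,$$

so both sides agree.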
Another product rule for the cross product of two vector fields F and G in three dimensions involves the curl and reads as follows:

$$\operatorname{div}(\mathbf{F}\times\mathbf{G}) = \operatorname{curl}\mathbf{F}\cdot\mathbf{G} - \mathbf{F}\cdot\operatorname{curl}\mathbf{G},$$

or

$$\nabla\cdot(\mathbf{F}\times\mathbf{G}) = (\nabla\times\mathbf{F})\cdot\mathbf{G} - \mathbf{F}\cdot(\nabla\times\mathbf{G}).$$
The Laplacian of a scalar field is the divergence of the field's gradient:

$$\operatorname{div}(\operatorname{grad}\varphi) = \Delta\varphi.$$
The divergence of the curl of any vector field (in three dimensions) is equal to zero:

$$\nabla\cdot(\nabla\times\mathbf{F}) = 0.$$
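This identity can be checked directly in Cartesian coordinates (a short derivation added here), using the equality of mixed second partial derivatives for a twice continuously differentiable F:

$$\nabla\cdot(\nabla\times\mathbf{F}) = \frac{\partial}{\partial x}\!\left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right) + \frac{\partial}{\partial y}\!\left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right) + \frac{\partial}{\partial z}\!\left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right) = 0,$$

since each mixed partial appears twice with opposite signs.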
If a vector field F with zero divergence is defined on a ball in R3, then there exists some vector field G on the ball with F = curl G. For regions in R3 more topologically complicated than this, the latter statement might be false (see Poincaré lemma). The degree of failure of the truth of the statement, measured by the homology of the chain complex

$$\{\text{scalar fields on } U\} ~\overset{\operatorname{grad}}{\longrightarrow}~ \{\text{vector fields on } U\} ~\overset{\operatorname{curl}}{\longrightarrow}~ \{\text{vector fields on } U\} ~\overset{\operatorname{div}}{\longrightarrow}~ \{\text{scalar fields on } U\}$$
serves as a nice quantification of the complicatedness of the underlying region U. These are the beginnings and main motivations of de Rham cohomology.
It can be shown that any stationary flux v(r) that is twice continuously differentiable in R3 and vanishes sufficiently fast for |r| → ∞ can be decomposed uniquely into an irrotational part E(r) and a source-free part B(r). Moreover, these parts are explicitly determined by the respective source densities (see above) and circulation densities (see the article Curl):
For the irrotational part one has

$$\mathbf{E} = -\nabla\Phi(\mathbf{r}),$$

with

$$\Phi(\mathbf{r}) = \int_{\mathbb{R}^3} d^3\mathbf{r}'\; \frac{\operatorname{div}\mathbf{v}(\mathbf{r}')}{4\pi\left|\mathbf{r} - \mathbf{r}'\right|}.$$
The source-free part, B, can be similarly written: one only has to replace the scalar potential Φ(r) by a vector potential A(r) and the terms −∇Φ by +∇ × A, and the source density div v by the circulation density ∇ × v.
This "decomposition theorem" is a by-product of the stationary case of electrodynamics. It is a special case of the more general Helmholtz decomposition, which works in dimensions greater than three as well.
The divergence of a vector field can be defined in any finite number of dimensions. If

$$\mathbf{F} = (F_1, F_2, \ldots, F_n),$$

in a Euclidean coordinate system with coordinates $x_1, x_2, \ldots, x_n$, define

$$\operatorname{div}\mathbf{F} = \nabla\cdot\mathbf{F} = \frac{\partial F_1}{\partial x_1} + \frac{\partial F_2}{\partial x_2} + \cdots + \frac{\partial F_n}{\partial x_n}.$$
In the 1D case, F reduces to a regular function, and the divergence reduces to the derivative.
For any n, the divergence is a linear operator, and it satisfies the "product rule"

$$\nabla\cdot(\varphi\mathbf{F}) = (\nabla\varphi)\cdot\mathbf{F} + \varphi\,(\nabla\cdot\mathbf{F})$$
for any scalar-valued function φ.
One can express the divergence as a particular case of the exterior derivative, which takes a 2-form to a 3-form in R3. Define the current two-form as

$$j = F_1\,dy\wedge dz + F_2\,dz\wedge dx + F_3\,dx\wedge dy.$$
It measures the amount of "stuff" flowing through a surface per unit time in a "stuff fluid" of density ρ = 1 dx ∧ dy ∧ dz moving with local velocity F. Its exterior derivative dj is then given by

$$dj = \left(\frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z}\right) dx\wedge dy\wedge dz = (\nabla\cdot\mathbf{F})\,\rho,$$

where ∧ is the wedge product.
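As a small worked example (added for concreteness), for F = (x, y, z) the current two-form is $j = x\,dy\wedge dz + y\,dz\wedge dx + z\,dx\wedge dy$, and

$$dj = \left(\frac{\partial x}{\partial x} + \frac{\partial y}{\partial y} + \frac{\partial z}{\partial z}\right) dx\wedge dy\wedge dz = 3\,dx\wedge dy\wedge dz = (\nabla\cdot\mathbf{F})\,\rho,$$

consistent with the divergence of 3 computed for this field earlier.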
Thus, the divergence of the vector field F can be expressed as:

$$\operatorname{div}(\mathbf{F}) = {\star}\, d\, {\star}\!\big(\mathbf{F}^\flat\big).$$
Here the superscript ♭ is one of the two musical isomorphisms, and ⋆ is the Hodge star operator. When the divergence is written in this way, the operator is referred to as the codifferential. Working with the current two-form and the exterior derivative is usually easier than working with the vector field and divergence, because unlike the divergence, the exterior derivative commutes with a change of (curvilinear) coordinate system.
The appropriate expression is more complicated in curvilinear coordinates. The divergence of a vector field extends naturally to any differentiable manifold of dimension n that has a volume form (or density) μ, e.g. a Riemannian or Lorentzian manifold. Generalising the construction of a two-form for a vector field on R3, on such a manifold a vector field X defines an (n − 1)-form j = iX μ obtained by contracting X with μ. The divergence is then the function defined by

$$dj = (\operatorname{div} X)\,\mu.$$
The divergence can be defined in terms of the Lie derivative as

$$\mathcal{L}_X \mu = (\operatorname{div} X)\,\mu.$$
This means that the divergence measures the rate of expansion of a unit of volume (a volume element) as it flows with the vector field.
On a pseudo-Riemannian manifold, the divergence with respect to the volume can be expressed in terms of the Levi-Civita connection ∇:

$$\operatorname{div} X = \nabla\cdot X = {X^a}_{;a},$$
where the second expression is the contraction of the vector field valued 1-form ∇X with itself and the last expression is the traditional coordinate expression from Ricci calculus.
An equivalent expression without using a connection is

$$\operatorname{div}(X) = \frac{1}{\sqrt{\left|\det g\right|}}\,\partial_a\!\left(\sqrt{\left|\det g\right|}\,X^a\right),$$

where g is the metric and $\partial_a$ denotes the partial derivative with respect to coordinate $x^a$. The square-root of the (absolute value of the determinant of the) metric appears because the divergence must be written with the correct conception of the volume. In curvilinear coordinates, the basis vectors are no longer orthonormal; the determinant encodes the correct idea of volume in this case. It appears twice here: once so that the $X^a$ can be transformed into "flat space" (where the coordinates are actually orthonormal), and once again so that $\partial_a$ is also transformed into "flat space", so that finally the "ordinary" divergence can be written with the "ordinary" concept of volume in flat space (i.e. unit volume, i.e. one, i.e. not written down). The square-root appears in the denominator because the derivative $\partial_a$ transforms covariantly, in the opposite way to the vector components $X^a$, which transform contravariantly. This idea of getting to a "flat coordinate system" where local computations can be done in a conventional way is called a vielbein. A different way to see this is to note that the divergence is the codifferential in disguise. That is, the divergence corresponds to the expression ⋆d⋆ built from the differential and the Hodge star. The Hodge star, by its construction, causes the volume form to appear in all of the right places.
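As a check (worked out here for illustration), in ordinary spherical coordinates on R3 the metric is diag(1, r², r² sin²θ), so $\sqrt{\left|\det g\right|} = r^2\sin\theta$ and the formula gives

$$\operatorname{div}(X) = \frac{1}{r^2\sin\theta}\left[\partial_r\!\left(r^2\sin\theta\, X^r\right) + \partial_\theta\!\left(r^2\sin\theta\, X^\theta\right) + \partial_\varphi\!\left(r^2\sin\theta\, X^\varphi\right)\right],$$

which matches the earlier spherical expression once the coordinate components are rewritten in terms of the normalized (unit-basis) components, $F_r = X^r$, $F_\theta = r X^\theta$, $F_\varphi = r\sin\theta\, X^\varphi$.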
Divergence can also be generalised to tensors. In Einstein notation, the divergence of a contravariant vector $F^\mu$ is given by

$$\operatorname{div}(\mathbf{F}) = \nabla_\mu F^\mu,$$

where $\nabla_\mu$ denotes the covariant derivative. In this general setting, the correct formulation of the divergence is to recognize that it is a codifferential; the appropriate properties follow from there.
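Expanding the covariant derivative (a standard identity for the Levi-Civita connection, reproduced here to connect with the connection-free expression above) shows that the two formulations agree:

$$\nabla_\mu F^\mu = \partial_\mu F^\mu + \Gamma^\mu_{\mu\lambda} F^\lambda = \partial_\mu F^\mu + \frac{\partial_\lambda\sqrt{\left|\det g\right|}}{\sqrt{\left|\det g\right|}}\, F^\lambda = \frac{1}{\sqrt{\left|\det g\right|}}\,\partial_\mu\!\left(\sqrt{\left|\det g\right|}\, F^\mu\right),$$

using the contracted Christoffel symbol identity $\Gamma^\mu_{\mu\lambda} = \partial_\lambda \ln\sqrt{\left|\det g\right|}$.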
Equivalently, some authors define the divergence of a mixed tensor by using the musical isomorphism ♯: if T is a (p, q)-tensor (p for the contravariant vector and q for the covariant one), then we define the divergence of T to be the (p, q − 1)-tensor

$$(\operatorname{div} T)(Y_1, \ldots, Y_{q-1}) = \operatorname{trace}\Big(X \mapsto \sharp(\nabla T)(X, \cdot, Y_1, \ldots, Y_{q-1})\Big);$$

that is, we take the trace over the first two covariant indices of the covariant derivative. [a] The symbol ♯ refers to the musical isomorphism.
See also: Curl (mathematics), Gradient, Spherical coordinate system, Laplace's equation, Navier–Stokes equations, Potential flow, Laplace operator, Poisson's equation, Jacobian matrix and determinant, Stokes' law, Scalar potential, Del in cylindrical and spherical coordinates, Christoffel symbols, Vector calculus identities, Mathematical descriptions of the electromagnetic field, Derivation of the Navier–Stokes equations, Vector spherical harmonics, Oseen equations, Curvilinear coordinates, Lagrangian field theory.