In multivariable calculus, the directional derivative measures the rate at which a function changes in a particular direction at a given point.[citation needed]
The directional derivative of a multivariable differentiable (scalar) function along a given vector v at a given point x intuitively represents the instantaneous rate of change of the function, moving through x with a direction specified by v.
The directional derivative of a scalar function f with respect to a vector v at a point (e.g., position) x may be denoted by any of the following: \( \nabla_{\mathbf v} f(\mathbf x) \), \( f'_{\mathbf v}(\mathbf x) \), \( D_{\mathbf v} f(\mathbf x) \), \( Df(\mathbf x)(\mathbf v) \), \( \partial_{\mathbf v} f(\mathbf x) \), \( \mathbf v \cdot \nabla f(\mathbf x) \), or \( \mathbf v \cdot \frac{\partial f(\mathbf x)}{\partial \mathbf x} \).
It therefore generalizes the notion of a partial derivative, in which the rate of change is taken along one of the curvilinear coordinate curves, all other coordinates being constant. The directional derivative is a special case of the Gateaux derivative.
The directional derivative of a scalar function f along a vector v is the function \( \nabla_{\mathbf v} f \) defined by the limit [1]
\[ \nabla_{\mathbf v} f(\mathbf x) = \lim_{h \to 0} \frac{f(\mathbf x + h \mathbf v) - f(\mathbf x)}{h}. \]
This definition is valid in a broad range of contexts, for example where the norm of a vector (and hence a unit vector) is undefined. [2]
If the function f is differentiable at x, then the directional derivative exists along any unit vector v at x, and one has
\[ \nabla_{\mathbf v} f(\mathbf x) = \nabla f(\mathbf x) \cdot \mathbf v, \]
where the \( \nabla \) on the right denotes the gradient, \( \cdot \) is the dot product and v is a unit vector. [3] This follows from defining a path \( h(t) = \mathbf x + t \mathbf v \) and using the definition of the derivative as a limit, which can be calculated along this path to get:
\[ \nabla_{\mathbf v} f(\mathbf x) = \frac{d}{dt} f(\mathbf x + t \mathbf v) \Big|_{t=0} = \lim_{t \to 0} \frac{f(\mathbf x + t \mathbf v) - f(\mathbf x)}{t} = \nabla f(\mathbf x) \cdot \mathbf v. \]
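The equality of the limit definition and the gradient–dot-product formula can be checked numerically. A minimal Python sketch (the function f(x, y) = x²y, the point, and the direction are illustrative choices, not taken from the source):

```python
# Compare the limit definition of the directional derivative with
# the formula  grad(f) . v  for a differentiable function.
def f(x, y):
    return x**2 * y          # illustrative function; grad f = (2xy, x^2)

def grad_f(x, y):
    return (2 * x * y, x**2)

def directional_derivative(f, x, y, v, h=1e-6):
    """Forward-difference approximation of the limit definition."""
    return (f(x + h * v[0], y + h * v[1]) - f(x, y)) / h

x, y = 1.0, 2.0
v = (3 / 5, 4 / 5)           # a unit vector
approx = directional_derivative(f, x, y, v)
gx, gy = grad_f(x, y)
exact = gx * v[0] + gy * v[1]  # gradient dotted with v
```

For this choice the two values agree to roughly the step size h, as expected for a first-order finite difference.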
Intuitively, the directional derivative of f at a point x represents the rate of change of f, in the direction of v.
In a Euclidean space, some authors [4] define the directional derivative to be with respect to an arbitrary nonzero vector v after normalization, thus being independent of its magnitude and depending only on its direction. [5] This definition gives the rate of increase of f per unit of distance moved in the direction given by v. In this case, one has
\[ \nabla_{\mathbf v} f(\mathbf x) = \lim_{h \to 0} \frac{f(\mathbf x + h \mathbf v) - f(\mathbf x)}{h |\mathbf v|}, \]
or, in case f is differentiable at x,
\[ \nabla_{\mathbf v} f(\mathbf x) = \nabla f(\mathbf x) \cdot \frac{\mathbf v}{|\mathbf v|}. \]
In the context of a function on a Euclidean space, some texts restrict the vector v to being a unit vector. With this restriction, both the above definitions are equivalent. [6]
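The magnitude-independence of the normalized definition can be illustrated directly: scaling v leaves the result unchanged. A small Python sketch (f(x, y) = xy and the test vectors are illustrative assumptions):

```python
def f(x, y):
    return x * y             # illustrative function; grad f = (y, x)

def norm_dir_derivative(f, x, y, v, h=1e-6):
    """Limit definition applied to v normalized to unit length."""
    n = (v[0]**2 + v[1]**2) ** 0.5
    u = (v[0] / n, v[1] / n)
    return (f(x + h * u[0], y + h * u[1]) - f(x, y)) / h

# Scaling v by 10 leaves the normalized directional derivative unchanged.
d1 = norm_dir_derivative(f, 1.0, 2.0, (1.0, 1.0))
d2 = norm_dir_derivative(f, 1.0, 2.0, (10.0, 10.0))
```

Both calls reduce to the derivative along the same unit vector, here (1/√2, 1/√2), so they agree.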
Many of the familiar properties of the ordinary derivative hold for the directional derivative. These include, for any functions f and g defined in a neighborhood of, and differentiable at, p:
1. sum rule: \( \nabla_{\mathbf v}(f + g) = \nabla_{\mathbf v} f + \nabla_{\mathbf v} g \)
2. constant factor rule: for any constant c, \( \nabla_{\mathbf v}(cf) = c \, \nabla_{\mathbf v} f \)
3. product rule (Leibniz's rule): \( \nabla_{\mathbf v}(fg) = g \, \nabla_{\mathbf v} f + f \, \nabla_{\mathbf v} g \)
4. chain rule: if g is differentiable at p and h is differentiable at g(p), then \( \nabla_{\mathbf v}(h \circ g)(p) = h'(g(p)) \, \nabla_{\mathbf v} g(p) \)
Let M be a differentiable manifold and p a point of M. Suppose that f is a function defined in a neighborhood of p, and differentiable at p. If v is a tangent vector to M at p, then the directional derivative of f along v, denoted variously as df(v) (see Exterior derivative), \( \nabla_{\mathbf v} f(p) \) (see Covariant derivative), \( L_{\mathbf v} f(p) \) (see Lie derivative), or \( v_p(f) \) (see Tangent space § Definition via derivations), can be defined as follows. Let γ : [−1, 1] → M be a differentiable curve with γ(0) = p and γ′(0) = v. Then the directional derivative is defined by
\[ \nabla_{\mathbf v} f(p) = \frac{d}{d\tau} f(\gamma(\tau)) \Big|_{\tau = 0}. \]
This definition can be proven independent of the choice of γ, provided γ is selected in the prescribed manner so that γ(0) = p and γ′(0) = v.
The Lie derivative of a vector field \( W^\mu(x) \) along a vector field \( V^\mu(x) \) is given by the difference of two directional derivatives (with vanishing torsion):
\[ \mathcal{L}_V W^\mu = (V \cdot \nabla) W^\mu - (W \cdot \nabla) V^\mu. \]
In particular, for a scalar field \( \phi(x) \), the Lie derivative reduces to the standard directional derivative:
\[ \mathcal{L}_V \phi = (V \cdot \nabla) \phi. \]
Directional derivatives are often used in introductory derivations of the Riemann curvature tensor. Consider a curved rectangle with an infinitesimal vector \( \delta \) along one edge and \( \delta' \) along the other. We translate a covector \( S \) along \( \delta \) then \( \delta' \) and then subtract the translation along \( \delta' \) and then \( \delta \). Instead of building the directional derivative using partial derivatives, we use the covariant derivative. The translation operator for \( \delta \) is thus
\[ 1 + \sum_\nu \delta^\nu D_\nu = 1 + \delta \cdot D, \]
and for \( \delta' \),
\[ 1 + \sum_\mu \delta'^\mu D_\mu = 1 + \delta' \cdot D. \]
The difference between the two paths is then
\[ (1 + \delta' \cdot D)(1 + \delta \cdot D) S^\rho - (1 + \delta \cdot D)(1 + \delta' \cdot D) S^\rho = \delta'^\mu \delta^\nu [D_\mu, D_\nu] S^\rho. \]
It can be argued [7] that the noncommutativity of the covariant derivatives measures the curvature of the manifold:
\[ [D_\mu, D_\nu] S^\rho = \pm R^\rho{}_{\sigma\mu\nu} S^\sigma, \]
where \( R \) is the Riemann curvature tensor and the sign depends on the sign convention of the author.
In the Poincaré algebra, we can define an infinitesimal translation operator P as
\[ \mathbf P = i \nabla \]
(the i ensures that P is a self-adjoint operator). For a finite displacement λ, the unitary Hilbert space representation for translations is [8]
\[ U(\boldsymbol\lambda) = \exp(-i \boldsymbol\lambda \cdot \mathbf P). \]
By using the above definition of the infinitesimal translation operator, we see that the finite translation operator is an exponentiated directional derivative:
\[ U(\boldsymbol\lambda) = \exp(\boldsymbol\lambda \cdot \nabla). \]
This is a translation operator in the sense that it acts on multivariable functions f(x) as
\[ U(\boldsymbol\lambda) f(\mathbf x) = \exp(\boldsymbol\lambda \cdot \nabla) f(\mathbf x) = f(\mathbf x + \boldsymbol\lambda). \]
In standard single-variable calculus, the derivative of a smooth function f(x) is defined by (for small ε)
\[ f'(x) = \frac{f(x + \varepsilon) - f(x)}{\varepsilon} + O(\varepsilon). \]
This can be rearranged to find f(x+ε):
\[ f(x + \varepsilon) = f(x) + \varepsilon f'(x) + O(\varepsilon^2) = \left(1 + \varepsilon \frac{d}{dx}\right) f(x) + O(\varepsilon^2). \]
It follows that \( 1 + \varepsilon \frac{d}{dx} \) is a translation operator. This is instantly generalized [9] to multivariable functions f(x):
\[ f(\mathbf x + \boldsymbol\varepsilon) = (1 + \boldsymbol\varepsilon \cdot \nabla) f(\mathbf x) + O(\varepsilon^2). \]
Here \( \boldsymbol\varepsilon \cdot \nabla \) is the directional derivative along the infinitesimal displacement ε. We have found the infinitesimal version of the translation operator:
\[ U(\boldsymbol\varepsilon) = 1 + \boldsymbol\varepsilon \cdot \nabla. \]
It is evident that the group multiplication law [10] U(g)U(f) = U(gf) takes the form
\[ U(\mathbf a) U(\mathbf b) = U(\mathbf a + \mathbf b). \]
So suppose that we take the finite displacement λ and divide it into N parts (N → ∞ is implied everywhere), so that λ/N = ε. In other words,
\[ \boldsymbol\lambda = N \boldsymbol\varepsilon. \]
Then by applying U(ε) N times, we can construct U(λ):
\[ U(\boldsymbol\lambda) = U(\boldsymbol\varepsilon)^N. \]
We can now plug in our above expression for U(ε):
\[ U(\boldsymbol\lambda) = \left(1 + \frac{\boldsymbol\lambda \cdot \nabla}{N}\right)^N. \]
Using the identity [11]
\[ \lim_{N \to \infty} \left(1 + \frac{a}{N}\right)^N = e^a, \]
we have
\[ U(\boldsymbol\lambda) = \exp(\boldsymbol\lambda \cdot \nabla). \]
And since U(ε)f(x) = f(x+ε), we have
\[ U(\boldsymbol\lambda) f(\mathbf x) = f(\mathbf x + \boldsymbol\lambda). \]
Q.E.D.
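The exponentiated-derivative translation operator can be demonstrated exactly on polynomials, for which the Taylor series \( \sum_k \frac{\lambda^k}{k!} f^{(k)}(x) \) terminates. A Python sketch (the polynomial, point, and displacement are illustrative assumptions):

```python
from math import factorial

# Apply exp(lam * d/dx) as a terminating Taylor series to a polynomial
# and verify that it reproduces f(x + lam).
coeffs = [1.0, -2.0, 0.0, 3.0]   # f(x) = 1 - 2x + 3x^3, ascending powers

def poly_eval(c, x):
    return sum(ck * x**k for k, ck in enumerate(c))

def poly_deriv(c):
    return [k * c[k] for k in range(1, len(c))]

def translate(c, x, lam):
    """Sum lam^k / k! times the k-th derivative of the polynomial at x."""
    total, d = 0.0, c
    for k in range(len(c)):
        total += lam**k / factorial(k) * poly_eval(d, x)
        d = poly_deriv(d)
    return total

x, lam = 1.5, 0.7
shifted = translate(coeffs, x, lam)   # exp(lam d/dx) f, series terminates
direct = poly_eval(coeffs, x + lam)   # f(x + lam) evaluated directly
```

Because the polynomial has degree 3, the series stops after four terms and the two values agree to floating-point precision, illustrating that the exponentiated directional derivative is a translation.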
As a technical note, this procedure is only possible because the translation group forms an Abelian subgroup (Cartan subalgebra) in the Poincaré algebra. In particular, the group multiplication law U(a)U(b) = U(a+b) should not be taken for granted. We also note that Poincaré is a connected Lie group. It is a group of transformations T(ξ) that are described by a continuous set of real parameters \( \xi^a \). The group multiplication law takes the form
\[ T(\bar\xi) T(\xi) = T(f(\bar\xi, \xi)). \]
Taking \( \xi^a = 0 \) as the coordinates of the identity, we must have
\[ f^a(\xi, 0) = f^a(0, \xi) = \xi^a. \]
The actual operators on the Hilbert space are represented by unitary operators U(T(ξ)). In the above notation we suppressed the T; we now write U(λ) as U(P(λ)). For a small neighborhood around the identity, the power series representation
\[ U(T(\xi)) = 1 + i \sum_a \xi^a t_a + \frac{1}{2} \sum_{b,c} \xi^b \xi^c t_{bc} + \cdots \]
is quite good. Suppose that U(T(ξ)) form a non-projective representation, i.e.,
\[ U(T(\bar\xi)) U(T(\xi)) = U(T(f(\bar\xi, \xi))). \]
The expansion of f to second power is
\[ f^a(\bar\xi, \xi) = \xi^a + \bar\xi^a + \sum_{b,c} f^a{}_{bc} \, \bar\xi^b \xi^c. \]
After expanding the representation multiplication equation and equating coefficients, we have the nontrivial condition
\[ t_{bc} = -t_b t_c - i \sum_a f^a{}_{bc} \, t_a. \]
Since \( t_{bc} \) is by definition symmetric in its indices, we have the standard Lie algebra commutator:
\[ [t_b, t_c] = i \sum_a \left( -f^a{}_{bc} + f^a{}_{cb} \right) t_a = i \sum_a C^a{}_{bc} \, t_a, \]
with C the structure constant. The generators for translations are partial derivative operators, which commute:
\[ [\partial_\mu, \partial_\nu] = 0. \]
This implies that the structure constants vanish and thus the quadratic coefficients in the f expansion vanish as well. This means that f is simply additive:
\[ f^a_{\text{abelian}}(\bar\xi, \xi) = \xi^a + \bar\xi^a, \]
and thus for abelian groups,
\[ U(T(\bar\xi)) U(T(\xi)) = U(T(\bar\xi + \xi)). \]
Q.E.D.
The rotation operator also contains a directional derivative. The rotation operator for an angle θ, i.e. by an amount θ = |θ| about an axis parallel to \( \hat\theta = \boldsymbol\theta / \theta \), is
\[ U(R(\boldsymbol\theta)) = \exp(-i \boldsymbol\theta \cdot \mathbf L). \]
Here L is the vector operator that generates SO(3); in the position basis it acts as
\[ \mathbf L = -i \, \mathbf x \times \nabla. \]
It may be shown geometrically that an infinitesimal right-handed rotation changes the position vector x by
\[ \mathbf x \to \mathbf x - \delta\boldsymbol\theta \times \mathbf x. \]
So we would expect under infinitesimal rotation:
\[ U(R(\delta\boldsymbol\theta)) f(\mathbf x) = f(\mathbf x - \delta\boldsymbol\theta \times \mathbf x) = f(\mathbf x) - (\delta\boldsymbol\theta \times \mathbf x) \cdot \nabla f(\mathbf x). \]
It follows that
\[ U(R(\delta\boldsymbol\theta)) = 1 - (\delta\boldsymbol\theta \times \mathbf x) \cdot \nabla. \]
Following the same exponentiation procedure as above, we arrive at the rotation operator in the position basis, which is an exponentiated directional derivative: [12]
\[ U(R(\boldsymbol\theta)) = \exp(-(\boldsymbol\theta \times \mathbf x) \cdot \nabla). \]
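The first-order rotation formula f(x − δθ × x) ≈ f(x) − (δθ × x)·∇f(x) can be checked numerically. A Python sketch (the scalar field, the point, and the tiny rotation about the z-axis are illustrative assumptions):

```python
# Verify that a small rotation of the argument of f agrees with the
# first-order directional-derivative expansion.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def f(p):
    x, y, z = p
    return x * y + z**2          # illustrative scalar field

def grad_f(p):
    x, y, z = p
    return (y, x, 2 * z)

p = (1.0, 2.0, 0.5)
dtheta = (0.0, 0.0, 1e-5)        # tiny rotation about the z-axis
dx = cross(p, dtheta)            # equals -dtheta x p, the argument shift
rotated = f(tuple(pi + di for pi, di in zip(p, dx)))
first_order = f(p) + sum(d * g for d, g in zip(dx, grad_f(p)))
```

The discrepancy is second order in |δθ|, so for |δθ| = 1e-5 the two sides agree to about 1e-10.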
A normal derivative is a directional derivative taken in the direction normal (that is, orthogonal) to some surface in space, or more generally along a normal vector field orthogonal to some hypersurface. See for example Neumann boundary condition. If the normal direction is denoted by \( \mathbf n \), then the normal derivative of a function f is sometimes denoted as \( \frac{\partial f}{\partial \mathbf n} \). In other notations,
\[ \frac{\partial f}{\partial \mathbf n} = \nabla f(\mathbf x) \cdot \mathbf n = \nabla_{\mathbf n} f(\mathbf x) = \frac{\partial f}{\partial \mathbf x} \cdot \mathbf n = Df(\mathbf x)[\mathbf n]. \]
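As a worked example of a normal derivative (a minimal Python sketch; the field f = |x|² and the unit sphere are illustrative assumptions): on the unit sphere the outward normal at a point p is p itself, and for f = x² + y² + z² the normal derivative ∇f·n equals 2 everywhere on the sphere.

```python
# Normal derivative on the unit sphere: df/dn = grad(f) . n with n = x/|x|.
def grad_f(p):
    x, y, z = p
    return (2 * x, 2 * y, 2 * z)   # gradient of f = x^2 + y^2 + z^2

p = (0.6, 0.0, 0.8)                # a point on the unit sphere
n = p                              # outward unit normal of the sphere at p
normal_derivative = sum(g * ni for g, ni in zip(grad_f(p), n))
```

Since f = r², moving one unit of distance radially outward increases f at rate 2r, which is 2 on the unit sphere, matching the computed value.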
Several important results in continuum mechanics require the derivatives of vectors with respect to vectors and of tensors with respect to vectors and tensors. [13] The directional derivative provides a systematic way of finding these derivatives.
The definitions of directional derivatives for various situations are given below. It is assumed that the functions are sufficiently smooth that derivatives can be taken.
Let f(v) be a real-valued function of the vector v. Then the derivative of f(v) with respect to v (or at v) is the vector \( \frac{\partial f}{\partial \mathbf v} \) defined through its dot product with any vector u being
\[ \frac{\partial f}{\partial \mathbf v} \cdot \mathbf u = Df(\mathbf v)[\mathbf u] = \left[ \frac{d}{d\alpha} f(\mathbf v + \alpha \mathbf u) \right]_{\alpha = 0} \]
for all vectors u. The above dot product yields a scalar, and if u is a unit vector it gives the directional derivative of f at v, in the u direction.
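This definition can be verified numerically for a concrete choice. A Python sketch (f(v) = v·v, whose derivative is the vector 2v, is an illustrative assumption):

```python
# Check  (df/dv) . u = d/dalpha f(v + alpha u) at alpha = 0  for f(v) = v . v.
def f(v):
    return sum(vi * vi for vi in v)        # f(v) = v . v, so df/dv = 2v

def dir_deriv(f, v, u, h=1e-6):
    """Central difference for d/dalpha f(v + alpha*u) at alpha = 0."""
    fp = f([vi + h * ui for vi, ui in zip(v, u)])
    fm = f([vi - h * ui for vi, ui in zip(v, u)])
    return (fp - fm) / (2 * h)

v = [1.0, 2.0, 3.0]
u = [0.5, -1.0, 2.0]
numeric = dir_deriv(f, v, u)
exact = sum(2 * vi * ui for vi, ui in zip(v, u))   # (2v) . u
```

For a quadratic f the central difference is exact up to rounding, so the two values coincide.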
Properties:
1. If \( f(\mathbf v) = f_1(\mathbf v) + f_2(\mathbf v) \), then \( \frac{\partial f}{\partial \mathbf v} \cdot \mathbf u = \left( \frac{\partial f_1}{\partial \mathbf v} + \frac{\partial f_2}{\partial \mathbf v} \right) \cdot \mathbf u \)
2. If \( f(\mathbf v) = f_1(\mathbf v) \, f_2(\mathbf v) \), then \( \frac{\partial f}{\partial \mathbf v} \cdot \mathbf u = \left( \frac{\partial f_1}{\partial \mathbf v} \cdot \mathbf u \right) f_2(\mathbf v) + f_1(\mathbf v) \left( \frac{\partial f_2}{\partial \mathbf v} \cdot \mathbf u \right) \)
3. If \( f(\mathbf v) = f_1(f_2(\mathbf v)) \), then \( \frac{\partial f}{\partial \mathbf v} \cdot \mathbf u = \frac{\partial f_1}{\partial f_2} \, \frac{\partial f_2}{\partial \mathbf v} \cdot \mathbf u \)
Let f(v) be a vector-valued function of the vector v. Then the derivative of f(v) with respect to v (or at v) is the second-order tensor \( \frac{\partial \mathbf f}{\partial \mathbf v} \) defined through its dot product with any vector u being
\[ \frac{\partial \mathbf f}{\partial \mathbf v} \cdot \mathbf u = D\mathbf f(\mathbf v)[\mathbf u] = \left[ \frac{d}{d\alpha} \mathbf f(\mathbf v + \alpha \mathbf u) \right]_{\alpha = 0} \]
for all vectors u. The above dot product yields a vector, and if u is a unit vector it gives the directional derivative of f at v, in the direction u.
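A concrete check for a vector-valued function: for f(v) = (v·v)v the derivative is the second-order tensor 2 v⊗v + (v·v)I, so its action on u is 2(v·u)v + (v·v)u. A Python sketch (the function and test vectors are illustrative assumptions):

```python
# Compare (df/dv) . u computed from the known tensor 2 v(x)v + (v.v)I
# against a finite-difference directional derivative, for f(v) = (v.v) v.
def f(v):
    s = sum(vi * vi for vi in v)
    return [s * vi for vi in v]            # f(v) = (v . v) v

def tensor_dot_u(v, u):
    """(df/dv) . u = 2 (v . u) v + (v . v) u."""
    s = sum(vi * vi for vi in v)
    vu = sum(vi * ui for vi, ui in zip(v, u))
    return [2 * vi * vu + s * ui for vi, ui in zip(v, u)]

def dir_deriv(f, v, u, h=1e-6):
    fp = f([vi + h * ui for vi, ui in zip(v, u)])
    fm = f([vi - h * ui for vi, ui in zip(v, u)])
    return [(a - b) / (2 * h) for a, b in zip(fp, fm)]

v = [1.0, 2.0, 0.5]
u = [0.0, 1.0, -1.0]
numeric = dir_deriv(f, v, u)
exact = tensor_dot_u(v, u)
```

Each component of the finite-difference result matches the tensor contraction to high precision.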
Properties:
1. If \( \mathbf f(\mathbf v) = \mathbf f_1(\mathbf v) + \mathbf f_2(\mathbf v) \), then \( \frac{\partial \mathbf f}{\partial \mathbf v} \cdot \mathbf u = \left( \frac{\partial \mathbf f_1}{\partial \mathbf v} + \frac{\partial \mathbf f_2}{\partial \mathbf v} \right) \cdot \mathbf u \)
2. If \( \mathbf f(\mathbf v) = \mathbf f_1(\mathbf v) \times \mathbf f_2(\mathbf v) \), then \( \frac{\partial \mathbf f}{\partial \mathbf v} \cdot \mathbf u = \left( \frac{\partial \mathbf f_1}{\partial \mathbf v} \cdot \mathbf u \right) \times \mathbf f_2(\mathbf v) + \mathbf f_1(\mathbf v) \times \left( \frac{\partial \mathbf f_2}{\partial \mathbf v} \cdot \mathbf u \right) \)
3. If \( \mathbf f(\mathbf v) = \mathbf f_1(\mathbf f_2(\mathbf v)) \), then \( \frac{\partial \mathbf f}{\partial \mathbf v} \cdot \mathbf u = \frac{\partial \mathbf f_1}{\partial \mathbf f_2} \cdot \left( \frac{\partial \mathbf f_2}{\partial \mathbf v} \cdot \mathbf u \right) \)
Let \( f(\boldsymbol S) \) be a real-valued function of the second-order tensor \( \boldsymbol S \). Then the derivative of \( f(\boldsymbol S) \) with respect to \( \boldsymbol S \) (or at \( \boldsymbol S \)) in the direction \( \boldsymbol T \) is the second-order tensor defined as
\[ \frac{\partial f}{\partial \boldsymbol S} : \boldsymbol T = Df(\boldsymbol S)[\boldsymbol T] = \left[ \frac{d}{d\alpha} f(\boldsymbol S + \alpha \boldsymbol T) \right]_{\alpha = 0} \]
for all second-order tensors \( \boldsymbol T \).
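As a concrete instance (a NumPy sketch; the choice f(S) = tr(S S), whose derivative is 2Sᵀ, is an illustrative assumption): the double contraction (∂f/∂S) : T should equal the derivative of f(S + αT) at α = 0.

```python
import numpy as np

# For f(S) = tr(S S), the derivative is df/dS = 2 S^T, so
# (df/dS) : T = 2 tr(S T). Check against a central difference in alpha.
rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

def f(S):
    return np.trace(S @ S)

h = 1e-6
numeric = (f(S + h * T) - f(S - h * T)) / (2 * h)   # d/dalpha at alpha = 0
exact = np.sum(2 * S.T * T)                          # (2 S^T) : T = 2 tr(S T)
```

Since f is quadratic in α, the central difference is exact apart from rounding, and the two values agree.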
Properties:
1. If \( f(\boldsymbol S) = f_1(\boldsymbol S) + f_2(\boldsymbol S) \), then \( \frac{\partial f}{\partial \boldsymbol S} : \boldsymbol T = \left( \frac{\partial f_1}{\partial \boldsymbol S} + \frac{\partial f_2}{\partial \boldsymbol S} \right) : \boldsymbol T \)
2. If \( f(\boldsymbol S) = f_1(\boldsymbol S) \, f_2(\boldsymbol S) \), then \( \frac{\partial f}{\partial \boldsymbol S} : \boldsymbol T = \left( \frac{\partial f_1}{\partial \boldsymbol S} : \boldsymbol T \right) f_2(\boldsymbol S) + f_1(\boldsymbol S) \left( \frac{\partial f_2}{\partial \boldsymbol S} : \boldsymbol T \right) \)
3. If \( f(\boldsymbol S) = f_1(f_2(\boldsymbol S)) \), then \( \frac{\partial f}{\partial \boldsymbol S} : \boldsymbol T = \frac{\partial f_1}{\partial f_2} \left( \frac{\partial f_2}{\partial \boldsymbol S} : \boldsymbol T \right) \)
Let \( \boldsymbol F(\boldsymbol S) \) be a second-order tensor-valued function of the second-order tensor \( \boldsymbol S \). Then the derivative of \( \boldsymbol F(\boldsymbol S) \) with respect to \( \boldsymbol S \) (or at \( \boldsymbol S \)) in the direction \( \boldsymbol T \) is the fourth-order tensor defined as
\[ \frac{\partial \boldsymbol F}{\partial \boldsymbol S} : \boldsymbol T = D\boldsymbol F(\boldsymbol S)[\boldsymbol T] = \left[ \frac{d}{d\alpha} \boldsymbol F(\boldsymbol S + \alpha \boldsymbol T) \right]_{\alpha = 0} \]
for all second-order tensors \( \boldsymbol T \).
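A concrete instance of the fourth-order case (a NumPy sketch; the choice F(S) = S S is an illustrative assumption): contracting the fourth-order derivative of F(S) = S S with T gives S T + T S, which can be checked against a finite difference in α.

```python
import numpy as np

# For F(S) = S S, the fourth-order derivative contracted with T is
# (dF/dS) : T = S T + T S. Check with a central difference in alpha.
rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

def F(S):
    return S @ S

h = 1e-6
numeric = (F(S + h * T) - F(S - h * T)) / (2 * h)   # d/dalpha at alpha = 0
exact = S @ T + T @ S
```

Because F(S + αT) is quadratic in α, the central difference recovers S T + T S exactly up to rounding, component by component.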
Properties:
1. If \( \boldsymbol F(\boldsymbol S) = \boldsymbol F_1(\boldsymbol S) + \boldsymbol F_2(\boldsymbol S) \), then \( \frac{\partial \boldsymbol F}{\partial \boldsymbol S} : \boldsymbol T = \left( \frac{\partial \boldsymbol F_1}{\partial \boldsymbol S} + \frac{\partial \boldsymbol F_2}{\partial \boldsymbol S} \right) : \boldsymbol T \)
2. If \( \boldsymbol F(\boldsymbol S) = \boldsymbol F_1(\boldsymbol S) \cdot \boldsymbol F_2(\boldsymbol S) \), then \( \frac{\partial \boldsymbol F}{\partial \boldsymbol S} : \boldsymbol T = \left( \frac{\partial \boldsymbol F_1}{\partial \boldsymbol S} : \boldsymbol T \right) \cdot \boldsymbol F_2(\boldsymbol S) + \boldsymbol F_1(\boldsymbol S) \cdot \left( \frac{\partial \boldsymbol F_2}{\partial \boldsymbol S} : \boldsymbol T \right) \)
3. If \( \boldsymbol F(\boldsymbol S) = \boldsymbol F_1(\boldsymbol F_2(\boldsymbol S)) \), then \( \frac{\partial \boldsymbol F}{\partial \boldsymbol S} : \boldsymbol T = \frac{\partial \boldsymbol F_1}{\partial \boldsymbol F_2} : \left( \frac{\partial \boldsymbol F_2}{\partial \boldsymbol S} : \boldsymbol T \right) \)
See also
- Dirac delta function
- Navier–Stokes equations
- Fokker–Planck equation
- Laplace operator
- Infinitesimal strain theory
- Hooke's law
- Green's theorem
- Functional derivative
- Bloch's theorem
- Newtonian fluid
- Canonical transformation
- Green's identities
- Hamilton–Jacobi equation
- Cartesian tensor
- Finite strain theory
- Maxwell stress tensor
- Mathematical descriptions of the electromagnetic field
- Tensor derivative (continuum mechanics)
- Compatibility (mechanics)
- Rock mass plasticity
Media related to Directional derivative at Wikimedia Commons