Gradient

The gradient, represented by the blue arrows, denotes the direction of greatest change of a scalar function. The values of the function are represented in greyscale and increase in value from white (low) to dark (high).

In vector calculus, the gradient of a scalar-valued differentiable function f of several variables is the vector field (or vector-valued function) $\nabla f$ whose value at a point $p$ is the vector [lower-alpha 1] whose components are the partial derivatives of $f$ at $p$. [1] [2] [3] [4] [5] [6] [7] [8] [9] That is, for $f \colon \mathbb{R}^n \to \mathbb{R}$, its gradient $\nabla f \colon \mathbb{R}^n \to \mathbb{R}^n$ is defined at the point $p = (x_1, \ldots, x_n)$ in n-dimensional space as the vector: [lower-alpha 2]

$$\nabla f(p) = \begin{bmatrix} \dfrac{\partial f}{\partial x_1}(p) \\ \vdots \\ \dfrac{\partial f}{\partial x_n}(p) \end{bmatrix}.$$


The nabla symbol $\nabla$, written as an upside-down triangle and pronounced "del", denotes the vector differential operator.

The gradient is dual to the total derivative $df$: the value of the gradient at a point is a tangent vector – a vector at each point; while the value of the derivative at a point is a cotangent vector – a linear function on vectors. [lower-alpha 3] They are related in that the dot product of the gradient of f at a point p with another tangent vector v equals the directional derivative of f at p along v; that is, $\nabla f(p) \cdot \mathbf{v} = \tfrac{\partial f}{\partial \mathbf{v}}(p) = df_p(\mathbf{v})$.

The gradient vector can be interpreted as the "direction and rate of fastest increase". If the gradient of a function is non-zero at a point p, the direction of the gradient is the direction in which the function increases most quickly from p, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative. [10] [11] [12] [13] [14] [15] [16] Further, the gradient is the zero vector at a point if and only if it is a stationary point (where the derivative vanishes). The gradient thus plays a fundamental role in optimization theory, where it is used to maximize a function by gradient ascent.
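As an illustration, the following is a minimal sketch of gradient ascent in Python (assuming NumPy; the function, step size, and names such as gradient_ascent are chosen for illustration only): starting from an arbitrary point, repeatedly stepping in the direction of the gradient climbs toward a local maximum.

```python
import numpy as np

def grad_f(p):
    # Gradient of the concave function f(x, y) = 4 - x**2 - y**2, maximized at (0, 0).
    x, y = p
    return np.array([-2.0 * x, -2.0 * y])

def gradient_ascent(grad, p0, step=0.1, iters=100):
    # Follow the direction of fastest increase until (approximately) stationary.
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        p = p + step * grad(p)
    return p

print(gradient_ascent(grad_f, [3.0, -2.0]))  # approaches the maximizer (0, 0)
```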

The gradient admits multiple generalizations to more general functions on manifolds; see § Generalizations.

Motivation

Gradient of the 2D function $f(x, y) = xe^{-(x^2 + y^2)}$ is plotted as blue arrows over the pseudocolor plot of the function.

Consider a room where the temperature is given by a scalar field, T, so at each point (x, y, z) the temperature is T(x, y, z), independent of time. At each point in the room, the gradient of T at that point will show the direction in which the temperature rises most quickly, moving away from (x, y, z). The magnitude of the gradient will determine how fast the temperature rises in that direction.

Consider a surface whose height above sea level at point (x, y) is H(x, y). The gradient of H at a point is a plane vector pointing in the direction of the steepest slope or grade at that point. The steepness of the slope at that point is given by the magnitude of the gradient vector.

The gradient can also be used to measure how a scalar field changes in other directions, rather than just the direction of greatest change, by taking a dot product. Suppose that the steepest slope on a hill is 40%. A road going directly uphill has slope 40%, but a road going around the hill at an angle will have a shallower slope. For example, if the road is at a 60° angle from the uphill direction (when both directions are projected onto the horizontal plane), then the slope along the road will be the dot product between the gradient vector and a unit vector along the road, namely 40% times the cosine of 60°, or 20%.

More generally, if the hill height function H is differentiable, then the gradient of H dotted with a unit vector gives the slope of the hill in the direction of the vector, the directional derivative of H along the unit vector.
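The road example above can be checked numerically with this dot product; a small sketch (assuming NumPy; the uphill direction is taken along the x-axis purely for convenience):

```python
import numpy as np

grad_H = np.array([0.4, 0.0])      # steepest slope of 40%, uphill along +x
theta = np.radians(60.0)           # road at 60 degrees from the uphill direction
road = np.array([np.cos(theta), np.sin(theta)])  # unit vector along the road

print(np.dot(grad_H, road))        # 0.4 * cos(60 deg) = 0.2, i.e. a 20% grade
```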

Notation

The gradient of a function $f$ at point $p$ is usually written as $\nabla f(p)$. It may also be denoted by any of the following:

$\vec{\nabla} f(p)$ : to emphasize the vector nature of the result.
grad f : the operator name spelled out.
$\partial_i f$ and $f_i$ : Einstein notation.

Definition

The gradient of the function $f(x, y) = -(\cos x + \cos y)$ depicted as a projected vector field on the bottom plane.

The gradient (or gradient vector field) of a scalar function $f(x_1, x_2, x_3, \ldots, x_n)$ is denoted $\nabla f$ or $\vec{\nabla} f$, where $\nabla$ (nabla) denotes the vector differential operator, del. The notation grad f is also commonly used to represent the gradient. The gradient of f is defined as the unique vector field whose dot product with any vector v at each point x is the directional derivative of f along v. That is,

$$\big(\nabla f(x)\big) \cdot \mathbf{v} = D_{\mathbf{v}} f(x).$$

Formally, the gradient is dual to the derivative; see § Gradient and the derivative or differential.

When a function also depends on a parameter such as time, the gradient often refers simply to the vector of its spatial derivatives only (see Spatial gradient).

The magnitude and direction of the gradient vector are independent of the particular coordinate representation. [17] [18]

Cartesian coordinates

In the three-dimensional Cartesian coordinate system with a Euclidean metric, the gradient, if it exists, is given by:

$$\nabla f = \frac{\partial f}{\partial x}\mathbf{i} + \frac{\partial f}{\partial y}\mathbf{j} + \frac{\partial f}{\partial z}\mathbf{k},$$

where i, j, k are the standard unit vectors in the directions of the x, y and z coordinates, respectively. For example, the gradient of the function

$$f(x, y, z) = 2x + 3y^2 - \sin(z)$$

is

$$\nabla f(x, y, z) = 2\mathbf{i} + 6y\,\mathbf{j} - \cos(z)\,\mathbf{k}.$$

In some applications it is customary to represent the gradient as a row vector or column vector of its components in a rectangular coordinate system; this article follows the convention of the gradient being a column vector, while the derivative is a row vector.
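The example above can also be verified numerically; the sketch below (assuming NumPy; numerical_gradient is an illustrative helper, not a library routine) compares the analytic gradient with central finite differences.

```python
import numpy as np

def f(p):
    x, y, z = p
    return 2*x + 3*y**2 - np.sin(z)

def grad_f(p):
    # Analytic gradient from the example: (2, 6y, -cos z).
    x, y, z = p
    return np.array([2.0, 6.0*y, -np.cos(z)])

def numerical_gradient(func, p, h=1e-6):
    # Central-difference approximation of each partial derivative.
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (func(p + e) - func(p - e)) / (2.0 * h)
    return g

p = np.array([1.0, 2.0, 3.0])
print(grad_f(p))
print(numerical_gradient(f, p))  # agrees to roughly 1e-9
```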

Cylindrical and spherical coordinates

In cylindrical coordinates with a Euclidean metric, the gradient is given by: [19]

$$\nabla f(\rho, \varphi, z) = \frac{\partial f}{\partial \rho}\mathbf{e}_\rho + \frac{1}{\rho}\frac{\partial f}{\partial \varphi}\mathbf{e}_\varphi + \frac{\partial f}{\partial z}\mathbf{e}_z,$$

where ρ is the axial distance, φ is the azimuthal or azimuth angle, z is the axial coordinate, and eρ, eφ and ez are unit vectors pointing along the coordinate directions.

In spherical coordinates, the gradient is given by: [19]

$$\nabla f(r, \theta, \varphi) = \frac{\partial f}{\partial r}\mathbf{e}_r + \frac{1}{r}\frac{\partial f}{\partial \theta}\mathbf{e}_\theta + \frac{1}{r \sin\theta}\frac{\partial f}{\partial \varphi}\mathbf{e}_\varphi,$$

where r is the radial distance, φ is the azimuthal angle and θ is the polar angle, and er, eθ and eφ are again local unit vectors pointing in the coordinate directions (that is, the normalized covariant basis).
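As a quick worked example under this convention, for the purely radial function $f(r, \theta, \varphi) = r^2$ only the first term survives:

$$\nabla f = \frac{\partial}{\partial r}\left(r^2\right)\mathbf{e}_r = 2r\,\mathbf{e}_r,$$

so the gradient points radially outward, with magnitude growing linearly in r.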

For the gradient in other orthogonal coordinate systems, see Orthogonal coordinates (Differential operators in three dimensions).

General coordinates

We consider general coordinates, which we write as $x^1, \ldots, x^i, \ldots, x^n$, where n is the number of dimensions of the domain. Here, the upper index refers to the position in the list of the coordinate or component, so $x^2$ refers to the second component, not the quantity x squared. The index variable i refers to an arbitrary element $x^i$. Using Einstein notation, the gradient can then be written as:

$$\nabla f = \frac{\partial f}{\partial x^i} g^{ij} \mathbf{e}_j$$

(note that its dual is $\mathrm{d}f = \frac{\partial f}{\partial x^i} \mathbf{e}^i$),

where $\mathbf{e}_i$ and $\mathbf{e}^i$ refer to the unnormalized local covariant and contravariant bases respectively, $g^{ij}$ is the inverse metric tensor, and the Einstein summation convention implies summation over i and j.

If the coordinates are orthogonal we can easily express the gradient (and the differential) in terms of the normalized bases, which we refer to as $\hat{\mathbf{e}}_i$ and $\hat{\mathbf{e}}^i$, using the scale factors (also known as Lamé coefficients) $h_i = \lVert \mathbf{e}_i \rVert = \sqrt{g_{ii}} = 1 / \lVert \mathbf{e}^i \rVert$:

$$\nabla f = \sum_{i=1}^{n} \frac{\partial f}{\partial x^i} \frac{1}{h_i} \hat{\mathbf{e}}_i$$

(and $\mathrm{d}f = \sum_{i=1}^{n} \frac{\partial f}{\partial x^i} \frac{1}{h_i} \hat{\mathbf{e}}^i$),

where we cannot use Einstein notation, since it is impossible to avoid the repetition of more than two indices. Despite the use of upper and lower indices, $\hat{\mathbf{e}}_i$, $\hat{\mathbf{e}}^i$, and $h_i$ are neither contravariant nor covariant.

The latter expression evaluates to the expressions given above for cylindrical and spherical coordinates.
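For instance, in cylindrical coordinates $(\rho, \varphi, z)$ the scale factors are $h_\rho = 1$, $h_\varphi = \rho$, and $h_z = 1$, so the orthogonal-coordinates formula gives

$$\nabla f = \frac{\partial f}{\partial \rho}\hat{\mathbf{e}}_\rho + \frac{1}{\rho}\frac{\partial f}{\partial \varphi}\hat{\mathbf{e}}_\varphi + \frac{\partial f}{\partial z}\hat{\mathbf{e}}_z,$$

recovering the cylindrical expression quoted above.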

Gradient and the derivative or differential

The gradient is closely related to the (total) derivative ((total) differential) $df$: they are transpose (dual) to each other. Using the convention that vectors in $\mathbb{R}^n$ are represented by column vectors, and that covectors (linear maps $\mathbb{R}^n \to \mathbb{R}$) are represented by row vectors, [lower-alpha 1] the gradient $\nabla f$ and the derivative $df$ are expressed as a column and row vector, respectively, with the same components, but transpose of each other:

$$\nabla f(p) = \begin{bmatrix} \dfrac{\partial f}{\partial x_1}(p) \\ \vdots \\ \dfrac{\partial f}{\partial x_n}(p) \end{bmatrix}; \qquad df_p = \begin{bmatrix} \dfrac{\partial f}{\partial x_1}(p) & \cdots & \dfrac{\partial f}{\partial x_n}(p) \end{bmatrix}.$$

While these both have the same components, they differ in what kind of mathematical object they represent: at each point, the derivative is a cotangent vector, a linear form (covector) which expresses how much the (scalar) output changes for a given infinitesimal change in (vector) input, while at each point, the gradient is a tangent vector, which represents an infinitesimal change in (vector) input. In symbols, the gradient is an element of the tangent space at a point, $\nabla f(p) \in T_p \mathbb{R}^n$, while the derivative is a map from the tangent space to the real numbers, $df_p \colon T_p \mathbb{R}^n \to \mathbb{R}$. The tangent spaces at each point of $\mathbb{R}^n$ can be "naturally" identified [lower-alpha 4] with the vector space $\mathbb{R}^n$ itself, and similarly the cotangent space at each point can be naturally identified with the dual vector space $(\mathbb{R}^n)^*$ of covectors; thus the value of the gradient at a point can be thought of as a vector in the original $\mathbb{R}^n$, not just as a tangent vector.

Computationally, given a tangent vector, the vector can be multiplied by the derivative (as matrices), which is equal to taking the dot product with the gradient:

$$(df_p)(v) = \begin{bmatrix} \dfrac{\partial f}{\partial x_1}(p) & \cdots & \dfrac{\partial f}{\partial x_n}(p) \end{bmatrix} \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} = \sum_{i=1}^{n} \dfrac{\partial f}{\partial x_i}(p)\, v_i = \nabla f(p) \cdot v.$$

Differential or (exterior) derivative

The best linear approximation to a differentiable function

$$f \colon \mathbb{R}^n \to \mathbb{R}$$

at a point x in $\mathbb{R}^n$ is a linear map from $\mathbb{R}^n$ to $\mathbb{R}$ which is often denoted by $df_x$ or $Df(x)$ and called the differential or (total) derivative of f at x. The function $df$, which maps x to $df_x$, is called the (total) differential or exterior derivative of f and is an example of a differential 1-form.

Much as the derivative of a function of a single variable represents the slope of the tangent to the graph of the function, [20] the directional derivative of a function in several variables represents the slope of the tangent hyperplane in the direction of the vector.

The gradient is related to the differential by the formula

$$(\nabla f)_x \cdot v = df_x(v)$$

for any $v \in \mathbb{R}^n$, where $\cdot$ is the dot product: taking the dot product of a vector with the gradient is the same as taking the directional derivative along the vector.

If $\mathbb{R}^n$ is viewed as the space of (dimension n) column vectors (of real numbers), then one can regard $df$ as the row vector with components

$$\left( \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n} \right),$$

so that $df_x(v)$ is given by matrix multiplication. Assuming the standard Euclidean metric on $\mathbb{R}^n$, the gradient is then the corresponding column vector, that is,

$$\nabla f = (df)^{\mathsf{T}}.$$

Linear approximation to a function

The best linear approximation to a function can be expressed in terms of the gradient, rather than the derivative. The gradient of a function f from the Euclidean space $\mathbb{R}^n$ to $\mathbb{R}$ at any particular point $x_0$ in $\mathbb{R}^n$ characterizes the best linear approximation to f at $x_0$. The approximation is as follows:

$$f(x) \approx f(x_0) + (\nabla f)_{x_0} \cdot (x - x_0)$$

for x close to $x_0$, where $(\nabla f)_{x_0}$ is the gradient of f computed at $x_0$, and the dot denotes the dot product on $\mathbb{R}^n$. This equation is equivalent to the first two terms in the multivariable Taylor series expansion of f at $x_0$.
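A small numerical sketch of this approximation (assuming NumPy; the function is chosen only for illustration):

```python
import numpy as np

def f(p):
    x, y = p
    return np.sin(x) * np.exp(y)

def grad_f(p):
    x, y = p
    return np.array([np.cos(x) * np.exp(y), np.sin(x) * np.exp(y)])

x0 = np.array([1.0, 0.5])
x = x0 + np.array([0.01, -0.02])             # a point close to x0

linear = f(x0) + np.dot(grad_f(x0), x - x0)  # first-order Taylor approximation
print(f(x), linear)                          # agree to second order in |x - x0|
```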

Gradient as a "derivative"

Let U be an open set in $\mathbb{R}^n$. If the function $f \colon U \to \mathbb{R}$ is differentiable, then the differential of f is the (Fréchet) derivative of f. Thus $\nabla f$ is a function from U to the space $\mathbb{R}^n$ such that

$$\lim_{h \to 0} \frac{\big|f(x+h) - f(x) - \nabla f(x) \cdot h\big|}{\lVert h \rVert} = 0,$$

where · is the dot product.

As a consequence, the usual properties of the derivative hold for the gradient, though the gradient is not a derivative itself, but rather dual to the derivative:

Linearity

The gradient is linear in the sense that if f and g are two real-valued functions differentiable at the point $a \in \mathbb{R}^n$, and α and β are two constants, then $\alpha f + \beta g$ is differentiable at a, and moreover

$$\nabla\left(\alpha f + \beta g\right)(a) = \alpha \nabla f(a) + \beta \nabla g(a).$$

Product rule

If f and g are real-valued functions differentiable at a point $a \in \mathbb{R}^n$, then the product rule asserts that the product fg is differentiable at a, and

$$\nabla (fg)(a) = f(a) \nabla g(a) + g(a) \nabla f(a).$$

Chain rule

Suppose that $f \colon A \to \mathbb{R}$ is a real-valued function defined on a subset A of $\mathbb{R}^n$, and that f is differentiable at a point a. There are two forms of the chain rule applying to the gradient. First, suppose that the function g is a parametric curve; that is, a function $g \colon I \to \mathbb{R}^n$ maps a subset $I \subset \mathbb{R}$ into $\mathbb{R}^n$. If g is differentiable at a point $c \in I$ such that $g(c) = a$, then

$$(f \circ g)'(c) = \nabla f(a) \cdot g'(c),$$

where ∘ is the composition operator: $(f \circ g)(x) = f(g(x))$.

More generally, if instead $I \subset \mathbb{R}^k$, then the following holds:

$$\nabla (f \circ g)(c) = \big(Dg(c)\big)^{\mathsf{T}} \big(\nabla f(a)\big),$$

where $(Dg)^{\mathsf{T}}$ denotes the transpose Jacobian matrix.

For the second form of the chain rule, suppose that $h \colon I \to \mathbb{R}$ is a real-valued function on a subset I of $\mathbb{R}$, and that h is differentiable at the point $f(a) \in I$. Then

$$\nabla (h \circ f)(a) = h'\big(f(a)\big)\, \nabla f(a).$$
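Both forms can be checked numerically; the sketch below (assuming NumPy; the functions are illustrative choices) verifies the first, curve form of the chain rule against a central-difference derivative.

```python
import numpy as np

def f(p):                       # f : R^2 -> R
    x, y = p
    return x**2 * y

def grad_f(p):
    x, y = p
    return np.array([2*x*y, x**2])

def g(t):                       # parametric curve g : R -> R^2
    return np.array([np.cos(t), np.sin(t)])

def g_prime(t):
    return np.array([-np.sin(t), np.cos(t)])

t = 0.7
chain = np.dot(grad_f(g(t)), g_prime(t))        # (f o g)'(t) via the chain rule
h = 1e-6
numeric = (f(g(t + h)) - f(g(t - h))) / (2*h)   # direct central difference
print(chain, numeric)                           # agree to roughly 1e-9
```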

Further properties and applications

Level sets

A level surface, or isosurface, is the set of all points where some function has a given value.

If f is differentiable, then the dot product $(\nabla f)_x \cdot v$ of the gradient at a point x with a vector v gives the directional derivative of f at x in the direction v. It follows that in this case the gradient of f is orthogonal to the level sets of f. For example, a level surface in three-dimensional space is defined by an equation of the form $F(x, y, z) = c$. The gradient of F is then normal to the surface.

More generally, any embedded hypersurface in a Riemannian manifold can be cut out by an equation of the form F(P) = 0 such that dF is nowhere zero. The gradient of F is then normal to the hypersurface.

Similarly, an affine algebraic hypersurface may be defined by an equation F(x1, ..., xn) = 0, where F is a polynomial. The gradient of F is zero at a singular point of the hypersurface (this is the definition of a singular point). At a non-singular point, it is a nonzero normal vector.
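A quick numerical illustration of this orthogonality (assuming NumPy; the unit sphere is used as the level surface F = 0):

```python
import numpy as np

def grad_F(p):
    # F(x, y, z) = x**2 + y**2 + z**2 - 1; its zero set is the unit sphere.
    return 2.0 * np.asarray(p, dtype=float)

p = np.array([0.6, 0.0, 0.8])    # a point on the sphere (0.36 + 0.64 = 1)
v = np.array([0.8, 0.0, -0.6])   # a direction tangent to the sphere at p

print(np.dot(grad_F(p), v))      # 0.0: the gradient is normal to the level set
```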

Conservative vector fields and the gradient theorem

The gradient of a function is called a gradient field. A (continuous) gradient field is always a conservative vector field: its line integral along any path depends only on the endpoints of the path, and can be evaluated by the gradient theorem (the fundamental theorem of calculus for line integrals). Conversely, a (continuous) conservative vector field is always the gradient of a function.
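The gradient theorem can be demonstrated numerically; a sketch (assuming NumPy; the path and the potential f are arbitrary illustrative choices) integrates ∇f along a curve and compares the result with the difference of endpoint values.

```python
import numpy as np

def f(p):
    x, y = p
    return x**2 + x*y

def grad_f(p):
    x, y = p
    return np.array([2*x + y, x])

a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])
r = lambda t: a + (b - a) * t**3      # a path from a to b (nonlinear parametrization)
r_prime = lambda t: 3 * t**2 * (b - a)

ts = np.linspace(0.0, 1.0, 10001)
vals = np.array([np.dot(grad_f(r(t)), r_prime(t)) for t in ts])
line_integral = np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(ts))  # trapezoid rule

print(line_integral, f(b) - f(a))  # both approximately 3.0, independent of the path
```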

Generalizations

Jacobian

The Jacobian matrix is the generalization of the gradient for vector-valued functions of several variables and differentiable maps between Euclidean spaces or, more generally, manifolds. [21] [22] A further generalization for a function between Banach spaces is the Fréchet derivative.

Suppose $f \colon \mathbb{R}^n \to \mathbb{R}^m$ is a function such that each of its first-order partial derivatives exists on $\mathbb{R}^n$. Then the Jacobian matrix of f is defined to be an m×n matrix, denoted by $\mathbf{J}_{\mathbf{f}}$ or simply $\mathbf{J}$. The (i, j)th entry is $\mathbf{J}_{ij} = \dfrac{\partial f_i}{\partial x_j}$. Explicitly,

$$\mathbf{J} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{bmatrix}.$$
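As an illustration, the sketch below (assuming NumPy; jacobian is an illustrative helper, not a library routine) builds the m × n Jacobian of a sample map by central differences, one column per input variable.

```python
import numpy as np

def f(p):
    # A sample map f : R^2 -> R^3; row i of the Jacobian is the gradient of f_i.
    x, y = p
    return np.array([x*y, x + y, np.sin(x)])

def jacobian(func, p, h=1e-6):
    # J[i, j] ~ d f_i / d x_j by central differences.
    p = np.asarray(p, dtype=float)
    m, n = len(func(p)), len(p)
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (func(p + e) - func(p - e)) / (2.0 * h)
    return J

print(jacobian(f, [1.0, 2.0]))
# approximately [[2, 1], [1, 1], [cos(1), 0]]
```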

Gradient of a vector field

Since the total derivative of a vector field is a linear mapping from vectors to vectors, it is a tensor quantity.

In rectangular coordinates, the gradient of a vector field $\mathbf{f} = (f^1, f^2, f^3)$ is defined by:

$$\nabla \mathbf{f} = g^{jk} \frac{\partial f^i}{\partial x^j} \mathbf{e}_i \otimes \mathbf{e}_k$$

(where the Einstein summation notation is used and the tensor product of the vectors $\mathbf{e}_i$ and $\mathbf{e}_k$ is a dyadic tensor of type (2,0)). Overall, this expression equals the transpose of the Jacobian matrix:

$$\frac{\partial f^i}{\partial x^j} = \frac{\partial (f^1, f^2, f^3)}{\partial (x^1, x^2, x^3)}.$$

In curvilinear coordinates, or more generally on a curved manifold, the gradient involves Christoffel symbols:

$$\nabla \mathbf{f} = g^{jk} \left( \frac{\partial f^i}{\partial x^j} + \Gamma^i_{jl} f^l \right) \mathbf{e}_i \otimes \mathbf{e}_k,$$

where $g^{jk}$ are the components of the inverse metric tensor and the $\mathbf{e}_i$ are the coordinate basis vectors.

Expressed more invariantly, the gradient of a vector field f can be defined by the Levi-Civita connection and metric tensor: [23]

$$\nabla^a f^b = g^{ac} \nabla_c f^b,$$

where $\nabla_c$ is the connection.

Riemannian manifolds

For any smooth function f on a Riemannian manifold (M, g), the gradient of f is the vector field $\nabla f$ such that for any vector field X,

$$g(\nabla f, X) = \partial_X f,$$

that is,

$$g_x\big((\nabla f)_x, X_x\big) = (\partial_X f)(x),$$

where $g_x(\cdot, \cdot)$ denotes the inner product of tangent vectors at x defined by the metric g and $\partial_X f$ is the function that takes any point $x \in M$ to the directional derivative of f in the direction X, evaluated at x. In other words, in a coordinate chart φ from an open subset of M to an open subset of $\mathbb{R}^n$, $(\partial_X f)(x)$ is given by:

$$\sum_{j} X^{j}\big(\varphi(x)\big) \frac{\partial}{\partial x_{j}}\big(f \circ \varphi^{-1}\big) \Big|_{\varphi(x)},$$

where Xj denotes the jth component of X in this coordinate chart.

So, the local form of the gradient takes the form:

$$\nabla f = g^{ik} \frac{\partial f}{\partial x^{k}} \mathbf{e}_{i}.$$
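As a worked instance of this local formula, take polar coordinates (r, θ) on the plane, where the metric is $g = \mathrm{diag}(1, r^2)$ and hence $g^{ik} = \mathrm{diag}(1, 1/r^2)$:

$$\nabla f = \frac{\partial f}{\partial r}\,\mathbf{e}_r + \frac{1}{r^2}\frac{\partial f}{\partial \theta}\,\mathbf{e}_\theta$$

in the unnormalized coordinate basis; since $\mathbf{e}_\theta$ has length r, rewriting in the normalized basis recovers the familiar $\frac{1}{r}\frac{\partial f}{\partial \theta}\,\hat{\mathbf{e}}_\theta$ term.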

Generalizing the case $M = \mathbb{R}^n$, the gradient of a function is related to its exterior derivative, since

$$(\partial_X f)(x) = (df)_x(X_x).$$

More precisely, the gradient $\nabla f$ is the vector field associated to the differential 1-form df using the musical isomorphism

$$\sharp = \sharp^{g} \colon T^{*}M \to TM$$

(called "sharp") defined by the metric g. The relation between the exterior derivative and the gradient of a function on $\mathbb{R}^n$ is a special case of this in which the metric is the flat metric given by the dot product.


Notes

  1. This article uses the convention that column vectors represent vectors, and row vectors represent covectors, but the opposite convention is also common.
  2. Strictly speaking, the gradient is a vector field $\nabla f \colon \mathbb{R}^n \to T\mathbb{R}^n$, and the value of the gradient at a point is a tangent vector in the tangent space at that point, $T_p \mathbb{R}^n$, not a vector in the original space $\mathbb{R}^n$. However, all the tangent spaces can be naturally identified with the original space $\mathbb{R}^n$, so these do not need to be distinguished; see § Definition and § Gradient and the derivative or differential.
  3. The value of the gradient at a point can be thought of as a vector in the original space $\mathbb{R}^n$, while the value of the derivative at a point can be thought of as a covector on the original space: a linear map $\mathbb{R}^n \to \mathbb{R}$.
  4. Informally, "naturally" identified means that this can be done without making any arbitrary choices. This can be formalized with a natural transformation.


References
