In mathematics, the derivative is a fundamental construction of differential calculus and admits many possible generalizations within the fields of mathematical analysis, combinatorics, algebra, geometry, etc.
The Fréchet derivative defines the derivative for general normed vector spaces $V, W$. Briefly, a function $f : U \to W$, where $U$ is an open subset of $V$, is called Fréchet differentiable at $x \in U$ if there exists a bounded linear operator $A : V \to W$ such that

$$\lim_{h \to 0} \frac{\lVert f(x+h) - f(x) - Ah \rVert_W}{\lVert h \rVert_V} = 0.$$
Functions are defined as being differentiable in some open neighbourhood of $x$, rather than at individual points, as not doing so tends to lead to many pathological counterexamples.
The Fréchet derivative is quite similar to the formula for the derivative found in elementary one-variable calculus,

$$\lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = A,$$

and simply moves $A$ to the left-hand side. However, the Fréchet derivative $A$ denotes the function $t \mapsto At$.
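As a concrete illustration (a standard exercise, not drawn from the text above), consider the squared norm on a real Hilbert space $H$:

$$f(x) = \lVert x \rVert^2, \qquad f(x+h) - f(x) = 2\langle x, h \rangle + \lVert h \rVert^2,$$

so $Ah = 2\langle x, h \rangle$ is a bounded linear operator and the remainder satisfies $\lVert h \rVert^2 / \lVert h \rVert \to 0$; hence $f$ is Fréchet differentiable everywhere with $Df(x)h = 2\langle x, h \rangle$.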
In multivariable calculus, in the context of differential equations defined by a vector-valued function from $\mathbb{R}^n$ to $\mathbb{R}^m$, the Fréchet derivative $A$ is a linear operator from $\mathbb{R}^n$ to $\mathbb{R}^m$, and corresponds to the best linear approximation of the function. If such an operator exists, then it is unique, and can be represented by an $m \times n$ matrix known as the Jacobian matrix $J_x(f)$ of the mapping $f$ at the point $x$. Each entry of this matrix represents a partial derivative, specifying the rate of change of one range coordinate with respect to a change in a domain coordinate. The Jacobian matrix of the composition $g \circ f$ is the product of the corresponding Jacobian matrices: $J_x(g \circ f) = J_{f(x)}(g) \, J_x(f)$. This is a higher-dimensional statement of the chain rule.
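The chain rule for Jacobians can be checked numerically. The following sketch (the helper `jacobian` and the particular choices of `f` and `g` are illustrations invented here, not part of the text) approximates each Jacobian by forward differences:

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Approximate the Jacobian matrix J_x(f) by forward differences."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x), dtype=float)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps
        J[:, j] = (np.asarray(f(x + step), dtype=float) - fx) / eps
    return J

# f: R^2 -> R^2 and g: R^2 -> R, arbitrary smooth examples
f = lambda v: np.array([v[0] * v[1], np.sin(v[0])])
g = lambda v: np.array([v[0] + v[1] ** 2])

x = np.array([1.0, 2.0])
lhs = jacobian(lambda v: g(f(v)), x)       # J_x(g o f)
rhs = jacobian(g, f(x)) @ jacobian(f, x)   # J_{f(x)}(g) J_x(f)
print(np.allclose(lhs, rhs, atol=1e-4))    # True, up to finite-difference error
```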
For real-valued functions from $\mathbb{R}^n$ to $\mathbb{R}$ (scalar fields), the Fréchet derivative corresponds to the total derivative. This can be identified with the gradient vector field, but it is often more natural to view it as the exterior derivative.
The convective derivative takes into account changes due to time dependence and motion through space along a vector field, and is a special case of the total derivative.
For vector-valued functions from $\mathbb{R}$ to $\mathbb{R}^n$ (i.e., parametric curves), the Fréchet derivative corresponds to taking the derivative of each component separately; the resulting derivative can again be viewed as a vector. For example, if the vector-valued function is the position vector of a particle through time, then the derivative is the velocity vector of the particle.
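For instance, for the helix below (a standard textbook curve, chosen here purely for illustration), differentiating componentwise gives the velocity directly:

$$\mathbf{r}(t) = (\cos t, \, \sin t, \, t), \qquad \mathbf{r}'(t) = (-\sin t, \, \cos t, \, 1).$$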
In complex analysis, the central objects of study are holomorphic functions, which are complex-valued functions on the complex numbers where the Fréchet derivative exists.
In geometric calculus, the geometric derivative satisfies a weaker form of the Leibniz (product) rule. It specializes the Fréchet derivative to the objects of geometric algebra. Geometric calculus is a powerful formalism that has been shown to encompass the similar frameworks of differential forms and differential geometry. [1]
On the exterior algebra of differential forms over a smooth manifold, the exterior derivative is the unique linear map which satisfies a graded version of the Leibniz law and squares to zero. It is a grade 1 derivation on the exterior algebra. In $\mathbb{R}^3$, the gradient, curl, and divergence are special cases of the exterior derivative. An intuitive interpretation of the gradient is that it points "up": in other words, it points in the direction of fastest increase of the function. It can be used to calculate directional derivatives of scalar functions or normal directions. Divergence measures how much of a "source" or "sink" there is near a point, and can be used to calculate flux via the divergence theorem. Curl measures how much "rotation" a vector field has near a point.
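Concretely, on $\mathbb{R}^3$ with the Euclidean metric and standard orientation, the three operators arise from $d$ acting in degrees 0, 1, and 2. Stated in the usual musical-isomorphism and Hodge-star notation (which the article has not introduced, so this is a supplementary sketch):

$$\operatorname{grad} f = (df)^\sharp, \qquad \operatorname{curl} F = \left( \star \, d \, F^\flat \right)^\sharp, \qquad \operatorname{div} F = \star \, d \star F^\flat,$$

and the identity $d \circ d = 0$ recovers $\operatorname{curl} \operatorname{grad} f = 0$ and $\operatorname{div} \operatorname{curl} F = 0$.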
The Lie derivative is the rate of change of a vector or tensor field along the flow of another vector field. On vector fields, it is an example of a Lie bracket (vector fields form the Lie algebra of the diffeomorphism group of the manifold). It is a grade 0 derivation on the algebra.
Together with the interior product (a degree -1 derivation on the exterior algebra defined by contraction with a vector field), the exterior derivative and the Lie derivative form a Lie superalgebra.
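The bracket structure on these derivations is made explicit by Cartan's magic formula (a standard identity, stated here for reference), which expresses the Lie derivative (degree 0) as the superbracket of the exterior derivative (degree +1) and the interior product (degree -1):

$$\mathcal{L}_X = d \circ \iota_X + \iota_X \circ d.$$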
In differential topology, a vector field may be defined as a derivation on the ring of smooth functions on a manifold, and a tangent vector may be defined as a derivation at a point. This allows the abstraction of the notion of a directional derivative of a scalar function to general manifolds. For manifolds that are subsets of $\mathbb{R}^n$, this tangent vector will agree with the directional derivative.
The differential or pushforward of a map between manifolds is the induced map between the tangent spaces of those manifolds. It abstracts the Jacobian matrix.
In differential geometry, the covariant derivative makes a choice for taking directional derivatives of vector fields along curves. This extends the directional derivative of scalar functions to sections of vector bundles or principal bundles. In Riemannian geometry, the existence of a metric chooses a unique preferred torsion-free covariant derivative, known as the Levi-Civita connection. See also gauge covariant derivative for a treatment oriented to physics.
The exterior covariant derivative extends the exterior derivative to vector valued forms.
Given a function $u : \Omega \to \mathbb{R}$, with $\Omega \subset \mathbb{R}^n$ open, which is locally integrable but not necessarily classically differentiable, a weak derivative may be defined by means of integration by parts. First define test functions, which are infinitely differentiable and compactly supported functions $\varphi \in C_c^\infty(\Omega)$, and multi-indices, which are length-$n$ lists of non-negative integers $\alpha = (\alpha_1, \ldots, \alpha_n)$ with $|\alpha| := \sum_i \alpha_i$. Applied to test functions, $D^\alpha \varphi := \frac{\partial^{|\alpha|} \varphi}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}$. Then the weak derivative of $u$ exists if there is a function $v$ such that for all test functions $\varphi$, we have

$$\int_\Omega u \, D^\alpha \varphi \, dx = (-1)^{|\alpha|} \int_\Omega v \, \varphi \, dx.$$
If such a function exists, then $D^\alpha u := v$, which is unique almost everywhere. This definition coincides with the classical derivative for functions $u \in C^{|\alpha|}(\Omega)$, and can be extended to a type of generalized functions called distributions, the dual space of test functions. Weak derivatives are particularly useful in the study of partial differential equations, and within parts of functional analysis.
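The standard first example (a known fact, though not spelled out in the text above) is the absolute value on $\mathbb{R}$, which is not classically differentiable at $0$ but has the sign function as its weak derivative:

$$\int_{\mathbb{R}} |x| \, \varphi'(x) \, dx = -\int_{\mathbb{R}} \operatorname{sgn}(x) \, \varphi(x) \, dx \quad \text{for all } \varphi \in C_c^\infty(\mathbb{R}),$$

as one checks by integrating by parts separately on $(-\infty, 0]$ and $[0, \infty)$.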
In the real numbers one can iterate the differentiation process, that is, apply derivatives more than once, obtaining derivatives of second and higher order. Higher derivatives can also be defined for functions of several variables, studied in multivariable calculus. In this case, instead of repeatedly applying the derivative, one repeatedly applies partial derivatives with respect to different variables. For example, the second order partial derivatives of a scalar function of n variables can be organized into an n by n matrix, the Hessian matrix. One of the subtle points is that the higher derivatives are not intrinsically defined, and depend on the choice of the coordinates in a complicated fashion (in particular, the Hessian matrix of a function is not a tensor). Nevertheless, higher derivatives have important applications to analysis of local extrema of a function at its critical points. For an advanced application of this analysis to topology of manifolds, see Morse theory.
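As a quick illustration (the particular scalar field below is an arbitrary choice, not from the text), SymPy can assemble the matrix of second partials directly:

```python
import sympy as sp

x, y = sp.symbols("x y")
f = x**3 + x * y**2            # an arbitrary scalar field on R^2
H = sp.hessian(f, (x, y))      # matrix of second-order partial derivatives
print(H)                       # Matrix([[6*x, 2*y], [2*y, 2*x]])
```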
In addition to $n$th derivatives for any natural number $n$, there are various ways to define derivatives of fractional or negative orders, which are studied in fractional calculus. The derivative of order −1 corresponds to the integral, whence the term differintegral.
In quaternionic analysis, derivatives can be defined in a similar way to real and complex functions. Since the quaternions $\mathbb{H}$ are not commutative, the limit of the difference quotient yields two different derivatives: a left derivative

$$\lim_{h \to 0} h^{-1} \left( f(x+h) - f(x) \right)$$

and a right derivative

$$\lim_{h \to 0} \left( f(x+h) - f(x) \right) h^{-1}.$$
The existence of these limits is a very restrictive condition. For example, if $f$ has a left derivative at every point of an open connected set $U \subset \mathbb{H}$, then $f(x) = a + xb$ for some $a, b \in \mathbb{H}$.
In algebra, generalizations of the derivative can be obtained by imposing the Leibniz rule of differentiation in an algebraic structure, such as a ring or a Lie algebra.
A derivation is a linear map on a ring or algebra which satisfies the Leibniz law (the product rule). Higher derivatives and algebraic differential operators can also be defined. They are studied in a purely algebraic setting in differential Galois theory and the theory of D-modules, but also turn up in many other areas, where they often agree with less algebraic definitions of derivatives.
For example, the formal derivative of a polynomial over a commutative ring $R$ is defined by

$$\left( \sum_{i=0}^n a_i X^i \right)' = \sum_{i=1}^n i \, a_i X^{i-1}.$$

The mapping $P \mapsto P'$ is then a derivation on the polynomial ring $R[X]$. This definition can be extended to rational functions as well.
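A minimal sketch of this definition on coefficient lists (the representation and helper names are choices made here, not part of the text): the formal derivative is pure symbol-shuffling with no limits involved, yet the Leibniz law still holds.

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (p[i] is the coefficient of X**i)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def formal_derivative(p):
    """(sum a_i X^i)' = sum i * a_i X^(i-1), defined without any limits."""
    return [i * a for i, a in enumerate(p)][1:]

# Leibniz law (PQ)' = P'Q + PQ' on sample coefficient lists over the integers
P, Q = [1, 2, 3], [4, 5]          # 1 + 2X + 3X^2  and  4 + 5X
lhs = formal_derivative(poly_mul(P, Q))
rhs = [a + b for a, b in zip(poly_mul(formal_derivative(P), Q),
                             poly_mul(P, formal_derivative(Q)))]
print(lhs == rhs)                  # True
```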
The notion of derivation applies to noncommutative as well as commutative rings, and even to non-associative algebraic structures, such as Lie algebras.
In type theory, many abstract data types can be described as the algebra generated by a transformation that maps structures based on the type back into the type. For example, the type T of binary trees containing values of type A can be represented as the algebra generated by the transformation 1 + A × T² → T. The "1" represents the construction of an empty tree, and the second term represents the construction of a tree from a value and two subtrees. The "+" indicates that a tree can be constructed either way.
The derivative of such a type is the type that describes the context of a particular substructure with respect to its next outer containing structure. Put another way, it is the type representing the "difference" between the two. In the tree example, the derivative is a type that describes the information needed, given a particular subtree, to construct its parent tree. This information is a tuple that contains a binary indicator of whether the child is on the left or right, the value at the parent, and the sibling subtree. This type can be represented as 2×A×T, which looks very much like the derivative of the transformation that generated the tree type.
This concept of a derivative of a type has practical applications, such as the zipper technique used in functional programming languages.
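A small sketch of this idea in code (the class and function names are invented for illustration): each step down a binary tree of type T = 1 + A × T² records one context layer of type 2 × A × T, and a list of such layers is exactly the data a zipper needs to rebuild the parent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tree:
    value: object
    left: Optional["Tree"] = None
    right: Optional["Tree"] = None

@dataclass
class Crumb:                 # one "2 x A x T" layer of context
    side: str                # "L" or "R": which child we descended into (the "2")
    value: object            # the parent's value (the "A")
    sibling: Optional[Tree]  # the subtree we did not enter (the "T")

# A zipper is a focused subtree plus the stack of crumbs back to the root.
def go_left(focus, crumbs):
    return focus.left, crumbs + [Crumb("L", focus.value, focus.right)]

def go_up(focus, crumbs):
    crumb = crumbs[-1]
    if crumb.side == "L":
        parent = Tree(crumb.value, focus, crumb.sibling)
    else:
        parent = Tree(crumb.value, crumb.sibling, focus)
    return parent, crumbs[:-1]
```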
A differential operator combines several derivatives, possibly of different orders, in one algebraic expression. This is especially useful in considering ordinary linear differential equations with constant coefficients. For example, if $f(x)$ is a twice differentiable function of one variable, the differential equation $f''(x) + 2f'(x) - 3f(x) = 4x - 1$ may be rewritten in the form $Lf = 4x - 1$, where

$$L = \frac{d^2}{dx^2} + 2\frac{d}{dx} - 3$$
is a second order linear constant coefficient differential operator acting on functions of $x$. The key idea here is that we consider a particular linear combination of zeroth, first and second order derivatives "all at once". This allows us to think of the set of solutions of this differential equation as a "generalized antiderivative" of its right-hand side $4x - 1$, by analogy with ordinary integration, and formally write

$$f(x) = L^{-1}(4x - 1).$$
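The equation above can be solved symbolically; the following is a sketch using SymPy, with the operator $L$ exactly as written above (the closed-form solution in the comment is what SymPy returns for this right-hand side):

```python
import sympy as sp

x = sp.symbols("x")
f = sp.Function("f")

# L f = f'' + 2 f' - 3 f, applied to the right-hand side 4x - 1
ode = sp.Eq(f(x).diff(x, 2) + 2 * f(x).diff(x) - 3 * f(x), 4 * x - 1)
print(sp.dsolve(ode, f(x)))
# f(x) = C1*exp(-3*x) + C2*exp(x) - 4*x/3 - 5/9
```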
Combining derivatives of different variables results in a notion of a partial differential operator. The linear operator which assigns to each function its derivative is an example of a differential operator on a function space. By means of the Fourier transform, pseudo-differential operators can be defined which allow for fractional calculus.
Some of these operators are so important that they have their own names: for example, the Laplace operator (or Laplacian) $\Delta = \nabla \cdot \nabla$, the divergence of the gradient, and the d'Alembertian (or wave operator) $\Box$, its analogue on Minkowski spacetime.
In functional analysis, the functional derivative defines the derivative of a functional on a space of functions with respect to its function argument. This is an extension of the directional derivative to an infinite-dimensional vector space. An important case is the variational derivative in the calculus of variations.
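For instance (a standard calculus-of-variations computation, included here as an illustration), for a functional given by integrating a Lagrangian density, the variational derivative produces the Euler–Lagrange expression:

$$J[f] = \int_a^b L\big(x, f(x), f'(x)\big) \, dx \quad \Longrightarrow \quad \frac{\delta J}{\delta f(x)} = \frac{\partial L}{\partial f} - \frac{d}{dx} \frac{\partial L}{\partial f'},$$

and setting this expression to zero yields the Euler–Lagrange equation.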
The subderivative and subgradient are generalizations of the derivative to convex functions used in convex analysis.
In commutative algebra, Kähler differentials are universal derivations of a commutative ring or module. They can be used to define an analogue of the exterior derivative from differential geometry that applies to arbitrary algebraic varieties, instead of just smooth manifolds.
In p-adic analysis, the usual definition of derivative is not quite strong enough, and one requires strict differentiability instead.
The Gateaux derivative extends the Fréchet derivative to locally convex topological vector spaces. Fréchet differentiability is a strictly stronger condition than Gateaux differentiability, even in finite dimensions. Between the two extremes is the quasi-derivative.
In measure theory, the Radon–Nikodym derivative generalizes the Jacobian, used for changing variables, to measures. It expresses one measure μ in terms of another measure ν (under certain conditions).
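In symbols (the standard defining property, not spelled out above): when $\nu$ is absolutely continuous with respect to $\mu$, the derivative $\frac{d\nu}{d\mu}$ is the density relating the two measures:

$$\nu(A) = \int_A \frac{d\nu}{d\mu} \, d\mu \quad \text{for every measurable set } A.$$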
The H-derivative is a notion of derivative in the study of abstract Wiener spaces and the Malliavin calculus. It is used in the study of stochastic processes.
Laplacians and differential equations using the Laplacian can be defined on fractals. There is no completely satisfactory analog of the first-order derivative or gradient. [3]
The Carlitz derivative is an operation similar to usual differentiation, but with the usual context of real or complex numbers changed to local fields of positive characteristic in the form of formal Laurent series with coefficients in some finite field Fq (it is known that any local field of positive characteristic is isomorphic to a Laurent series field). Along with suitably defined analogs of the exponential function, logarithms, and others, the derivative can be used to develop notions of smoothness, analyticity, integration, and Taylor series, as well as a theory of differential equations. [4]
It may be possible to combine two or more of the above different notions of extension or abstraction of the original derivative. For example, in Finsler geometry, one studies spaces which look locally like Banach spaces. Thus one might want a derivative with some of the features of a functional derivative and the covariant derivative.
Multiplicative calculus replaces addition with multiplication, and hence rather than dealing with the limit of a ratio of differences, it deals with the limit of an exponentiation of ratios. This allows the development of the geometric derivative and bigeometric derivative. Moreover, just like the classical differential operator has a discrete analog, the difference operator, there are also discrete analogs of these multiplicative derivatives.
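In the simplest case (a standard definition from multiplicative calculus, stated here for concreteness), the geometric derivative of a positive function replaces differences by ratios and scaling by exponentiation:

$$f^*(x) = \lim_{h \to 0} \left( \frac{f(x+h)}{f(x)} \right)^{1/h} = e^{f'(x)/f(x)},$$

where the second equality holds whenever the classical derivative $f'(x)$ exists and $f(x) > 0$.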
The derivative is a fundamental tool of calculus that quantifies the sensitivity to change of a function's output with respect to its input. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. The process of finding a derivative is called differentiation.
In vector calculus, the gradient of a scalar-valued differentiable function $f$ of several variables is the vector field $\nabla f$ whose value at a point gives the direction and the rate of fastest increase. The gradient transforms like a vector under change of basis of the space of variables of $f$. If the gradient of a function is non-zero at a point $p$, the direction of the gradient is the direction in which the function increases most quickly from $p$, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative. Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, where it is used to maximize a function by gradient ascent. In coordinate-free terms, the gradient of a function $f(\mathbf{r})$ may be defined by

$$df = \nabla f \cdot d\mathbf{r},$$

where $df$ is the total differential of $f$.
Fractional calculus is a branch of mathematical analysis that studies the several different possibilities of defining real number powers or complex number powers of the differentiation operator $D$, defined by $Df(x) = \frac{d}{dx} f(x)$.
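A standard sanity check (a known worked example, not from the text above): the half-derivative of $x$, applied twice, reproduces the ordinary derivative.

$$D^{1/2} x = \frac{\Gamma(2)}{\Gamma(3/2)} \, x^{1/2} = \frac{2\sqrt{x}}{\sqrt{\pi}}, \qquad D^{1/2}\!\left( \frac{2\sqrt{x}}{\sqrt{\pi}} \right) = 1 = \frac{d}{dx} x.$$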
In calculus, the product rule is a formula used to find the derivatives of products of two or more functions. For two functions, it may be stated in Lagrange's notation as

$$(u \cdot v)' = u' \cdot v + u \cdot v'.$$
In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function.
In mathematics and classical mechanics, the Poisson bracket is an important binary operation in Hamiltonian mechanics, playing a central role in Hamilton's equations of motion, which govern the time evolution of a Hamiltonian dynamical system. The Poisson bracket also distinguishes a certain class of coordinate transformations, called canonical transformations, which map canonical coordinate systems into canonical coordinate systems. A "canonical coordinate system" consists of canonical position and momentum variables that satisfy canonical Poisson bracket relations. The set of possible canonical transformations is always very rich. For instance, it is often possible to choose the Hamiltonian itself as one of the new canonical momentum coordinates.
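In canonical coordinates $(q_i, p_i)$ (the standard expression, added here for reference), the bracket of two functions on phase space reads:

$$\{f, g\} = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial q_i} \frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i} \frac{\partial g}{\partial q_i} \right),$$

so that Hamilton's equations take the compact form $\dot{q}_i = \{q_i, H\}$ and $\dot{p}_i = \{p_i, H\}$.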
In mathematics, the Hodge star operator or Hodge star is a linear map defined on the exterior algebra of a finite-dimensional oriented vector space endowed with a nondegenerate symmetric bilinear form. Applying the operator to an element of the algebra produces the Hodge dual of the element. This map was introduced by W. V. D. Hodge.
In mathematics, the covariant derivative is a way of specifying a derivative along tangent vectors of a manifold. Alternatively, the covariant derivative is a way of introducing and working with a connection on a manifold by means of a differential operator, to be contrasted with the approach given by a principal connection on the frame bundle – see affine connection. In the special case of a manifold isometrically embedded into a higher-dimensional Euclidean space, the covariant derivative can be viewed as the orthogonal projection of the Euclidean directional derivative onto the manifold's tangent space. In this case the Euclidean derivative is broken into two parts, the extrinsic normal component and the intrinsic covariant derivative component.
A directional derivative is a concept in multivariable calculus that measures the rate at which a function changes in a particular direction at a given point.
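In the simplest setting (the usual limit definition, supplied here for concreteness), the directional derivative of $f$ at $\mathbf{x}$ along a vector $\mathbf{v}$ is

$$\nabla_{\mathbf{v}} f(\mathbf{x}) = \lim_{h \to 0} \frac{f(\mathbf{x} + h\mathbf{v}) - f(\mathbf{x})}{h},$$

which for differentiable $f$ equals $\nabla f(\mathbf{x}) \cdot \mathbf{v}$.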
Multi-index notation is a mathematical notation that simplifies formulas used in multivariable calculus, partial differential equations and the theory of distributions, by generalising the concept of an integer index to an ordered tuple of indices.
In mathematics, differential refers to several related notions derived from the early days of calculus, put on a rigorous footing, such as infinitesimal differences and the derivatives of functions.
In mathematics, the total derivative of a function f at a point is the best linear approximation near this point of the function with respect to its arguments. Unlike partial derivatives, the total derivative approximates the function with respect to all of its arguments, not just a single one. In many situations, this is the same as considering all partial derivatives simultaneously. The term "total derivative" is primarily used when f is a function of several variables, because when f is a function of a single variable, the total derivative is the same as the ordinary derivative of the function.
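In coordinates (the classical expression, included as an illustration), the total derivative of $f(x_1, \ldots, x_n)$ is encoded in the total differential

$$df = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i} \, dx_i.$$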
In mathematics, a differentiable manifold is a type of manifold that is locally similar enough to a vector space to allow one to apply calculus. Any manifold can be described by a collection of charts (atlas). One may then apply ideas from calculus while working within the individual charts, since each chart lies within a vector space to which the usual rules of calculus apply. If the charts are suitably compatible, then computations done in one chart are valid in any other differentiable chart.
In mathematics, the Gateaux differential or Gateaux derivative is a generalization of the concept of directional derivative in differential calculus. Named after René Gateaux, a French mathematician who died at age 25 in World War I, it is defined for functions between locally convex topological vector spaces such as Banach spaces. Like the Fréchet derivative on a Banach space, the Gateaux differential is often used to formalize the functional derivative commonly used in the calculus of variations and physics.
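Explicitly (the usual limit definition, stated here for reference), the Gateaux differential of $F$ at $u$ in the direction $\psi$ is

$$dF(u; \psi) = \lim_{\tau \to 0} \frac{F(u + \tau \psi) - F(u)}{\tau},$$

whenever the limit exists.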
In the mathematical field of analysis, the Nash–Moser theorem, discovered by mathematician John Forbes Nash and named for him and Jürgen Moser, is a generalization of the inverse function theorem on Banach spaces to settings in which the required solution mapping for the linearized problem is not bounded.
In mathematics, the Fréchet derivative is a derivative defined on normed spaces. Named after Maurice Fréchet, it is commonly used to generalize the derivative of a real-valued function of a single real variable to the case of a vector-valued function of multiple real variables, and to define the functional derivative used widely in the calculus of variations.
The gradient theorem, also known as the fundamental theorem of calculus for line integrals, says that a line integral through a gradient field can be evaluated by evaluating the original scalar field at the endpoints of the curve. The theorem is a generalization of the second fundamental theorem of calculus to any curve in a plane or space rather than just the real line.
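In symbols (the standard statement, added for concreteness): for a curve $\gamma$ running from $\mathbf{p}$ to $\mathbf{q}$,

$$\int_{\gamma} \nabla \varphi \cdot d\mathbf{r} = \varphi(\mathbf{q}) - \varphi(\mathbf{p}).$$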
In differential calculus, there is no single uniform notation for differentiation. Instead, various notations for the derivative of a function or variable have been proposed by various mathematicians. The usefulness of each notation varies with the context, and it is sometimes advantageous to use more than one notation in a given context. The most common notations for differentiation are listed below.
In mathematical analysis and its applications, a function of several real variables or real multivariate function is a function with more than one argument, with all arguments being real variables. This concept extends the idea of a function of a real variable to several variables. The "input" variables take real values, while the "output", also called the "value of the function", may be real or complex. However, the study of the complex-valued functions may be easily reduced to the study of the real-valued functions, by considering the real and imaginary parts of the complex function; therefore, unless explicitly specified, only real-valued functions will be considered in this article.
In mathematics, calculus on Euclidean space is a generalization of calculus of functions in one or several variables to calculus of functions on Euclidean space as well as a finite-dimensional real vector space. This calculus is also known as advanced calculus, especially in the United States. It is similar to multivariable calculus but is somewhat more sophisticated in that it uses linear algebra more extensively and covers some concepts from differential geometry such as differential forms and Stokes' formula in terms of differential forms. This extensive use of linear algebra also allows a natural generalization of multivariable calculus to calculus on Banach spaces or topological vector spaces.