In physics, especially in multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometric or physical entities changes with a change of basis. [2] Briefly, a contravariant vector is a list of numbers that transforms oppositely to a change of basis, and a covariant vector is a list of numbers that transforms in the same way. Contravariant vectors are often just called vectors and covariant vectors are called covectors or dual vectors. The terms covariant and contravariant were introduced by James Joseph Sylvester in 1851. [3] [4]
Curvilinear coordinate systems, such as cylindrical or spherical coordinates, are often used in physical and geometric problems. Associated with any coordinate system is a natural choice of coordinate basis for vectors based at each point of the space, and covariance and contravariance are particularly important for understanding how the coordinate description of a vector changes by passing from one coordinate system to another. Tensors are objects in multilinear algebra that can have aspects of both covariance and contravariance.
In physics, a vector typically arises as the outcome of a measurement or series of measurements, and is represented as a list (or tuple) of numbers such as

$(v_1, v_2, v_3).$
The numbers in the list depend on the choice of coordinate system. For instance, if the vector represents position with respect to an observer (position vector), then the coordinate system may be obtained from a system of rigid rods, or reference axes, along which the components v1, v2, and v3 are measured. For a vector to represent a geometric object, it must be possible to describe how it looks in any other coordinate system. That is to say, the components of the vectors will transform in a certain way in passing from one coordinate system to another.
A simple illustrative case is that of a Euclidean vector. For a vector, once a set of basis vectors has been defined, the components of that vector will always vary opposite to those of the basis vectors. That vector is therefore called a contravariant tensor. Take a standard position vector for example. By changing the scale of the reference axes from meters to centimeters (that is, dividing the scale of the reference axes by 100, so that the basis vectors are now $0.01$ meters long), the components of the measured position vector are multiplied by 100. A vector's components change scale inversely to changes in the scale of the reference axes, and consequently a vector is called a contravariant tensor.
A vector, which is an example of a contravariant tensor, has components that transform inversely to the transformation of the reference axes (example transformations include rotations and dilations). The vector itself does not change under these operations; instead, the components of the vector change in a way that cancels the change in the spatial axes. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components of the vector would reduce in an exactly compensating way. Mathematically, if the coordinate system undergoes a transformation described by an invertible matrix M, so that the basis vectors transform according to $\mathbf{e}_i \mapsto M\mathbf{e}_i$, then the components of a vector $v$ in the original basis ($v^i$) must be transformed via $v \mapsto M^{-1}v$. The components of a vector are often represented arranged in a column.
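To make this concrete, here is a minimal numerical sketch (using NumPy; the basis, the matrix M, and all values are illustrative choices, not taken from the text above):

```python
import numpy as np

# Basis vectors are the columns of F; v_comp holds the components of a
# fixed geometric vector v in that basis, so that v = F @ v_comp.
F = np.eye(2)                            # original reference axes
v_comp = np.array([3.0, 4.0])            # components of v in the basis F

M = np.array([[2.0, 0.0],                # invertible change of basis:
              [0.0, 2.0]])               # both axes stretched by a factor 2
F_new = F @ M                            # transformed basis vectors
v_comp_new = np.linalg.inv(M) @ v_comp   # components transform with M^(-1)

# The geometric vector itself is unchanged by the change of basis:
assert np.allclose(F @ v_comp, F_new @ v_comp_new)
print(v_comp_new)                        # [1.5 2. ] -- halved, as expected
```

Stretching the axes by a factor of 2 halves every component, exactly the compensating behavior described above.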
By contrast, a covector has components that transform like the reference axes. It lives in the dual vector space, and represents a linear map from vectors to scalars. The dot product operator involving vectors is a good example of a covector. To illustrate, assume we have a covector defined as $\mathbf{v} \cdot (\,\cdot\,)$, the operation of taking the dot product with a fixed vector $\mathbf{v}$. The components of this covector in some arbitrary basis are $\alpha_i = \mathbf{v} \cdot \mathbf{e}_i$, with $\mathbf{e}_i$ being the basis vectors of the corresponding vector space. (This can be derived by noting that we want to get the correct answer for the dot product operation when multiplying by an arbitrary vector $\mathbf{w}$ with components $w^i$: $\mathbf{v} \cdot \mathbf{w} = \sum_i \alpha_i w^i$.) The covariance of these covector components is then seen by noting that if a transformation described by an invertible matrix M were to be applied to the basis vectors of the corresponding vector space, $\mathbf{e}_i \mapsto M\mathbf{e}_i$, then the components of the covector will transform with the same matrix M, namely $\alpha_i \mapsto \sum_j M_{ji}\,\alpha_j$ (as a row vector, $\alpha \mapsto \alpha M$). The components of a covector are often represented arranged in a row.
A third concept related to covariance and contravariance is invariance. A scalar (also called a type-0 or rank-0 tensor) is an object that does not vary with the change in basis. An example of a physical observable that is a scalar is the mass of a particle. The single, scalar value of mass is independent of changes in the basis vectors and is consequently called invariant. The magnitude of a vector (such as distance) is another example of an invariant, because it remains fixed even if the geometrical vector components vary. (For example, for a position vector of length $3$ meters, if all Cartesian basis vectors are changed from $1$ meter in length to $0.01$ meters in length, the length of the position vector remains unchanged at $3$ meters, although the vector components will all increase by a factor of $100$.) The scalar product of a vector and a covector is invariant, because one has components that vary with the base change, and the other has components that vary oppositely, and the two effects cancel out. One thus says that covectors are dual to vectors.
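The cancellation can also be checked numerically. In the sketch below (NumPy again; the rotation angle and component values are arbitrary illustrations), the covector's row of components is multiplied by M while the vector's column of components is multiplied by the inverse of M, so their pairing is unchanged:

```python
import numpy as np

E = np.eye(2)                         # basis vectors as columns
v = np.array([1.0, 2.0])              # the fixed vector defining the covector

alpha = v @ E                         # covector components: alpha_i = v . e_i

theta = np.pi / 3                     # rotate the reference axes by 60 degrees
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

alpha_new = alpha @ M                 # covariant rule: same matrix M
w = np.array([5.0, -3.0])             # components of an arbitrary vector w
w_new = np.linalg.inv(M) @ w          # contravariant rule: inverse matrix

# The scalar alpha(w) is invariant under the change of basis:
assert np.isclose(alpha @ w, alpha_new @ w_new)
```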
Thus, to summarize:
- A vector (a contravariant vector, or tangent vector) has components that contra-vary with a change of basis to compensate: the matrix that transforms the vector components is the inverse of the matrix that transforms the basis vectors. Its components are written with upper indices, as in $v^i$.
- A covector (a covariant vector, dual vector, or one-form) has components that co-vary with a change of basis: they are transformed by the same matrix as the basis vectors. Its components are written with lower indices, as in $\alpha_i$.
- A scalar is invariant: its value does not change at all under a change of basis.
The general formulation of covariance and contravariance refers to how the components of a coordinate vector transform under a change of basis (passive transformation). Thus let V be a vector space of dimension n over a field of scalars S, and let each of f = (X1, ..., Xn) and f′ = (Y1, ..., Yn) be a basis of V. [note 1] Also, let the change of basis from f to f′ be given by

$f \mapsto f' = \Bigl(\sum_i a^i_1 X_i, \dots, \sum_i a^i_n X_i\Bigr) = fA \qquad (1)$

for some invertible n×n matrix A with entries $a^i_j$. Here, each vector Yj of the f′ basis is a linear combination of the vectors Xi of the f basis, so that

$Y_j = \sum_i a^i_j X_i.$
A vector v in V is expressed uniquely as a linear combination of the elements of the f basis as

$v = \sum_i v^i[f]\,X_i, \qquad (2)$

where $v^i[f]$ are elements of the field S known as the components of v in the f basis. Denote the column vector of components of v by v[f]:

$v[f] = \begin{bmatrix} v^1[f] \\ v^2[f] \\ \vdots \\ v^n[f] \end{bmatrix},$
so that (2) can be rewritten as a matrix product

$v = f\,v[f].$

The vector v may also be expressed in terms of the f′ basis, so that

$v = f'\,v[f'].$

However, since the vector v itself is invariant under the choice of basis,

$f\,v[f] = f'\,v[f'].$

The invariance of v combined with the relationship (1) between f and f′ implies that

$f\,v[f] = fA\,v[fA],$

giving the transformation rule

$v[f'] = v[fA] = A^{-1}\,v[f].$

In terms of components,

$v^i[fA] = \sum_j \tilde{a}^i_j\,v^j[f],$

where the coefficients $\tilde{a}^i_j$ are the entries of the inverse matrix $A^{-1}$ of A.
Because the components of the vector v transform with the inverse of the matrix A, these components are said to transform contravariantly under a change of basis.
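As a quick worked check (with an arbitrarily chosen matrix, not one from the text): take $n = 2$ and

$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad A^{-1} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix},$

so that $Y_1 = X_1$ and $Y_2 = X_1 + X_2$. A vector $v = 2X_1 + 3X_2$ then has new components $v[fA] = A^{-1}(2, 3)^{\mathrm{T}} = (-1, 3)^{\mathrm{T}}$, and indeed $-Y_1 + 3Y_2 = -X_1 + 3(X_1 + X_2) = 2X_1 + 3X_2 = v$.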
The way A relates the two pairs is depicted in the following informal diagram using an arrow. The reversal of the arrow indicates a contravariant change:

$f \xrightarrow{\;A\;} f', \qquad v[f] \xleftarrow{\;A\;} v[f'].$
A linear functional α on V is expressed uniquely in terms of its components (elements in S) in the f basis as

$\alpha(X_i) = \alpha_i[f], \qquad i = 1, 2, \dots, n.$
These components are the action of α on the basis vectors Xi of the f basis.
Under the change of basis from f to f′ (via (1)), the components transform so that

$\alpha_i[fA] = \alpha(Y_i) = \alpha\Bigl(\sum_j a^j_i X_j\Bigr) = \sum_j a^j_i\,\alpha_j[f]. \qquad (3)$
Denote the row vector of components of α by α[f]:

$\alpha[f] = \begin{bmatrix} \alpha_1[f] & \alpha_2[f] & \cdots & \alpha_n[f] \end{bmatrix},$

so that (3) can be rewritten as the matrix product

$\alpha[fA] = \alpha[f]\,A.$
Because the components of the linear functional α transform with the matrix A, these components are said to transform covariantly under a change of basis.
The way A relates the two pairs is depicted in the following informal diagram using an arrow. A covariant relationship is indicated since the arrows travel in the same direction:

$f \xrightarrow{\;A\;} f', \qquad \alpha[f] \xrightarrow{\;A\;} \alpha[fA].$

Had a column vector representation been used instead, the transformation law would be the transpose

$\alpha^{\mathrm{T}}[fA] = A^{\mathrm{T}}\alpha^{\mathrm{T}}[f].$
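Continuing the small worked example from above (the same illustrative matrix A): if $\alpha(X_1) = 4$ and $\alpha(X_2) = 5$, then $\alpha(Y_1) = \alpha(X_1) = 4$ and $\alpha(Y_2) = \alpha(X_1 + X_2) = 9$, which is exactly the row-vector product

$\alpha[fA] = \begin{pmatrix} 4 & 5 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 4 & 9 \end{pmatrix}.$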
The choice of basis f on the vector space V defines uniquely a set of coordinate functions on V, by means of

$x^i(v) = v^i[f].$

The coordinates on V are therefore contravariant in the sense that

$x^i[fA] = \sum_k \tilde{a}^i_k\,x^k[f].$
Conversely, a system of n quantities vi that transform like the coordinates xi on V defines a contravariant vector (or simply vector). A system of n quantities that transform oppositely to the coordinates is then a covariant vector (or covector).
This formulation of contravariance and covariance is often more natural in applications in which there is a coordinate space (a manifold) on which vectors live as tangent vectors or cotangent vectors. Given a local coordinate system xi on the manifold, the reference axes for the coordinate system are the vector fields

$X_1 = \frac{\partial}{\partial x^1}, \quad \dots, \quad X_n = \frac{\partial}{\partial x^n}.$
This gives rise to the frame f = (X1, ..., Xn) at every point of the coordinate patch.
If yi is a different coordinate system and

$Y_1 = \frac{\partial}{\partial y^1}, \quad \dots, \quad Y_n = \frac{\partial}{\partial y^n},$

then the frame f′ is related to the frame f by the inverse of the Jacobian matrix of the coordinate transition:

$f' = f\,J^{-1}, \qquad J = \left(\frac{\partial y^i}{\partial x^j}\right)_{i,j=1}^n.$

Or, in indices,

$\frac{\partial}{\partial y^i} = \sum_j \frac{\partial x^j}{\partial y^i}\,\frac{\partial}{\partial x^j}.$
A tangent vector is by definition a vector that is a linear combination of the coordinate partials $\partial/\partial x^i$. Thus a tangent vector is defined by

$v = \sum_i v^i X_i = f\,v[f].$
Such a vector is contravariant with respect to change of frame. Under a change of the coordinate system, one has

$v[f'] = v\bigl[f\,J^{-1}\bigr] = J\,v[f].$

Therefore, the components of a tangent vector transform via

$v^i[f'] = \sum_j \frac{\partial y^i}{\partial x^j}\,v^j[f].$
Accordingly, a system of n quantities vi depending on the coordinates that transform in this way on passing from one coordinate system to another is called a contravariant vector.
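This transformation rule can be verified numerically. The sketch below (NumPy; polar coordinates are just a convenient illustrative choice) pushes the components of a curve's velocity from polar to Cartesian coordinates with the Jacobian and checks the result with a finite difference:

```python
import numpy as np

def to_cart(r, th):
    # Coordinate transition y(x): polar (r, theta) -> Cartesian (x, y).
    return np.array([r * np.cos(th), r * np.sin(th)])

def jacobian(r, th):
    # J[i][j] = d y^i / d x^j for the transition above.
    return np.array([[np.cos(th), -r * np.sin(th)],
                     [np.sin(th),  r * np.cos(th)]])

r0, th0 = 2.0, np.pi / 6
v_polar = np.array([0.5, 0.3])          # tangent components (dr/dt, dtheta/dt)

v_cart = jacobian(r0, th0) @ v_polar    # contravariant rule: v'^i = J^i_j v^j

# Finite-difference check along the curve t -> (r0 + 0.5 t, th0 + 0.3 t):
h = 1e-6
numeric = (to_cart(r0 + 0.5 * h, th0 + 0.3 * h) - to_cart(r0, th0)) / h
assert np.allclose(v_cart, numeric, atol=1e-5)
```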
In a finite-dimensional vector space V over a field K with a symmetric bilinear form g : V × V → K (which may be referred to as the metric tensor), there is little distinction between covariant and contravariant vectors, because the bilinear form allows covectors to be identified with vectors. That is, a vector v uniquely determines a covector α via

$\alpha(w) = g(v, w)$
for all vectors w. Conversely, each covector α determines a unique vector v by this equation. Because of this identification of vectors with covectors, one may speak of the covariant components or contravariant components of a vector, that is, they are just representations of the same vector using the reciprocal basis.
Given a basis f = (X1, ..., Xn) of V, there is a unique reciprocal basis f# = (Y1, ..., Yn) of V determined by requiring that

$g(Y^i, X_j) = \delta^i_j,$

the Kronecker delta. In terms of these bases, any vector v can be written in two ways:

$v = \sum_i v^i[f]\,X_i = f\,v[f] = \sum_i v_i[f]\,Y^i = f^\sharp\,v^\sharp[f].$
The components $v^i[f]$ are the contravariant components of the vector v in the basis f, and the components $v_i[f]$ are the covariant components of v in the basis f. The terminology is justified because under a change of basis,

$v[fA] = A^{-1}\,v[f], \qquad v^\sharp[fA] = A^{\mathrm{T}}\,v^\sharp[f].$
In the Euclidean plane, the dot product allows for vectors to be identified with covectors. If $\mathbf{e}_1, \mathbf{e}_2$ is a basis, then the dual basis $\mathbf{e}^1, \mathbf{e}^2$ satisfies

$\mathbf{e}^1 \cdot \mathbf{e}_1 = 1, \quad \mathbf{e}^1 \cdot \mathbf{e}_2 = 0, \quad \mathbf{e}^2 \cdot \mathbf{e}_1 = 0, \quad \mathbf{e}^2 \cdot \mathbf{e}_2 = 1.$
Thus, $\mathbf{e}^1$ and $\mathbf{e}_2$ are perpendicular to each other, as are $\mathbf{e}^2$ and $\mathbf{e}_1$, and the lengths of $\mathbf{e}^1$ and $\mathbf{e}^2$ are normalized against $\mathbf{e}_1$ and $\mathbf{e}_2$, respectively.
For example, [5] suppose that we are given a basis $\mathbf{e}_1, \mathbf{e}_2$ consisting of a pair of vectors making a 45° angle with one another, such that $\mathbf{e}_1$ has length 2 and $\mathbf{e}_2$ has length 1. Then the dual basis vectors are given as follows:
- $\mathbf{e}^2$ is the result of rotating $\mathbf{e}_1$ through an angle of 90° (with the sense fixed by taking the pair $\mathbf{e}_1, \mathbf{e}_2$ to be positively oriented), and then rescaling so that $\mathbf{e}^2 \cdot \mathbf{e}_2 = 1$ holds.
- $\mathbf{e}^1$ is the result of rotating $\mathbf{e}_2$ through an angle of −90°, and then rescaling so that $\mathbf{e}^1 \cdot \mathbf{e}_1 = 1$ holds.
Applying these rules, we find

$\mathbf{e}^1 = \frac{1}{2}\,\mathbf{e}_1 - \frac{1}{\sqrt{2}}\,\mathbf{e}_2$

and

$\mathbf{e}^2 = -\frac{1}{\sqrt{2}}\,\mathbf{e}_1 + 2\,\mathbf{e}_2.$
Thus the change of basis matrix in going from the original basis to the reciprocal basis is

$A^{-1} = \begin{bmatrix} \frac{1}{2} & -\frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & 2 \end{bmatrix},$

since

$\begin{bmatrix} \mathbf{e}^1 & \mathbf{e}^2 \end{bmatrix} = \begin{bmatrix} \mathbf{e}_1 & \mathbf{e}_2 \end{bmatrix} \begin{bmatrix} \frac{1}{2} & -\frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & 2 \end{bmatrix}.$
For instance, the vector

$v = \frac{3}{2}\,\mathbf{e}_1 + 2\,\mathbf{e}_2$

is a vector with contravariant components

$v^1 = \frac{3}{2}, \qquad v^2 = 2.$

The covariant components are obtained by equating the two expressions for the vector v:

$v = v_1\,\mathbf{e}^1 + v_2\,\mathbf{e}^2 = v^1\,\mathbf{e}_1 + v^2\,\mathbf{e}_2,$

so

$\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = A \begin{bmatrix} v^1 \\ v^2 \end{bmatrix} = \begin{bmatrix} 4 & \sqrt{2} \\ \sqrt{2} & 1 \end{bmatrix} \begin{bmatrix} \frac{3}{2} \\ 2 \end{bmatrix} = \begin{bmatrix} 6 + 2\sqrt{2} \\ 2 + \frac{3\sqrt{2}}{2} \end{bmatrix}.$
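The numbers in this example are easy to confirm with a short computation (NumPy; the explicit coordinates chosen for $\mathbf{e}_1$ and $\mathbf{e}_2$ are one convenient realization of the stated lengths and angle):

```python
import numpy as np

e1 = np.array([2.0, 0.0])                          # length 2
e2 = np.array([np.cos(np.pi/4), np.sin(np.pi/4)])  # length 1, 45 deg from e1

A = np.array([[e1 @ e1, e1 @ e2],                  # Gram matrix A = (e_i . e_j)
              [e2 @ e1, e2 @ e2]])
A_inv = np.linalg.inv(A)

E = np.vstack([e1, e2])                            # rows: e_1, e_2
E_dual = A_inv @ E                                 # rows: e^1, e^2
assert np.allclose(E_dual @ E.T, np.eye(2))        # e^i . e_j = delta^i_j

v_contra = np.array([1.5, 2.0])                    # v = (3/2) e_1 + 2 e_2
v_co = A @ v_contra                                # covariant components
print(v_co)   # [8.828..., 4.121...] = [6 + 2*sqrt(2), 2 + 3*sqrt(2)/2]
```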
In the three-dimensional Euclidean space, one can also determine explicitly the dual basis to a given set of basis vectors $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ of E3 that are not necessarily assumed to be orthogonal nor of unit norm. The dual basis vectors are:

$\mathbf{e}^1 = \frac{\mathbf{e}_2 \times \mathbf{e}_3}{\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3)}, \qquad \mathbf{e}^2 = \frac{\mathbf{e}_3 \times \mathbf{e}_1}{\mathbf{e}_2 \cdot (\mathbf{e}_3 \times \mathbf{e}_1)}, \qquad \mathbf{e}^3 = \frac{\mathbf{e}_1 \times \mathbf{e}_2}{\mathbf{e}_3 \cdot (\mathbf{e}_1 \times \mathbf{e}_2)}.$

Even when the $\mathbf{e}_i$ and $\mathbf{e}^i$ are not orthonormal, they are still mutually reciprocal:

$\mathbf{e}^i \cdot \mathbf{e}_j = \delta^i_j.$
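A small numerical check of these cross-product formulas (NumPy; the three basis vectors are arbitrary, deliberately non-orthogonal choices):

```python
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])
e3 = np.array([0.0, 1.0, 2.0])

vol = e1 @ np.cross(e2, e3)     # triple product e_1 . (e_2 x e_3)
d1 = np.cross(e2, e3) / vol     # e^1  (the cyclic triple products are equal)
d2 = np.cross(e3, e1) / vol     # e^2
d3 = np.cross(e1, e2) / vol     # e^3

E = np.vstack([e1, e2, e3])
D = np.vstack([d1, d2, d3])
assert np.allclose(D @ E.T, np.eye(3))   # e^i . e_j = delta^i_j
```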
Then the contravariant components of any vector v can be obtained by the dot product of v with the dual basis vectors:

$v^1 = \mathbf{v} \cdot \mathbf{e}^1, \qquad v^2 = \mathbf{v} \cdot \mathbf{e}^2, \qquad v^3 = \mathbf{v} \cdot \mathbf{e}^3.$

Likewise, the covariant components of v can be obtained from the dot product of v with the basis vectors, viz.

$v_1 = \mathbf{v} \cdot \mathbf{e}_1, \qquad v_2 = \mathbf{v} \cdot \mathbf{e}_2, \qquad v_3 = \mathbf{v} \cdot \mathbf{e}_3.$

Then v can be expressed in two (reciprocal) ways, viz.

$\mathbf{v} = v^i\,\mathbf{e}_i = v^1\mathbf{e}_1 + v^2\mathbf{e}_2 + v^3\mathbf{e}_3$

or

$\mathbf{v} = v_i\,\mathbf{e}^i = v_1\mathbf{e}^1 + v_2\mathbf{e}^2 + v_3\mathbf{e}^3.$

Combining the above relations, we have

$\mathbf{v} = (\mathbf{v} \cdot \mathbf{e}^i)\,\mathbf{e}_i = (\mathbf{v} \cdot \mathbf{e}_i)\,\mathbf{e}^i,$

and we can convert between the basis and dual basis with

$v_i = \mathbf{v} \cdot \mathbf{e}_i = (v^j\,\mathbf{e}_j) \cdot \mathbf{e}_i = (\mathbf{e}_j \cdot \mathbf{e}_i)\,v^j$

and

$v^i = \mathbf{v} \cdot \mathbf{e}^i = (v_j\,\mathbf{e}^j) \cdot \mathbf{e}^i = (\mathbf{e}^j \cdot \mathbf{e}^i)\,v_j.$
If the basis vectors are orthonormal, then they are the same as the dual basis vectors.
More generally, in an n-dimensional Euclidean space V, if a basis is

$\mathbf{e}_1, \dots, \mathbf{e}_n,$

the reciprocal basis is given by (double indices are summed over)

$\mathbf{e}^i = g^{ij}\,\mathbf{e}_j,$

where the coefficients $g^{ij}$ are the entries of the inverse matrix of

$g_{ij} = \mathbf{e}_i \cdot \mathbf{e}_j.$

Indeed, we then have

$\mathbf{e}^i \cdot \mathbf{e}_k = g^{ij}\,\mathbf{e}_j \cdot \mathbf{e}_k = g^{ij}g_{jk} = \delta^i_k.$

The covariant and contravariant components of any vector

$\mathbf{v} = v_i\,\mathbf{e}^i = v^i\,\mathbf{e}_i$

are related as above by

$v_i = \mathbf{v} \cdot \mathbf{e}_i = (v^j\,\mathbf{e}_j) \cdot \mathbf{e}_i = v^j g_{ji}$

and

$v^i = \mathbf{v} \cdot \mathbf{e}^i = (v_j\,\mathbf{e}^j) \cdot \mathbf{e}^i = v_j g^{ji}.$
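The same raising and lowering works in any dimension; here is a sketch with a randomly generated (hence generically non-orthogonal) basis in four dimensions, using NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
E = rng.normal(size=(n, n))      # rows: a generic basis e_1, ..., e_n
g = E @ E.T                      # metric coefficients g_ij = e_i . e_j
g_inv = np.linalg.inv(g)         # coefficients g^ij

v = rng.normal(size=n)           # a vector, in ambient coordinates
v_co = E @ v                     # lower: v_i = v . e_i
v_contra = g_inv @ v_co          # raise: v^i = g^ij v_j

# v is recovered as v^i e_i, confirming the two component systems agree:
assert np.allclose(v_contra @ E, v)
```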
The distinction between covariance and contravariance is particularly important for computations with tensors, which often have mixed variance. This means that they have both covariant and contravariant components, or both vector and covector components. The valence (or type) of a tensor is the number of its covariant and contravariant indices, and in Einstein notation, covariant components have lower indices, while contravariant components have upper indices. The duality between covariance and contravariance intervenes whenever a vector or tensor quantity is represented by its components, although modern differential geometry uses more sophisticated index-free methods to represent tensors.
In tensor analysis, a covariant vector varies more or less reciprocally to a corresponding contravariant vector. Expressions for lengths, areas and volumes of objects in the vector space can then be given in terms of tensors with covariant and contravariant indices. Under simple expansions and contractions of the coordinates, the reciprocity is exact; under affine transformations the components of a vector intermingle on going between covariant and contravariant expression.
On a manifold, a tensor field will typically have multiple, upper and lower indices, where Einstein notation is widely used. When the manifold is equipped with a metric, covariant and contravariant indices become very closely related to one another. Contravariant indices can be turned into covariant indices by contracting with the metric tensor. The reverse is possible by contracting with the (matrix) inverse of the metric tensor. Note that in general, no such relation exists in spaces not endowed with a metric tensor. Furthermore, from a more abstract standpoint, a tensor is simply "there" and its components of either kind are only calculational artifacts whose values depend on the chosen coordinates.
The explanation in geometric terms is that a general tensor will have contravariant indices as well as covariant indices, because it has parts that live in the tangent bundle as well as the cotangent bundle.
A contravariant vector is one which transforms like $\frac{dx^\mu}{d\tau}$, where $x^\mu$ are the coordinates of a particle at its proper time $\tau$. A covariant vector is one which transforms like $\frac{\partial \varphi}{\partial x^\mu}$, where $\varphi$ is a scalar field.
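A numerical illustration of these two rules (NumPy; a sketch assuming units with c = 1, metric signature (+,−,−,−), and an arbitrary boost velocity): contravariant components transform with the Lorentz matrix, covariant components with its inverse transpose, and their contraction is frame-independent:

```python
import numpy as np

beta = 0.6                                     # boost speed (illustrative)
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[ gamma, -gamma*beta, 0, 0],     # boost along the x-axis
              [-gamma*beta,  gamma, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
eta = np.diag([1.0, -1.0, -1.0, -1.0])         # Minkowski metric

x = np.array([2.0, 1.0, 0.0, 0.0])             # contravariant components x^mu
x_new = L @ x                                  # x'^mu = L^mu_nu x^nu

x_co = eta @ x                                 # covariant components x_mu
x_co_new = np.linalg.inv(L).T @ x_co           # covariant (inverse) rule

# The contraction x^mu x_mu (the invariant interval) is unchanged:
assert np.isclose(x @ x_co, x_new @ x_co_new)
```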
In category theory, there are covariant functors and contravariant functors. The assignment of the dual space to a vector space is a standard example of a contravariant functor. Contravariant (resp. covariant) vectors are contravariant (resp. covariant) functors from a $\mathrm{GL}(n)$-torsor to the fundamental representation of $\mathrm{GL}(n)$. Similarly, tensors of higher degree are functors with values in other representations of $\mathrm{GL}(n)$. However, some constructions of multilinear algebra are of "mixed" variance, which prevents them from being functors.
In differential geometry, the components of a vector relative to a basis of the tangent bundle are covariant if they change with the same linear transformation as a change of basis. They are contravariant if they change by the inverse transformation. This is sometimes a source of confusion for two distinct but related reasons. The first is that vectors whose components are covariant (called covectors or 1-forms) actually pull back under smooth functions, meaning that the operation assigning the space of covectors to a smooth manifold is actually a contravariant functor. Likewise, vectors whose components are contravariant push forward under smooth mappings, so the operation assigning the space of (contravariant) vectors to a smooth manifold is a covariant functor. Secondly, in the classical approach to differential geometry, it is not bases of the tangent bundle that are the most primitive object, but rather changes in the coordinate system. Vectors with contravariant components transform in the same way as changes in the coordinates (because these actually change oppositely to the induced change of basis). Likewise, vectors with covariant components transform in the opposite way as changes in the coordinates.
In vector calculus, divergence is a vector operator that operates on a vector field, producing a scalar field giving the quantity of the vector field's source at each point. More technically, the divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point.
In vector calculus, the gradient of a scalar-valued differentiable function $f$ of several variables is the vector field $\nabla f$ whose value at a point gives the direction and the rate of fastest increase. The gradient transforms like a vector under a change of basis of the space of variables of $f$. If the gradient of a function is non-zero at a point $p$, the direction of the gradient is the direction in which the function increases most quickly from $p$, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative. Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, where it is used to minimize a function by gradient descent. In coordinate-free terms, the gradient of a function $f(\mathbf{r})$ may be defined by:

$df = \nabla f \cdot d\mathbf{r},$

where $df$ is the total infinitesimal change in $f$ for an infinitesimal displacement $d\mathbf{r}$.
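This coordinate-free definition is easy to check numerically (NumPy; the function, point, and displacement below are arbitrary illustrations):

```python
import numpy as np

# Check df = grad(f) . dr for the illustrative function f(x, y) = x**2 * y.
def f(r):
    x, y = r
    return x**2 * y

def grad_f(r):
    x, y = r
    return np.array([2*x*y, x**2])

r0 = np.array([1.0, 2.0])
dr = np.array([1e-6, -2e-6])          # a small displacement

df = f(r0 + dr) - f(r0)               # actual change in f
assert np.isclose(df, grad_f(r0) @ dr, rtol=1e-4)
```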
In mathematics, a tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects related to a vector space. Tensors may map between different objects such as vectors, scalars, and even other tensors. There are many types of tensors, including scalars and vectors, dual vectors, multilinear maps between vector spaces, and even some operations such as the dot product. Tensors are defined independent of any basis, although they are often referred to by their components in a basis related to a particular coordinate system; those components form an array, which can be thought of as a high-dimensional matrix.
In mathematics, especially the usage of linear algebra in mathematical physics and differential geometry, Einstein notation is a notational convention that implies summation over a set of indexed terms in a formula, thus achieving brevity. As part of mathematics it is a notational subset of Ricci calculus; however, it is often used in physics applications that do not distinguish between tangent and cotangent spaces. It was introduced to physics by Albert Einstein in 1916.
In the mathematical field of differential geometry, a metric tensor is an additional structure on a manifold M that allows defining distances and angles, just as the inner product on a Euclidean space allows defining distances and angles there. More precisely, a metric tensor at a point p of M is a bilinear form defined on the tangent space at p, and a metric field on M consists of a metric tensor at each point p of M that varies smoothly with p.
In multilinear algebra, a tensor contraction is an operation on a tensor that arises from the canonical pairing of a vector space and its dual. In components, it is expressed as a sum of products of scalar components of the tensor(s) caused by applying the summation convention to a pair of dummy indices that are bound to each other in an expression. The contraction of a single mixed tensor occurs when a pair of literal indices of the tensor are set equal to each other and summed over. In Einstein notation this summation is built into the notation. The result is another tensor with order reduced by 2.
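For a concrete feel for contraction, NumPy's einsum implements exactly this bound-index summation (the array below is an arbitrary illustration of a mixed tensor $T^i{}_{jk}$):

```python
import numpy as np

T = np.arange(36.0).reshape(3, 3, 4)   # illustrative components T^i_jk

# Contract the upper index i with the lower index j (set equal, sum over):
C = np.einsum('iik->k', T)             # C_k = T^i_ik, order reduced by 2

# Equivalent explicit sum:
assert np.allclose(C, sum(T[i, i, :] for i in range(3)))
```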
In special relativity, a four-vector is an object with four components, which transform in a specific way under Lorentz transformations. Specifically, a four-vector is an element of a four-dimensional vector space considered as a representation space of the standard representation of the Lorentz group, the $\left(\tfrac{1}{2}, \tfrac{1}{2}\right)$ representation. It differs from a Euclidean vector in how its magnitude is determined. The transformations that preserve this magnitude are the Lorentz transformations, which include spatial rotations and boosts.
In mathematics, the covariant derivative is a way of specifying a derivative along tangent vectors of a manifold. Alternatively, the covariant derivative is a way of introducing and working with a connection on a manifold by means of a differential operator, to be contrasted with the approach given by a principal connection on the frame bundle – see affine connection. In the special case of a manifold isometrically embedded into a higher-dimensional Euclidean space, the covariant derivative can be viewed as the orthogonal projection of the Euclidean directional derivative onto the manifold's tangent space. In this case the Euclidean derivative is broken into two parts, the extrinsic normal component and the intrinsic covariant derivative component.
In physics, a covariant transformation is a rule that specifies how certain entities, such as vectors or tensors, change under a change of basis. The transformation that describes the new basis vectors as a linear combination of the old basis vectors is defined as a covariant transformation. Conventionally, indices identifying the basis vectors are placed as lower indices and so are all entities that transform in the same way. The inverse of a covariant transformation is a contravariant transformation. Whenever a vector should be invariant under a change of basis, that is to say it should represent the same geometrical or physical object having the same magnitude and direction as before, its components must transform according to the contravariant rule. Conventionally, indices identifying the components of a vector are placed as upper indices and so are all indices of entities that transform in the same way. The sum over pairwise matching indices of a product with the same lower and upper indices is invariant under a transformation.
An electromagnetic four-potential is a relativistic vector function from which the electromagnetic field can be derived. It combines both an electric scalar potential and a magnetic vector potential into a single four-vector.
In geometry, curvilinear coordinates are a coordinate system for Euclidean space in which the coordinate lines may be curved. These coordinates may be derived from a set of Cartesian coordinates by using a transformation that is locally invertible at each point. This means that one can convert a point given in a Cartesian coordinate system to its curvilinear coordinates and back. The name curvilinear coordinates, coined by the French mathematician Lamé, derives from the fact that the coordinate surfaces of the curvilinear systems are curved.
In differential geometry, the four-gradient is the four-vector analogue of the gradient from vector calculus.
In differential geometry, a tensor density or relative tensor is a generalization of the tensor field concept. A tensor density transforms as a tensor field when passing from one coordinate system to another, except that it is additionally multiplied or weighted by a power W of the Jacobian determinant of the coordinate transition function or its absolute value. A tensor density with a single index is called a vector density. A distinction is made among (authentic) tensor densities, pseudotensor densities, even tensor densities and odd tensor densities. Sometimes tensor densities with a negative weight W are called tensor capacity. A tensor density can also be regarded as a section of the tensor product of a tensor bundle with a density bundle.
In electromagnetism, the electromagnetic tensor or electromagnetic field tensor is a mathematical object that describes the electromagnetic field in spacetime. The field tensor was first used after the four-dimensional tensor formulation of special relativity was introduced by Hermann Minkowski. The tensor allows related physical laws to be written concisely, and allows for the quantization of the electromagnetic field by the Lagrangian formulation.
In mathematics, orthogonal coordinates are defined as a set of d coordinates $q^1, q^2, \dots, q^d$ in which the coordinate hypersurfaces all meet at right angles (note that superscripts are indices, not exponents). A coordinate surface for a particular coordinate qk is the curve, surface, or hypersurface on which qk is a constant. For example, the three-dimensional Cartesian coordinates (x, y, z) constitute an orthogonal coordinate system, since the coordinate surfaces x = constant, y = constant, and z = constant are planes that meet at right angles to one another, i.e., are perpendicular. Orthogonal coordinates are a special but extremely common case of curvilinear coordinates.
In mathematics and mathematical physics, raising and lowering indices are operations on tensors which change their type. Raising and lowering indices are a form of index manipulation in tensor expressions.
The theory of special relativity plays an important role in the modern theory of classical electromagnetism. It gives formulas for how electromagnetic objects, in particular the electric and magnetic fields, are altered under a Lorentz transformation from one inertial frame of reference to another. It sheds light on the relationship between electricity and magnetism, showing that frame of reference determines if an observation follows electric or magnetic laws. It motivates a compact and convenient notation for the laws of electromagnetism, namely the "manifestly covariant" tensor form.
A system of skew coordinates is a curvilinear coordinate system where the coordinate surfaces are not orthogonal, in contrast to orthogonal coordinates.
In mathematics, Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields on a differentiable manifold, with or without a metric tensor or connection. It is also the modern name for what used to be called the absolute differential calculus, developed by Gregorio Ricci-Curbastro in 1887–1896, and subsequently popularized in a paper written with his pupil Tullio Levi-Civita in 1900. Jan Arnoldus Schouten developed the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its applications to general relativity and differential geometry in the early twentieth century.
Curvilinear coordinates can be formulated in tensor calculus, with important applications in physics and engineering, particularly for describing transportation of physical quantities and deformation of matter in fluid mechanics and continuum mechanics.