# Tensor product

In mathematics, the tensor product ${\displaystyle V\otimes W}$ of two vector spaces V and W (over the same field) is a vector space that can be thought of as the space of all tensors built from vectors in the constituent spaces via an additional operation, one that can be considered a generalization and abstraction of the outer product. Because of this connection with tensors, which are the elements of a tensor product, tensor products find uses in many areas of application, including physics and engineering, though the full theoretical machinery described below is not commonly invoked there. For example, in general relativity, the gravitational field is described through the metric tensor, which is a field (in the sense of physics) of tensors, one at each point of the space-time manifold, each of which lives in the tensor self-product of the tangent space ${\displaystyle T_{x}M}$ at its point of residence on the manifold (such a collection of tensor products attached to another space is called a tensor bundle).

## Tensors in finite dimensions, and the outer product

The concept of tensor product generalizes the idea of forming tensors from vectors using the outer product, which is an operation that can be defined in finite-dimensional vector spaces using matrices: given two vectors ${\displaystyle v\in V}$ and ${\displaystyle w\in W}$ written in terms of components, i.e.

${\displaystyle \mathbf {v} ={\begin{bmatrix}v_{1}\\v_{2}\\\vdots \\v_{n}\end{bmatrix}}}$

and

${\displaystyle \mathbf {w} ={\begin{bmatrix}w_{1}\\w_{2}\\\vdots \\w_{m}\end{bmatrix}}}$

their outer or Kronecker product is given by

${\displaystyle \mathbf {v} \otimes \mathbf {w} ={\begin{bmatrix}v_{1}w_{1}&v_{1}w_{2}&\cdots &v_{1}w_{m}\\v_{2}w_{1}&v_{2}w_{2}&\cdots &v_{2}w_{m}\\\vdots &\vdots &\ddots &\vdots \\v_{n}w_{1}&v_{n}w_{2}&\cdots &v_{n}w_{m}\end{bmatrix}}}$

or, in terms of elements, the ${\displaystyle ij}$-th component is

${\displaystyle (\mathbf {v} \otimes \mathbf {w} )_{ij}=v_{i}w_{j}.}$
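As a quick illustration, the componentwise formula can be computed directly; a minimal pure-Python sketch (the helper name `outer` is ours, not standard):

```python
def outer(v, w):
    """Outer (Kronecker) product of two coordinate vectors: the n-by-m
    matrix with entries (v ⊗ w)_ij = v_i * w_j."""
    return [[vi * wj for wj in w] for vi in v]

v = [1, 2, 3]    # a vector in R^3
w = [4, 5]       # a vector in R^2
T = outer(v, w)  # a 3-by-2 matrix of pairwise products
assert T == [[4, 5], [8, 10], [12, 15]]
```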

The matrix formed this way corresponds naturally to a tensor, where a tensor is understood as a multilinear functional: sandwich the matrix, using matrix multiplication, between a vector and a dual vector (a transpose):

${\displaystyle T(\mathbf {a} ^{T},\mathbf {b} ^{T}):=\mathbf {a} ^{T}(\mathbf {v} \otimes \mathbf {w} )\mathbf {b} }$

Note that the tensor, as written, takes two dual vectors; this point will be dealt with later. In the finite-dimensional case there is no strong distinction between a space and its dual, but the distinction does matter in infinite dimensions, and, moreover, getting the regular-versus-dual part right is essential to ensuring that the notion of tensor being developed here corresponds correctly to other senses in which tensors are viewed, such as in terms of transformations, which is common in physics.
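On a pure tensor the sandwich factors as ${\displaystyle \mathbf {a} ^{T}(\mathbf {v} \otimes \mathbf {w} )\mathbf {b} =(\mathbf {a} \cdot \mathbf {v} )(\mathbf {w} \cdot \mathbf {b} ),}$ which can be checked numerically; a minimal pure-Python sketch (helper names are ours):

```python
def outer(v, w):
    return [[vi * wj for wj in w] for vi in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def tensor_as_functional(v, w, a, b):
    """Evaluate the bilinear functional a^T (v ⊗ w) b by explicit sums."""
    M = outer(v, w)
    return sum(a[i] * M[i][j] * b[j]
               for i in range(len(v)) for j in range(len(w)))

# The sandwich factors as (a · v)(w · b):
v, w = [1, 2], [3, 4, 5]
a, b = [2, -1], [1, 0, 1]
assert tensor_as_functional(v, w, a, b) == dot(a, v) * dot(w, b)
```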

The tensors constructed this way themselves generate a vector space when we add and scale them in the natural componentwise fashion, and, in fact, every multilinear functional of the type given can be written as a sum of outer products, which we may call pure tensors or simple tensors. This suffices to define the tensor product whenever vectors and transformations can be written in terms of matrices; to get a fully general operation, however, a more abstract approach is required. In particular, we would like to isolate the "essential features" of the tensor product without having to specify a particular basis for its construction, and that is what we do in the following sections.

## Abstracting the tensor product

To achieve that aim, the most natural way to proceed is to try to isolate an essential characterizing property which picks out, among all possible vector spaces we could build from V and W, the one which (up to isomorphism) is their tensor product, and which applies without regard to any arbitrary choices such as a choice of basis. The way to do this is to flip the tensor concept "inside out": instead of viewing tensors as objects that act upon vectors in the manner of a bilinear map, we view them instead as objects to be acted upon to produce a bilinear map. The trick is in recognizing that the Kronecker product "preserves all the information" regarding which vectors went into it: the ratios of vector components can be derived from

${\displaystyle {\frac {v_{j}}{v_{i}}}={\frac {v_{j}w_{k}}{v_{i}w_{k}}}={\frac {(\mathbf {v} \otimes \mathbf {w} )_{jk}}{(\mathbf {v} \otimes \mathbf {w} )_{ik}}}}$

and, from those ratios, the individual components themselves recovered. As a result, a single Kronecker outer product can be used in lieu of the pair ${\displaystyle (v,w)}$ of vectors that formed it, and conversely. Most importantly, this means that any bilinear map ${\displaystyle f:V\times W\to Z,}$ for any third vector space Z, can be rewritten as a linear map ${\displaystyle f_{\operatorname {T} }:V\otimes W\to Z}$ where

${\displaystyle f_{\operatorname {T} }(\mathbf {v} \otimes \mathbf {w} ):=f(\mathbf {v} ,\mathbf {w} ).}$

The universal property, then, is that if we have the combining operation ${\displaystyle \,\otimes ,\,}$ and we are given any bilinear map of the form mentioned, there is exactly one ${\displaystyle f_{\operatorname {T} }}$ that meets this requirement. This is not hard to see if we expand in terms of bases, but the more important point is that it can be used as a way to characterize the tensor product; that is, it can be used to define the tensor product axiomatically, with no reference to bases at all. Before we can do that, however, we first need to show that the tensor product exists and is unique (up to isomorphism) for all vector spaces V and W, and to do that, we need a construction.
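The factorization can be sketched numerically. In the example below, a bilinear map ${\displaystyle f}$ is given by a made-up coefficient matrix M, and the induced map ${\displaystyle f_{\operatorname {T} }}$ reads off the same value from the outer-product array alone:

```python
def outer(v, w):
    return [[vi * wj for wj in w] for vi in v]

# A bilinear map f : R^2 x R^3 -> R with f(v, w) = sum_ij M[i][j] v[i] w[j].
# (M is an arbitrary example, not anything canonical.)
M = [[1, 0, 2],
     [3, -1, 0]]

def f(v, w):
    return sum(M[i][j] * v[i] * w[j] for i in range(2) for j in range(3))

def f_T(T):
    """The induced linear map on tensors: it only needs the combined
    array T = v ⊗ w, not the pair (v, w) itself."""
    return sum(M[i][j] * T[i][j] for i in range(2) for j in range(3))

v, w = [1, 2], [3, 4, 5]
assert f_T(outer(v, w)) == f(v, w)   # f factors through the outer product
```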

## The constructive tensor product

### The free vector space

To perform such a construction, the first step involves introducing something called a "free vector space" over a given set. The thrust behind this idea is essentially a reversal of what we did in the first section above: since a generic tensor ${\displaystyle T}$ can be written as the double sum

${\displaystyle T=\sum _{i=1}^{n}\sum _{j=1}^{m}(v_{i}w_{j})(\mathbf {e} _{i}\otimes \mathbf {f} _{j})}$

the most natural way to approach the problem is to figure out how we can "forget" the specific choice of bases ${\displaystyle \mathbf {e} }$ and ${\displaystyle \mathbf {f} }$ used here. In mathematics, the way we "forget" representational details of something is to establish an identification that tells us which different expressions are to be considered representations of the same thing, i.e. a criterion that, given two of them, says either "yes, they represent the same thing" or "no, they don't", and then to "lump together" all representations as constituting the "thing represented", without reference to any one in particular, by packaging them all into a single set. In formal terms, we first build an equivalence relation, and then take the quotient set by that relation.

But before we can do that, we first need to develop the set over which the equivalence relation will be taken. We do so by approaching the problem the other way around, from the "bottom up": since we are not guaranteed a constructible basis when starting from arbitrary vector spaces, we might instead start by guaranteeing that we have one; that is, we first consider a "basis", on its own, as given, and then build the vector space on top of it. To that end, suppose that ${\displaystyle B}$ is some set, which we could call an abstract basis set. Now consider all formal expressions of the form

${\displaystyle \mathbf {v} =a_{1}\beta _{1}+a_{2}\beta _{2}+\cdots +a_{n}\beta _{n}}$

of arbitrary, but finite, length ${\displaystyle n}$ and for which ${\displaystyle a_{j}}$ are scalars and ${\displaystyle \beta _{j}}$ are members of ${\displaystyle B.}$ Intuitively, this is a linear combination of the basis vectors in the usual sense of expanding an element of a vector space. We call this a "formal expression" because technically it is illegal to multiply ${\displaystyle a_{j}\beta _{j}}$ since there is no defined multiplication operation by default on an arbitrary set and arbitrary field of scalars. Instead, we will "pretend" (similar to defining the imaginary numbers) that this refers to something, and then will go about manipulating it according to the rules we expect for a vector space, e.g. the sum of two such strings using the same sequence of members of ${\displaystyle B}$ is

${\displaystyle (a_{1}\beta _{1}+a_{2}\beta _{2}+\cdots +a_{n}\beta _{n})+(b_{1}\beta _{1}+b_{2}\beta _{2}+\cdots +b_{n}\beta _{n})=(a_{1}+b_{1})\beta _{1}+(a_{2}+b_{2})\beta _{2}+\cdots +(a_{n}+b_{n})\beta _{n}}$

where we have used the associative, commutative, and distributive laws to rearrange the first sum into the second. Continuing this way for scalar multiples and all different-length combinations of vectors allows us to build up a vector addition and scalar multiplication on this set of formal expressions, and we call it the free vector space over ${\displaystyle B,}$ writing ${\displaystyle F(B).}$ Note that the elements of ${\displaystyle B,}$ considered as length-one formal expressions with coefficient 1 out front, form a Hamel basis for this space.
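A minimal pure-Python sketch of ${\displaystyle F(B)}$ over the reals: a formal expression is modeled as a finite mapping from basis elements to their coefficients, and the vector operations act termwise (the helper names are ours):

```python
from collections import Counter

# A formal expression a1*b1 + ... + an*bn is modeled as a mapping
# {basis element -> coefficient}; only finitely many coefficients are
# nonzero, matching the finite-length requirement above.
def add(x, y):
    """Sum of two formal expressions: add coefficients termwise."""
    out = Counter(x)
    for beta, c in y.items():
        out[beta] += c
    return {b: c for b, c in out.items() if c != 0}

def scale(c, x):
    """Scalar multiple of a formal expression."""
    return {b: c * a for b, a in x.items() if c * a != 0}

# Elements of B can be any hashable objects, e.g. strings:
x = {"apple": 2, "pear": 1}       # 2·apple + 1·pear
y = {"pear": -1, "plum": 5}       # -1·pear + 5·plum
assert add(x, y) == {"apple": 2, "plum": 5}
assert scale(3, x) == {"apple": 6, "pear": 3}
```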

The tensor product expression is then abstracted by considering that if ${\displaystyle \beta _{j}}$ and ${\displaystyle \gamma _{j}}$ represent "abstract basis vectors" from two sets ${\displaystyle B}$ and ${\displaystyle G,}$ i.e. that "${\displaystyle \beta _{j}=\mathbf {e} _{j}}$" and "${\displaystyle \gamma _{j}=\mathbf {f} _{j}}$", then pairs of these in the Cartesian product ${\displaystyle B\times G,}$ i.e. ${\displaystyle (\beta _{i},\gamma _{j})}$ are taken as standing for the tensor products ${\displaystyle \mathbf {e} _{i}\otimes \mathbf {f} _{j}.}$ (Note that the tensor products in the expression are in some sense "atomic", i.e. additions and scalar multiplications do not split them up into anything else, so we can replace them with something different without altering the mathematical structure.) With such an identification, we can thus define the tensor product of two free vector spaces ${\displaystyle F(B)}$ and ${\displaystyle F(G)}$ as being something (yet to be decided) that is isomorphic to ${\displaystyle F(B\times G).}$

### The equivalence relation

The above definition will work for any vector space in which we can specify a basis, since we can just rebuild it as the free vector space over that basis: the construction exactly mirrors, by design, how vectors are represented via a Hamel basis. In effect, we have not gained anything yet; the gain comes from the following step.

Now, we are not assuming access to bases for the vector spaces ${\displaystyle V}$ and ${\displaystyle W}$ whose tensor product ${\displaystyle V\otimes W}$ we want to form. Instead, we take all of ${\displaystyle V}$ and ${\displaystyle W}$ as the "basis" from which to build up the tensors. This is the next best thing, and the one thing we are guaranteed to be able to do regardless of any concerns about finding a specific basis; it corresponds to adding together arbitrary outer products ${\displaystyle \mathbf {v} \otimes \mathbf {w} }$ of arbitrary vectors, as at the end of the first section above. The only difference is that if we use the free vector space construction and form the obvious ${\displaystyle F(V)\otimes F(W)=F(V\times W),}$ it will have many redundant versions of what should be the same tensor. Going back to the basis-dependent case, if we take the example ${\displaystyle V=W=\mathbb {R} ^{2}}$ with the standard basis, the tensor formed by the vectors ${\displaystyle \mathbf {x} ={\begin{bmatrix}0&3\end{bmatrix}}^{\mathsf {T}}}$ and ${\displaystyle \mathbf {y} ={\begin{bmatrix}5&-3\end{bmatrix}}^{\mathsf {T}},}$ i.e.

${\displaystyle T:=\mathbf {x} \otimes \mathbf {y} ={\begin{bmatrix}0&0\\15&-9\end{bmatrix}},}$

could also be represented by other sums, such as the sum using individual basic tensors ${\displaystyle \mathbf {e} _{i}\otimes \mathbf {e} _{j},}$ e.g.

${\displaystyle T=0\left(\mathbf {e} _{1}\otimes \mathbf {e} _{1}\right)+0\left(\mathbf {e} _{1}\otimes \mathbf {e} _{2}\right)+15\left(\mathbf {e} _{2}\otimes \mathbf {e} _{1}\right)-9\left(\mathbf {e} _{2}\otimes \mathbf {e} _{2}\right).}$

These, while equal expressions in the concrete case, would correspond to distinct elements of the free vector space ${\displaystyle F(V\times W),}$ namely

${\displaystyle T=(x,y)}$

in the first case and

${\displaystyle T=0(e_{1},e_{1})+0(e_{1},e_{2})+15(e_{2},e_{1})-9(e_{2},e_{2})}$

in the second case. Thus we must condense them; this is where the equivalence relation comes into play. The key observation for building it is that, given any vector ${\displaystyle \mathbf {x} }$ in a vector space, it is always possible to represent it as the sum of two other vectors ${\displaystyle \mathbf {a} }$ and ${\displaystyle \mathbf {b} ,}$ neither equal to the original: if nothing else, let ${\displaystyle \mathbf {a} }$ be any vector and take ${\displaystyle \mathbf {b} :=\mathbf {x} -\mathbf {a} .}$ This also shows that, given one vector and then a second vector, we can write the first vector in terms of the second together with a suitable third vector (indeed in many ways; just consider scalar multiples of the second vector in the same subtraction).

This is useful to us because the outer product satisfies the following linearity properties, which can be proven by simple algebra on the corresponding matrix expressions:

${\displaystyle {\begin{aligned}(\mathbf {u} \otimes \mathbf {v} )^{\mathsf {T}}&=(\mathbf {v} \otimes \mathbf {u} )\\(\mathbf {v} +\mathbf {w} )\otimes \mathbf {u} &=\mathbf {v} \otimes \mathbf {u} +\mathbf {w} \otimes \mathbf {u} \\\mathbf {u} \otimes (\mathbf {v} +\mathbf {w} )&=\mathbf {u} \otimes \mathbf {v} +\mathbf {u} \otimes \mathbf {w} \\c(\mathbf {v} \otimes \mathbf {u} )&=(c\mathbf {v} )\otimes \mathbf {u} =\mathbf {v} \otimes (c\mathbf {u} )\end{aligned}}}$

If we want to relate the outer product ${\displaystyle \mathbf {v} \otimes \mathbf {w} }$ to, say, ${\displaystyle \mathbf {e} _{1}\otimes \mathbf {w} ,}$ we can use the distributivity and scalar-multiplication relations above together with a suitable expression of ${\displaystyle \mathbf {v} }$ as the sum of some vector and some scalar multiple of ${\displaystyle \mathbf {e} _{1}.}$

Equality between two concrete tensors is then obtained if using the above rules permits us to rearrange one sum of outer products into the other by suitably decomposing vectors, regardless of whether we have a set of actual basis vectors. Applying that to our example above, we see that of course we have

${\displaystyle {\begin{aligned}\mathbf {x} &=0\mathbf {e} _{1}+3\mathbf {e} _{2}\\\mathbf {y} &=5\mathbf {e} _{1}-3\mathbf {e} _{2}\end{aligned}}}$

for which substitution in

${\displaystyle T=\mathbf {x} \otimes \mathbf {y} }$

gives us

${\displaystyle T=\left(0\mathbf {e} _{1}+3\mathbf {e} _{2}\right)\otimes \left(5\mathbf {e} _{1}-3\mathbf {e} _{2}\right)}$

and judicious use of the distributivity properties lets us rearrange it to the desired form. Likewise, there is a corresponding "mirror" manipulation in terms of the free vector space elements ${\displaystyle (x,y)}$ and ${\displaystyle (e_{1},e_{1}),}$ ${\displaystyle (e_{1},e_{2}),}$ etc., and this finally leads us to the formal definition of the tensor product.
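The two representations of ${\displaystyle T}$ above really are equal as matrices, as a quick pure-Python check confirms (an illustrative sketch, not part of the formal construction):

```python
def outer(v, w):
    return [[vi * wj for wj in w] for vi in v]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * a for a in row] for row in A]

e1, e2 = [1, 0], [0, 1]
x, y = [0, 3], [5, -3]

# x ⊗ y computed directly ...
T_direct = outer(x, y)

# ... and as the expansion 0(e1⊗e1) + 0(e1⊗e2) + 15(e2⊗e1) - 9(e2⊗e2):
T_expanded = mat_add(
    mat_add(mat_scale(0, outer(e1, e1)), mat_scale(0, outer(e1, e2))),
    mat_add(mat_scale(15, outer(e2, e1)), mat_scale(-9, outer(e2, e2))),
)
assert T_direct == T_expanded == [[0, 0], [15, -9]]
```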

### Putting all the construction together

The abstract tensor product of two vector spaces ${\displaystyle V}$ and ${\displaystyle W}$ over a common base field ${\displaystyle K}$ is the quotient vector space

${\displaystyle V\otimes W:=F(V\times W)/{\sim }}$

where ${\displaystyle \sim }$ is the equivalence relation of formal equality generated by assuming that, for each ${\displaystyle (v,w)}$ and ${\displaystyle (v',w')}$ taken as formal expressions in the free vector space ${\displaystyle F(V\times W),}$ the following hold:

Identity. ${\displaystyle (v,w)\sim (v,w).}$
Symmetry. ${\displaystyle (v,w)\sim (v',w')}$ implies ${\displaystyle (v',w')\sim (v,w).}$
Transitivity. ${\displaystyle (v,w)\sim (v',w')}$ and ${\displaystyle (v',w')\sim (v'',w'')}$ implies ${\displaystyle (v,w)\sim (v'',w'').}$
Distributivity. ${\displaystyle (v,w)+(v',w)\sim (v+v',w)}$ and ${\displaystyle (v,w)+(v,w')\sim (v,w+w').}$
Scalar multiples. ${\displaystyle c(v,w)\sim (cv,w)}$ and ${\displaystyle c(v,w)\sim (v,cw).}$

and then testing equivalence of generic formal expressions through suitable manipulations based thereupon. Arithmetic is defined on the tensor product by choosing representative elements, applying the arithmetical rules, and finally taking the equivalence class. Moreover, given any two vectors ${\displaystyle v\in V}$ and ${\displaystyle w\in W,}$ the equivalence class ${\displaystyle [(v,w)]}$ is denoted ${\displaystyle v\otimes w.}$

## Properties

### Notation

Elements of ${\displaystyle V\otimes W}$ are often referred to as tensors, although this term refers to many other related concepts as well.[1] If v belongs to V and w belongs to W, then the equivalence class of (v, w) is denoted by ${\displaystyle v\otimes w,}$ which is called the tensor product of v with w. In physics and engineering, this use of the ${\displaystyle \,\otimes \,}$ symbol refers specifically to the outer product operation; the result of the outer product ${\displaystyle v\otimes w}$ is one of the standard ways of representing the equivalence class ${\displaystyle v\otimes w.}$[2] An element of ${\displaystyle V\otimes W}$ that can be written in the form ${\displaystyle v\otimes w}$ is called a pure or simple tensor. In general, an element of the tensor product space is not a pure tensor, but rather a finite linear combination of pure tensors. For example, if ${\displaystyle v_{1}}$ and ${\displaystyle v_{2}}$ are linearly independent, and ${\displaystyle w_{1}}$ and ${\displaystyle w_{2}}$ are also linearly independent, then ${\displaystyle v_{1}\otimes w_{1}+v_{2}\otimes w_{2}}$ cannot be written as a pure tensor. The minimal number of simple tensors required to express an element of a tensor product is called the tensor rank (not to be confused with tensor order, which is the number of spaces of which one has taken the product, in this case 2; in notation, the number of indices), and for linear operators or matrices, thought of as (1, 1) tensors (elements of the space ${\displaystyle V\otimes V^{*}}$), it agrees with matrix rank.

### Dimension

Given bases ${\displaystyle \left\{v_{i}\right\}}$ and ${\displaystyle \left\{w_{j}\right\}}$ for V and W respectively, the tensors ${\displaystyle \left\{v_{i}\otimes w_{j}\right\}}$ form a basis for ${\displaystyle V\otimes W.}$ Therefore, if V and W are finite-dimensional, the dimension of the tensor product is the product of dimensions of the original spaces; for instance ${\displaystyle \mathbb {R} ^{m}\otimes \mathbb {R} ^{n}}$ is isomorphic to ${\displaystyle \mathbb {R} ^{mn}.}$

### Tensor product of linear maps

The tensor product also operates on linear maps between vector spaces. Specifically, given two linear maps ${\displaystyle S:V\to X}$ and ${\displaystyle T:W\to Y}$ between vector spaces, the tensor product of the two linear maps S and T is a linear map

${\displaystyle S\otimes T:V\otimes W\to X\otimes Y}$

defined by

${\displaystyle (S\otimes T)(v\otimes w)=S(v)\otimes T(w).}$

In this way, the tensor product becomes a bifunctor from the category of vector spaces to itself, covariant in both arguments.[3]

If S and T are both injective, surjective or (in the case that V, X, W, and Y are normed vector spaces or topological vector spaces) continuous, then ${\displaystyle S\otimes T}$ is injective, surjective or continuous, respectively.

By choosing bases of all vector spaces involved, the linear maps S and T can be represented by matrices. Then, depending on how the tensor ${\displaystyle v\otimes w}$ is vectorized, the matrix describing the tensor product ${\displaystyle S\otimes T}$ is the Kronecker product of the two matrices. For example, if V, X, W, and Y above are all two-dimensional and bases have been fixed for all of them, and S and T are given by the matrices

${\displaystyle A={\begin{bmatrix}a_{1,1}&a_{1,2}\\a_{2,1}&a_{2,2}\\\end{bmatrix}},\qquad B={\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}},}$

respectively, then the tensor product of these two matrices is

${\displaystyle {\begin{bmatrix}a_{1,1}&a_{1,2}\\a_{2,1}&a_{2,2}\\\end{bmatrix}}\otimes {\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}={\begin{bmatrix}a_{1,1}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}&a_{1,2}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}\\[3pt]a_{2,1}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}&a_{2,2}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}\\\end{bmatrix}}={\begin{bmatrix}a_{1,1}b_{1,1}&a_{1,1}b_{1,2}&a_{1,2}b_{1,1}&a_{1,2}b_{1,2}\\a_{1,1}b_{2,1}&a_{1,1}b_{2,2}&a_{1,2}b_{2,1}&a_{1,2}b_{2,2}\\a_{2,1}b_{1,1}&a_{2,1}b_{1,2}&a_{2,2}b_{1,1}&a_{2,2}b_{1,2}\\a_{2,1}b_{2,1}&a_{2,1}b_{2,2}&a_{2,2}b_{2,1}&a_{2,2}b_{2,2}\\\end{bmatrix}}.}$

The resultant rank is at most 4, and thus the resultant dimension is 4. Note that rank here denotes the tensor rank, i.e. the number of requisite indices (while the matrix rank counts the number of degrees of freedom in the resulting array). Note also that ${\displaystyle \operatorname {Tr} (A\otimes B)=\operatorname {Tr} A\times \operatorname {Tr} B.}$
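Both the defining identity ${\displaystyle (S\otimes T)(v\otimes w)=S(v)\otimes T(w)}$ (in vectorized form) and the trace identity can be checked with a short pure-Python sketch (the helper names and the sample matrices are ours):

```python
def kron_vec(v, w):
    """Kronecker product of coordinate vectors, flattened row-major;
    one standard vectorization of v ⊗ w."""
    return [vi * wj for vi in v for wj in w]

def kron_mat(A, B):
    """Kronecker product of matrices: row index pair (i,k), column (j,l)."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
v, w = [1, -1], [2, 3]

# (S ⊗ T)(v ⊗ w) = S(v) ⊗ T(w), in vectorized form:
assert matvec(kron_mat(A, B), kron_vec(v, w)) == kron_vec(matvec(A, v), matvec(B, w))
# Tr(A ⊗ B) = Tr(A) · Tr(B):
assert trace(kron_mat(A, B)) == trace(A) * trace(B)
```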

A dyadic product is the special case of the tensor product between two vectors of the same dimension.

### Universal property

In the context of vector spaces, the tensor product ${\displaystyle V\otimes W}$ and the associated bilinear map ${\displaystyle \varphi :V\times W\to V\otimes W}$ are characterized up to isomorphism by a universal property regarding bilinear maps. (Recall that a bilinear map is a function that is separately linear in each of its arguments.) Informally, ${\displaystyle \varphi }$ is the most general bilinear map out of ${\displaystyle V\times W.}$

The vector space ${\displaystyle V\otimes W}$ and the associated bilinear map ${\displaystyle \varphi :V\times W\to V\otimes W}$ have the property that any bilinear map ${\displaystyle h:V\times W\to Z}$ from ${\displaystyle V\times W}$ to any vector space ${\displaystyle Z}$ factors through ${\displaystyle \varphi }$ uniquely. By saying "${\displaystyle h}$ factors through ${\displaystyle \varphi }$ uniquely", we mean that there is a unique linear map ${\displaystyle {\tilde {h}}:V\otimes W\to Z}$ such that ${\displaystyle h={\tilde {h}}\circ \varphi .}$

This characterization can simplify proofs about the tensor product. For example, the tensor product is symmetric, meaning there is a canonical isomorphism:

${\displaystyle V\otimes W\cong W\otimes V.}$

To construct, say, a map from ${\displaystyle V\otimes W}$ to ${\displaystyle W\otimes V,}$ it suffices to give a bilinear map ${\displaystyle h:V\times W\to W\otimes V}$ that maps ${\displaystyle (v,w)}$ to ${\displaystyle w\otimes v.}$ Then the universal property of ${\displaystyle V\otimes W}$ means ${\displaystyle h}$ factors into a map ${\displaystyle {\tilde {h}}:V\otimes W\to W\otimes V.}$ A map ${\displaystyle {\tilde {g}}:W\otimes V\to V\otimes W}$ in the opposite direction is similarly defined, and one checks that the two linear maps ${\displaystyle {\tilde {h}}}$ and ${\displaystyle {\tilde {g}}}$ are inverse to one another by again using their universal properties.

The universal property is extremely useful in showing that a map to a tensor product is injective. For example, suppose we want to show that ${\displaystyle \mathbb {R} \otimes \mathbb {R} }$ is isomorphic to ${\displaystyle \mathbb {R} .}$ Since all simple tensors are of the form ${\displaystyle a\otimes b=(ab)\otimes 1,}$ and hence all elements of the tensor product are of the form ${\displaystyle x\otimes 1}$ by additivity in the first coordinate, we have a natural candidate for an isomorphism ${\displaystyle \mathbb {R} \rightarrow \mathbb {R} \otimes \mathbb {R} }$ given by mapping ${\displaystyle x}$ to ${\displaystyle x\otimes 1,}$ and this map is trivially surjective.

Showing injectivity directly would involve somehow showing that there are no non-trivial relationships between ${\displaystyle x\otimes 1}$ and ${\displaystyle y\otimes 1}$ for ${\displaystyle x\neq y,}$ which seems daunting. However, we know that there is a bilinear map ${\displaystyle \mathbb {R} \times \mathbb {R} \rightarrow \mathbb {R} }$ given by multiplying the coordinates together, and the universal property of the tensor product then furnishes a map of vector spaces ${\displaystyle \mathbb {R} \otimes \mathbb {R} \rightarrow \mathbb {R} }$ which maps ${\displaystyle x\otimes 1}$ to ${\displaystyle x,}$ and hence is an inverse of the previously constructed homomorphism, immediately implying the desired result. Note that, a priori, it is not even clear that this inverse map is well-defined, but the universal property and associated bilinear map together imply this is the case.

Similar reasoning can be used to show that the tensor product is associative, that is, there are natural isomorphisms

${\displaystyle V_{1}\otimes \left(V_{2}\otimes V_{3}\right)\cong \left(V_{1}\otimes V_{2}\right)\otimes V_{3}.}$

Therefore, it is customary to omit the parentheses and write ${\displaystyle V_{1}\otimes V_{2}\otimes V_{3}.}$

The category of vector spaces with tensor product is an example of a symmetric monoidal category.

The universal-property definition of a tensor product is valid in more categories than just the category of vector spaces. Instead of using multilinear (bilinear) maps, the general tensor product definition uses multimorphisms.[4]

### Tensor powers and braiding

Let n be a non-negative integer. The nth tensor power of the vector space V is the n-fold tensor product of V with itself. That is

${\displaystyle V^{\otimes n}\;{\overset {\mathrm {def} }{=}}\;\underbrace {V\otimes \cdots \otimes V} _{n}.}$

A permutation ${\displaystyle \sigma }$ of the set ${\displaystyle \{1,2,\ldots ,n\}}$ determines a mapping of the nth Cartesian power of V as follows:

${\displaystyle {\begin{cases}\sigma :V^{n}\to V^{n}\\\sigma \left(v_{1},v_{2},\ldots ,v_{n}\right)=\left(v_{\sigma (1)},v_{\sigma (2)},\ldots ,v_{\sigma (n)}\right)\end{cases}}}$

Let

${\displaystyle \varphi :V^{n}\to V^{\otimes n}}$

be the natural multilinear embedding of the Cartesian power of V into the tensor power of V. Then, by the universal property, there is a unique isomorphism

${\displaystyle \tau _{\sigma }:V^{\otimes n}\to V^{\otimes n}}$

such that

${\displaystyle \varphi \circ \sigma =\tau _{\sigma }\circ \varphi .}$

The isomorphism ${\displaystyle \tau _{\sigma }}$ is called the braiding map associated to the permutation ${\displaystyle \sigma .}$
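On pure tensors represented as nested arrays, the defining identity ${\displaystyle \varphi \circ \sigma =\tau _{\sigma }\circ \varphi }$ can be checked directly; in array terms, ${\displaystyle \tau _{\sigma }}$ is an axis permutation by ${\displaystyle \sigma ^{-1}.}$ A pure-Python sketch for ${\displaystyle n=3}$ (helper names are ours):

```python
def outer3(u, v, w):
    """φ: the natural embedding (u, v, w) ↦ u ⊗ v ⊗ w as a rank-3 array."""
    return [[[ui * vj * wk for wk in w] for vj in v] for ui in u]

u, v, w = [1, 2], [3, 5], [7, 11]
sigma = [1, 2, 0]          # the permutation σ, written 0-indexed

# Left side:  φ(σ(u, v, w)) — permute the vectors, then embed.
vecs = [u, v, w]
lhs = outer3(*[vecs[s] for s in sigma])

# Right side: τ_σ(φ(u, v, w)) — embed, then permute the axes by σ^{-1}.
T = outer3(u, v, w)
inv = [sigma.index(m) for m in range(3)]
rhs = [[[T[(j0, j1, j2)[inv[0]]][(j0, j1, j2)[inv[1]]][(j0, j1, j2)[inv[2]]]
         for j2 in range(2)]
        for j1 in range(2)]
       for j0 in range(2)]

assert lhs == rhs          # φ ∘ σ = τ_σ ∘ φ on pure tensors
```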

## Product of tensors

For non-negative integers r and s a type ${\displaystyle (r,s)}$ tensor on a vector space V is an element of

${\displaystyle T_{s}^{r}(V)=\underbrace {V\otimes \cdots \otimes V} _{r}\otimes \underbrace {V^{*}\otimes \cdots \otimes V^{*}} _{s}=V^{\otimes r}\otimes \left(V^{*}\right)^{\otimes s}.}$

Here ${\displaystyle V^{*}}$ is the dual vector space (which consists of all linear maps f from V to the ground field K).

There is a product map, called the (tensor) product of tensors:[5]

${\displaystyle T_{s}^{r}(V)\otimes _{K}T_{s'}^{r'}(V)\to T_{s+s'}^{r+r'}(V).}$

It is defined by grouping all occurring "factors" V together: writing ${\displaystyle v_{i}}$ for an element of V and ${\displaystyle f_{i}}$ for an element of the dual space,

${\displaystyle (v_{1}\otimes f_{1})\otimes (v'_{1})=v_{1}\otimes v'_{1}\otimes f_{1}.}$

Picking a basis of V and the corresponding dual basis of ${\displaystyle V^{*}}$ naturally induces a basis for ${\displaystyle T_{s}^{r}(V)}$ (this basis is described in the article on Kronecker products). In terms of these bases, the components of a (tensor) product of two (or more) tensors can be computed. For example, if F and G are two covariant tensors of orders m and n respectively (i.e. ${\displaystyle F\in T_{m}^{0}}$ and ${\displaystyle G\in T_{n}^{0}}$), then the components of their tensor product are given by[6]

${\displaystyle (F\otimes G)_{i_{1}i_{2}\cdots i_{m+n}}=F_{i_{1}i_{2}\cdots i_{m}}G_{i_{m+1}i_{m+2}\cdots i_{m+n}}.}$

Thus, the components of the tensor product of two tensors are the ordinary product of the components of each tensor. Another example: let U be a tensor of type (1, 1) with components ${\displaystyle U_{\beta }^{\alpha },}$ and let V be a tensor of type ${\displaystyle (1,0)}$ with components ${\displaystyle V^{\gamma }.}$ Then

${\displaystyle \left(U\otimes V\right)^{\alpha }{}_{\beta }{}^{\gamma }=U^{\alpha }{}_{\beta }V^{\gamma }}$

and

${\displaystyle (V\otimes U)^{\mu \nu }{}_{\sigma }=V^{\mu }U^{\nu }{}_{\sigma }.}$
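The component formula for ${\displaystyle U\otimes V}$ above amounts to plain entrywise multiplication, which a pure-Python sketch makes explicit (the sample components are made up):

```python
# Components of the product of a type-(1,1) tensor U and a type-(1,0)
# tensor V: (U ⊗ V)^α_β^γ = U^α_β V^γ.
U = [[1, 2],
     [3, 4]]                     # U[alpha][beta]
V = [5, -1]                      # V[gamma]

UV = [[[U[a][b] * V[g] for g in range(2)]
       for b in range(2)]
      for a in range(2)]

assert UV[1][0][1] == U[1][0] * V[1]   # each component is a plain product
```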

Tensors equipped with their product operation form an algebra, called the tensor algebra.

### Evaluation map and tensor contraction

For tensors of type (1, 1) there is a canonical evaluation map

${\displaystyle V\otimes V^{*}\to K}$

defined by its action on pure tensors:

${\displaystyle v\otimes f\mapsto f(v).}$

More generally, for tensors of type ${\displaystyle (r,s),}$ with r, s > 0, there is a map, called tensor contraction,

${\displaystyle T_{s}^{r}(V)\to T_{s-1}^{r-1}(V).}$

(The copies of ${\displaystyle V}$ and ${\displaystyle V^{*}}$ on which this map is to be applied must be specified.)

On the other hand, if ${\displaystyle V}$ is finite-dimensional, there is a canonical map in the other direction (called the coevaluation map)

${\displaystyle {\begin{cases}K\to V\otimes V^{*}\\\lambda \mapsto \sum _{i}\lambda v_{i}\otimes v_{i}^{*}\end{cases}}}$

where ${\displaystyle v_{1},\ldots ,v_{n}}$ is any basis of ${\displaystyle V,}$ and ${\displaystyle v_{i}^{*}}$ is its dual basis. This map does not depend on the choice of basis.[7]
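In coordinates, contracting a type-(1, 1) tensor is the matrix trace, and the coevaluation of ${\displaystyle \lambda }$ has the components of ${\displaystyle \lambda }$ times the identity matrix. A pure-Python sketch of both maps (helper names are ours):

```python
def contract(T):
    """Tensor contraction of a type-(1,1) tensor: sum over the paired
    upper/lower index; in coordinates this is the matrix trace."""
    return sum(T[i][i] for i in range(len(T)))

def coevaluation(lam, n):
    """λ ↦ λ · Σ_i v_i ⊗ v_i*, whose coordinates form λ times the
    n-by-n identity matrix (independent of the basis chosen)."""
    return [[lam if i == j else 0 for j in range(n)] for i in range(n)]

# Evaluation on a pure tensor v ⊗ f is f(v) = Σ_i f_i v_i; contracting
# the corresponding (1,1) array [v_i f_j] gives the same number:
v, f = [1, 2, 3], [4, 0, -1]
pure = [[vi * fj for fj in f] for vi in v]
assert contract(pure) == sum(fi * vi for fi, vi in zip(f, v))

# Evaluation after coevaluation multiplies λ by dim V:
assert contract(coevaluation(7, 3)) == 7 * 3
```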

The interplay of evaluation and coevaluation can be used to characterize finite-dimensional vector spaces without referring to bases.[8]

The tensor product ${\displaystyle T_{s}^{r}(V)}$ may be naturally viewed as a module for the Lie algebra ${\displaystyle \mathrm {End} (V)}$ by means of the diagonal action: for simplicity let us assume ${\displaystyle r=s=1,}$ then, for each ${\displaystyle u\in \mathrm {End} (V),}$

${\displaystyle u(a\otimes b)=u(a)\otimes b-a\otimes u^{*}(b),}$

where ${\displaystyle u^{*}\in \mathrm {End} \left(V^{*}\right)}$ is the transpose of u, that is, in terms of the obvious pairing on ${\displaystyle V\otimes V^{*},}$

${\displaystyle \langle u(a),b\rangle =\langle a,u^{*}(b)\rangle .}$

There is a canonical isomorphism ${\displaystyle T_{1}^{1}(V)\to \mathrm {End} (V)}$ given by

${\displaystyle (a\otimes b)(x)=\langle x,b\rangle a.}$

Under this isomorphism, every u in ${\displaystyle \mathrm {End} (V)}$ may be first viewed as an endomorphism of ${\displaystyle T_{1}^{1}(V)}$ and then viewed as an endomorphism of ${\displaystyle \mathrm {End} (V).}$ In fact it is the adjoint representation ad(u) of ${\displaystyle \mathrm {End} (V).}$

## Relation of tensor product to Hom

Given two finite dimensional vector spaces U, V over the same field K, denote the dual space of U as U*, and the K-vector space of all linear maps from U to V as Hom(U,V). There is an isomorphism,

${\displaystyle U^{*}\otimes V\cong \mathrm {Hom} (U,V),}$

defined by an action of the pure tensor ${\displaystyle f\otimes v\in U^{*}\otimes V}$ on an element of ${\displaystyle U,}$

${\displaystyle (f\otimes v)(u)=f(u)v.}$

Its "inverse" can be defined using a basis ${\displaystyle \{u_{i}\}}$ and its dual basis ${\displaystyle \{u_{i}^{*}\}}$ as in the section "Evaluation map and tensor contraction" above:

${\displaystyle {\begin{cases}\mathrm {Hom} (U,V)\to U^{*}\otimes V\\F\mapsto \sum _{i}u_{i}^{*}\otimes F(u_{i}).\end{cases}}}$

This result implies

${\displaystyle \dim(U\otimes V)=\dim(U)\dim(V),}$

which automatically gives the important fact that ${\displaystyle \{u_{i}\otimes v_{j}\}}$ forms a basis for ${\displaystyle U\otimes V,}$ where ${\displaystyle \{u_{i}\},\{v_{j}\}}$ are bases of U and V.
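The identification of ${\displaystyle U^{*}\otimes V}$ with ${\displaystyle \mathrm {Hom} (U,V)}$ can be made concrete with matrices. The following NumPy sketch (illustrative, with the standard bases as an assumption) represents a pure tensor ${\displaystyle f\otimes v}$ as the rank-one matrix ${\displaystyle vf^{T}}$ and checks the "inverse" formula ${\displaystyle F\mapsto \sum _{i}u_{i}^{*}\otimes F(u_{i})}$:

```python
import numpy as np

# A pure tensor f ⊗ v (f a functional on U, v in V) acts by (f ⊗ v)(u) = f(u) v,
# so its matrix in Hom(U, V) is the rank-one outer product v f^T.
f = np.array([1.0, 2.0, 3.0])   # functional on U = R^3 (row of coefficients)
v = np.array([4.0, 5.0])        # vector in V = R^2
M = np.outer(v, f)              # matrix of f ⊗ v

u = np.array([1.0, 0.0, -1.0])
assert np.allclose(M @ u, (f @ u) * v)   # (f ⊗ v)(u) = f(u) v

# The inverse direction: any F in Hom(U, V) equals the sum of the pure
# tensors u_i^* ⊗ F(u_i) over a basis {u_i}; with the standard basis this
# reassembles F column by column.
F = np.arange(6.0).reshape(2, 3)
basis = np.eye(3)
F_rebuilt = sum(np.outer(F @ basis[i], basis[i]) for i in range(3))
assert np.allclose(F, F_rebuilt)
```

The dimension count also checks out here: Hom(R³, R²) has dimension 6 = 3 · 2.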

Furthermore, given three vector spaces U, V, W, the tensor product is linked to the vector space of all linear maps, as follows:

${\displaystyle \mathrm {Hom} (U\otimes V,W)\cong \mathrm {Hom} (U,\mathrm {Hom} (V,W)).}$

This is an example of adjoint functors: the tensor product is "left adjoint" to Hom.
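This adjunction is the linear-algebra version of "currying": a map out of ${\displaystyle U\otimes V}$ corresponds to a map from U into maps from V. A minimal sketch (the 3-index array encoding of a bilinear map is my choice of representation, not from the source):

```python
import numpy as np

# Encode h in Hom(U ⊗ V, W) by a 3-index array T with
# h(u ⊗ v)_k = sum_{ij} T[k, i, j] u[i] v[j].
def curry(T):
    """Send h in Hom(U ⊗ V, W) to the map u ↦ (v ↦ h(u ⊗ v)) in Hom(U, Hom(V, W))."""
    return lambda u: (lambda v: np.einsum('kij,i,j->k', T, u, v))

T = np.arange(24.0).reshape(2, 3, 4)   # W = R^2, U = R^3, V = R^4
u, v = np.ones(3), np.ones(4)
direct = np.einsum('kij,i,j->k', T, u, v)   # apply h to u ⊗ v directly
curried = curry(T)(u)(v)                    # apply the curried map in two stages
assert np.allclose(direct, curried)
```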

## Tensor products of modules over a ring

The tensor product of two modules A and B over a commutative ring R is defined in exactly the same way as the tensor product of vector spaces over a field:

${\displaystyle A\otimes _{R}B:=F(A\times B)/G}$

where now ${\displaystyle F(A\times B)}$ is the free R-module generated by the cartesian product and G is the R-module generated by the same relations as above.

More generally, the tensor product can be defined even if the ring is non-commutative. In this case A has to be a right R-module and B a left R-module, and instead of the last two relations above, the relation

${\displaystyle (ar,b)-(a,rb)}$

is imposed. If R is non-commutative, this is no longer an R-module, but just an abelian group.

The universal property also carries over, slightly modified: the map ${\displaystyle \varphi :A\times B\to A\otimes _{R}B}$ defined by ${\displaystyle (a,b)\mapsto a\otimes b}$ is a middle linear map (referred to as "the canonical middle linear map"); [9] that is, it satisfies: [10]

{\displaystyle {\begin{aligned}\varphi (a+a',b)&=\varphi (a,b)+\varphi (a',b)\\\varphi (a,b+b')&=\varphi (a,b)+\varphi (a,b')\\\varphi (ar,b)&=\varphi (a,rb)\end{aligned}}}

The first two properties make ${\displaystyle \varphi }$ a bilinear map of the abelian group ${\displaystyle A\times B.}$ For any middle linear map ${\displaystyle \psi }$ of ${\displaystyle A\times B,}$ a unique group homomorphism f of ${\displaystyle A\otimes _{R}B}$ satisfies ${\displaystyle \psi =f\circ \varphi ,}$ and this property determines ${\displaystyle A\otimes _{R}B}$ up to group isomorphism. See the main article for details.

### Tensor product of modules over a non-commutative ring

Let A be a right R-module and B be a left R-module. Then the tensor product of A and B is an abelian group defined by

${\displaystyle A\otimes _{R}B:=F(A\times B)/G}$

where ${\displaystyle F(A\times B)}$ is a free abelian group over ${\displaystyle A\times B}$ and G is the subgroup of ${\displaystyle F(A\times B)}$ generated by relations

{\displaystyle {\begin{aligned}&\forall a,a_{1},a_{2}\in A,\forall b,b_{1},b_{2}\in B,{\text{ for all }}r\in R:\\&(a_{1},b)+(a_{2},b)-(a_{1}+a_{2},b),\\&(a,b_{1})+(a,b_{2})-(a,b_{1}+b_{2}),\\&(ar,b)-(a,rb).\\\end{aligned}}}

The universal property can be stated as follows. Let C be an abelian group (written C to avoid a clash with the subgroup G above) with a map ${\displaystyle q:A\times B\to C}$ that is bilinear, in the sense that

{\displaystyle {\begin{aligned}q(a_{1}+a_{2},b)&=q(a_{1},b)+q(a_{2},b),\\q(a,b_{1}+b_{2})&=q(a,b_{1})+q(a,b_{2}),\\q(ar,b)&=q(a,rb).\end{aligned}}}

Then there is a unique map ${\displaystyle {\overline {q}}:A\otimes _{R}B\to C}$ such that ${\displaystyle {\overline {q}}(a\otimes b)=q(a,b)}$ for all ${\displaystyle a\in A}$ and ${\displaystyle b\in B.}$

Furthermore, we can give ${\displaystyle A\otimes _{R}B}$ a module structure under some extra conditions:

1. If A is an (S,R)-bimodule, then ${\displaystyle A\otimes _{R}B}$ is a left S-module, where ${\displaystyle s(a\otimes b):=(sa)\otimes b.}$
2. If B is an (R,S)-bimodule, then ${\displaystyle A\otimes _{R}B}$ is a right S-module, where ${\displaystyle (a\otimes b)s:=a\otimes (bs).}$
3. If A is an (S,R)-bimodule and B is an (R,T)-bimodule, then ${\displaystyle A\otimes _{R}B}$ is an (S,T)-bimodule, where the left and right actions are defined in the same way as the previous two examples.
4. If R is a commutative ring, then A and B are (R,R)-bimodules, where ${\displaystyle ra:=ar}$ and ${\displaystyle br:=rb.}$ By 3), we can conclude that ${\displaystyle A\otimes _{R}B}$ is an (R,R)-bimodule.

### Computing the tensor product

For vector spaces, the tensor product ${\displaystyle V\otimes W}$ is quickly computed since bases of V and W immediately determine a basis of ${\displaystyle V\otimes W,}$ as was mentioned above. For modules over a general (commutative) ring, not every module is free. For example, Z/nZ is not a free abelian group (Z-module). The tensor product with Z/nZ is given by

${\displaystyle M\otimes _{\mathbf {Z} }\mathbf {Z} /n\mathbf {Z} =M/nM.}$
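As a concrete instance of this formula (an illustrative sketch, not from the source): taking M = Z/mZ, the quotient (Z/mZ)/n(Z/mZ) is cyclic of order gcd(m, n), so the tensor product of two finite cyclic groups is again cyclic:

```python
from math import gcd

# For M = Z/mZ the formula M ⊗ Z/nZ = M/nM gives a cyclic group of order
# gcd(m, n): the subgroup n·(Z/mZ) has index gcd(m, n) in Z/mZ.
def cyclic_tensor_order(m, n):
    """Order of (Z/mZ) ⊗_Z (Z/nZ), computed as gcd(m, n)."""
    return gcd(m, n)

# Cross-check against the definition: n·(Z/mZ) = {n·k mod m} has
# m // gcd(m, n) elements, so the quotient M/nM has gcd(m, n) elements.
def quotient_order(m, n):
    subgroup = {(n * k) % m for k in range(m)}
    return m // len(subgroup)

for m, n in [(4, 6), (5, 7), (12, 18)]:
    assert cyclic_tensor_order(m, n) == quotient_order(m, n)
```

For coprime m and n the tensor product is therefore the zero group, e.g. (Z/5Z) ⊗ (Z/7Z) = 0.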

More generally, given a presentation of some R-module M, that is, a number of generators ${\displaystyle m_{i}\in M,i\in I}$ together with relations

${\displaystyle \sum _{i\in I}a_{ji}m_{i}=0,\qquad a_{ji}\in R,\quad j\in J,}$

the tensor product can be computed as the following cokernel:

${\displaystyle M\otimes _{R}N=\operatorname {coker} \left(N^{J}\to N^{I}\right)}$

Here ${\displaystyle N^{J}=\oplus _{j\in J}N,}$ and the map ${\displaystyle N^{J}\to N^{I}}$ is determined by sending some ${\displaystyle n\in N}$ in the jth copy of ${\displaystyle N^{J}}$ to ${\displaystyle a_{ji}n}$ (in ${\displaystyle N^{I}}$). Colloquially, this may be rephrased by saying that a presentation of M gives rise to a presentation of ${\displaystyle M\otimes _{R}N.}$ This is referred to by saying that the tensor product is a right exact functor. It is not in general left exact, that is, given an injective map of R-modules ${\displaystyle M_{1}\to M_{2},}$ the tensor product

${\displaystyle M_{1}\otimes _{R}N\to M_{2}\otimes _{R}N}$

is not usually injective. For example, tensoring the (injective) map given by multiplication with n, ${\displaystyle n:\mathbf {Z} \to \mathbf {Z} ,}$ with Z/nZ yields the zero map ${\displaystyle 0:\mathbf {Z} /n\mathbf {Z} \to \mathbf {Z} /n\mathbf {Z} ,}$ which is not injective. Higher Tor functors measure the defect of the tensor product not being left exact. All higher Tor functors are assembled in the derived tensor product.
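The failure of injectivity in this example can be seen by direct computation (a tiny illustrative check, with n = 5 as an arbitrary choice):

```python
# Tensoring the injective map n : Z → Z with Z/nZ induces the map
# x ↦ n·x on Z/nZ.  Modulo n this is the zero map, so injectivity is lost.
n = 5
induced = [(n * x) % n for x in range(n)]   # image of each residue class
assert induced == [0] * n                    # every class maps to 0
```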

## Tensor product of algebras

Let R be a commutative ring. The tensor product of R-modules applies, in particular, if A and B are R-algebras. In this case, the tensor product ${\displaystyle A\otimes _{R}B}$ is an R-algebra itself by putting

${\displaystyle (a_{1}\otimes b_{1})\cdot (a_{2}\otimes b_{2})=(a_{1}\cdot a_{2})\otimes (b_{1}\cdot b_{2}).}$

For example,

${\displaystyle R[x]\otimes _{R}R[y]\cong R[x,y].}$
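Under this isomorphism, the pure tensor ${\displaystyle p(x)\otimes q(y)}$ corresponds to the two-variable polynomial ${\displaystyle p(x)q(y),}$ and its coefficient grid is the outer product of the coefficient vectors, the same Kronecker pattern as for vectors. A small NumPy sketch (illustrative; the particular p and q are arbitrary choices):

```python
import numpy as np

p = np.array([1.0, 2.0])        # p(x) = 1 + 2x
q = np.array([3.0, 0.0, 4.0])   # q(y) = 3 + 4y^2
coeffs = np.outer(p, q)          # coeffs[i, j] is the coefficient of x^i y^j in p(x)q(y)

# Evaluate the two-variable polynomial and compare with p(x)·q(y) directly.
x, y = 2.0, 3.0
value = sum(coeffs[i, j] * x**i * y**j
            for i in range(2) for j in range(3))
assert np.isclose(value, (1 + 2*x) * (3 + 4*y**2))
```

Sums of such rank-one grids give all of R[x, y] (up to degree bounds), mirroring how sums of pure tensors fill out the tensor product.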

A particular example is when A and B are fields containing a common subfield R. The tensor product of fields is closely related to Galois theory: if, say, A = R[x] / f(x), where f is some irreducible polynomial with coefficients in R, the tensor product can be calculated as

${\displaystyle A\otimes _{R}B\cong B[x]/f(x)}$

where now f is interpreted as the same polynomial, but with its coefficients regarded as elements of B. In the larger field B, the polynomial may become reducible, which brings in Galois theory. For example, if A = B is a Galois extension of R, then

${\displaystyle A\otimes _{R}A\cong A[x]/f(x)}$

is isomorphic, as an A-algebra, to ${\displaystyle A^{\operatorname {deg} (f)}.}$
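For instance, with R = Q and A = Q(i) = Q[x]/(x² + 1), the polynomial x² + 1 splits into two linear factors over Q(i), so Q(i) ⊗_Q Q(i) ≅ Q(i) × Q(i). SymPy can exhibit the splitting (a small illustrative check):

```python
from sympy import symbols, factor, I

# f(x) = x^2 + 1 is irreducible over Q but splits over B = Q(i);
# `extension=I` tells SymPy to factor over the field Q(i).
x = symbols('x')
split = factor(x**2 + 1, extension=I)
assert split == (x - I) * (x + I)   # two linear factors, one per copy of Q(i)
```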

## Eigenconfigurations of tensors

Square matrices ${\displaystyle A}$ with entries in a field ${\displaystyle K}$ represent linear maps of vector spaces, say ${\displaystyle K^{n}\to K^{n},}$ and thus linear maps ${\displaystyle \psi :\mathbb {P} ^{n-1}\to \mathbb {P} ^{n-1}}$ of projective spaces over ${\displaystyle K.}$ If ${\displaystyle A}$ is nonsingular then ${\displaystyle \psi }$ is well-defined everywhere, and the eigenvectors of ${\displaystyle A}$ correspond to the fixed points of ${\displaystyle \psi .}$ The eigenconfiguration of ${\displaystyle A}$ consists of ${\displaystyle n}$ points in ${\displaystyle \mathbb {P} ^{n-1},}$ provided ${\displaystyle A}$ is generic and ${\displaystyle K}$ is algebraically closed. The fixed points of nonlinear maps are the eigenvectors of tensors. Let ${\displaystyle A=(a_{i_{1}i_{2}\cdots i_{d}})}$ be a ${\displaystyle d}$-dimensional tensor of format ${\displaystyle n\times n\times \cdots \times n}$ with entries lying in an algebraically closed field ${\displaystyle K}$ of characteristic zero. Such a tensor ${\displaystyle A\in (K^{n})^{\otimes d}}$ defines polynomial maps ${\displaystyle K^{n}\to K^{n}}$ and ${\displaystyle \mathbb {P} ^{n-1}\to \mathbb {P} ^{n-1}}$ with coordinates

${\displaystyle \psi _{i}(x_{1},\ldots ,x_{n})=\sum _{j_{2}=1}^{n}\sum _{j_{3}=1}^{n}\cdots \sum _{j_{d}=1}^{n}a_{ij_{2}j_{3}\cdots j_{d}}x_{j_{2}}x_{j_{3}}\cdots x_{j_{d}}\;\;{\mbox{for }}i=1,\ldots ,n}$

Thus each of the ${\displaystyle n}$ coordinates of ${\displaystyle \psi }$ is a homogeneous polynomial ${\displaystyle \psi _{i}}$ of degree ${\displaystyle d-1}$ in ${\displaystyle \mathbf {x} =\left(x_{1},\ldots ,x_{n}\right).}$ The eigenvectors of ${\displaystyle A}$ are the solutions of the constraint

${\displaystyle {\mbox{rank}}{\begin{pmatrix}x_{1}&x_{2}&\cdots &x_{n}\\\psi _{1}(\mathbf {x} )&\psi _{2}(\mathbf {x} )&\cdots &\psi _{n}(\mathbf {x} )\end{pmatrix}}\leq 1}$

and the eigenconfiguration is given by the variety of the ${\displaystyle 2\times 2}$ minors of this matrix. [11]
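The rank condition can be checked numerically. Below is a minimal sketch for a 2×2×2 tensor (d = 3, n = 2); the "diagonal" tensor with ${\displaystyle a_{iii}=1}$ is my own illustrative choice, for which ${\displaystyle \psi _{i}(x)=x_{i}^{2}}$ and the single 2×2 minor is ${\displaystyle x_{1}x_{2}^{2}-x_{2}x_{1}^{2}}$:

```python
import numpy as np

# Diagonal 2×2×2 tensor: a_{000} = a_{111} = 1, all other entries 0.
A = np.zeros((2, 2, 2))
A[0, 0, 0] = A[1, 1, 1] = 1.0

def psi(x):
    # ψ_i(x) = Σ_{jk} a_{ijk} x_j x_k, the degree-(d-1) coordinate maps
    return np.einsum('ijk,j,k->i', A, x, x)

def is_eigenvector(x):
    # rank ≤ 1 of the 2×2 matrix [[x_1, x_2], [ψ_1, ψ_2]] ⇔ its
    # single 2×2 minor (the determinant) vanishes
    minor = x[0] * psi(x)[1] - x[1] * psi(x)[0]
    return np.isclose(minor, 0.0)

assert is_eigenvector(np.array([1.0, 1.0]))   # on the diagonal x_1 = x_2
assert is_eigenvector(np.array([1.0, 0.0]))   # coordinate axis
assert not is_eigenvector(np.array([1.0, 2.0]))
```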

## Other examples of tensor products

### Tensor product of Hilbert spaces

Hilbert spaces generalize finite-dimensional vector spaces to countably-infinite dimensions. The tensor product is still defined; it is the tensor product of Hilbert spaces.

### Topological tensor product

When the basis for a vector space is no longer countable, the appropriate axiomatic formalization of the vector space is that of a topological vector space. The tensor product is still defined; it is the topological tensor product.

### Tensor product of graded vector spaces

Some vector spaces can be decomposed into direct sums of subspaces. In such cases, the tensor product of two spaces can be decomposed into sums of products of the subspaces (in analogy to the way that multiplication distributes over addition).

### Tensor product of representations

Vector spaces endowed with a compatible linear group action are called representations. The tensor product of two representations is again a representation of the group; for representations of the general linear groups, the decomposition of such a tensor product into irreducible representations is described by the Littlewood–Richardson rule.

### Tensor product of multilinear forms

Given two multilinear forms ${\displaystyle f(x_{1},\dots ,x_{k})}$ and ${\displaystyle g(x_{1},\dots ,x_{m})}$ on a vector space ${\displaystyle V}$ over the field ${\displaystyle K,}$ their tensor product is the multilinear form

${\displaystyle (f\otimes g)(x_{1},\dots ,x_{k+m})=f(x_{1},\dots ,x_{k})g(x_{k+1},\dots ,x_{k+m}).}$ [12]

This is a special case of the product of tensors if they are seen as multilinear maps (see also tensors as multilinear maps). Thus the components of the tensor product of multilinear forms can be computed by the Kronecker product.
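The Kronecker description can be verified numerically for two bilinear forms given by matrices (an illustrative sketch; F and G are arbitrary choices):

```python
import numpy as np

# Bilinear forms on R^2 given by matrices: f(x1, x2) = x1^T F x2, likewise g.
# Their tensor product is the 4-linear form
# (f ⊗ g)(x1, x2, x3, x4) = f(x1, x2) g(x3, x4),
# whose component array is the outer product of the component arrays.
F = np.array([[1.0, 2.0], [3.0, 4.0]])
G = np.array([[0.0, 1.0], [1.0, 0.0]])
T = np.einsum('ij,kl->ijkl', F, G)       # components of f ⊗ g

rng = np.random.default_rng(0)
x1, x2, x3, x4 = rng.normal(size=(4, 2))
lhs = np.einsum('ijkl,i,j,k,l->', T, x1, x2, x3, x4)
rhs = (x1 @ F @ x2) * (x3 @ G @ x4)
assert np.isclose(lhs, rhs)

# Grouping the indices pairwise flattens the component array to np.kron(F, G).
assert np.allclose(T.transpose(0, 2, 1, 3).reshape(4, 4), np.kron(F, G))
```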

### Tensor product of graphs

Though it is called the "tensor product", the tensor product of graphs is not a tensor product in the sense above; rather, it is the category-theoretic product in the category of graphs and graph homomorphisms. However, its adjacency matrix is the Kronecker tensor product of the adjacency matrices of the factor graphs. Compare also the section Tensor product of linear maps above.
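A small NumPy check of the Kronecker description (illustrative; the two example graphs are arbitrary choices): vertices (u1, u2) and (v1, v2) are adjacent in the product exactly when u1 ~ v1 in the first graph and u2 ~ v2 in the second.

```python
import numpy as np

A_G = np.array([[0, 1], [1, 0]])                   # G: a single edge (K2)
A_H = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # H: a path on 3 vertices
A_prod = np.kron(A_G, A_H)                         # adjacency matrix of G × H

# Check the adjacency rule on every pair of product vertices, where the
# product vertex (u1, u2) gets index 3*u1 + u2 under the Kronecker layout.
for u1 in range(2):
    for u2 in range(3):
        for v1 in range(2):
            for v2 in range(3):
                expected = A_G[u1, v1] * A_H[u2, v2]
                assert A_prod[3*u1 + u2, 3*v1 + v2] == expected
```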

### Monoidal categories

The most general setting for the tensor product is the monoidal category. It captures the algebraic essence of tensoring, without making any specific reference to what is being tensored. Thus, all tensor products can be expressed as an application of the monoidal category to some particular setting, acting on some particular objects.

## Quotient algebras

A number of important subspaces of the tensor algebra can be constructed as quotients: these include the exterior algebra, the symmetric algebra, the Clifford algebra, the Weyl algebra, and the universal enveloping algebra in general.

The exterior algebra is constructed from the exterior product. Given a vector space V, the exterior product ${\displaystyle V\wedge V}$ is defined as

${\displaystyle V\wedge V:=V\otimes V/\{v\otimes v\mid v\in V\}.}$

Note that when the underlying field of V does not have characteristic 2, then this definition is equivalent to

${\displaystyle V\wedge V:=V\otimes V/\{v_{1}\otimes v_{2}+v_{2}\otimes v_{1}\mid (v_{1},v_{2})\in V^{2}\}.}$

The image of ${\displaystyle v_{1}\otimes v_{2}}$ in the exterior product is usually denoted ${\displaystyle v_{1}\wedge v_{2}}$ and satisfies, by construction, ${\displaystyle v_{1}\wedge v_{2}=-v_{2}\wedge v_{1}.}$ Similar constructions are possible for ${\displaystyle V\otimes \dots \otimes V}$ (n factors), giving rise to ${\displaystyle \Lambda ^{n}V,}$ the nth exterior power of V. The latter notion is the basis of differential n-forms.
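In coordinates, the image of ${\displaystyle v_{1}\otimes v_{2}}$ in ${\displaystyle V\wedge V}$ can be represented by the antisymmetric part of the outer product, which makes the sign rule visible directly (a minimal sketch; this is one common convention, and normalizations differ by a factor of 1/2):

```python
import numpy as np

def wedge(v1, v2):
    # Antisymmetrized outer product, representing v1 ∧ v2 in coordinates
    return np.outer(v1, v2) - np.outer(v2, v1)

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 3.0, 1.0])
assert np.allclose(wedge(v1, v2), -wedge(v2, v1))   # v1 ∧ v2 = -v2 ∧ v1
assert np.allclose(wedge(v1, v1), 0.0)              # v ∧ v = 0
```

In three dimensions the independent entries of this antisymmetric matrix are, up to sign, the components of the cross product v1 × v2.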

The symmetric algebra is constructed in a similar manner, from the symmetric product

${\displaystyle V\odot V:=V\otimes V/\{v_{1}\otimes v_{2}-v_{2}\otimes v_{1}\mid (v_{1},v_{2})\in V^{2}\}.}$

More generally

${\displaystyle \operatorname {Sym} ^{n}V:=\underbrace {V\otimes \dots \otimes V} _{n}/(\dots \otimes v_{i}\otimes v_{i+1}\otimes \dots -\dots \otimes v_{i+1}\otimes v_{i}\otimes \dots )}$

That is, in the symmetric algebra two adjacent vectors (and therefore all of them) can be interchanged. The resulting objects are called symmetric tensors.
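Dually to the exterior case, the image of ${\displaystyle v_{1}\otimes v_{2}}$ in the symmetric product can be represented by the symmetric part of the outer product (again a sketch, up to a conventional normalization):

```python
import numpy as np

def sym(v1, v2):
    # Symmetrized outer product, representing v1 ⊙ v2 in coordinates
    return np.outer(v1, v2) + np.outer(v2, v1)

v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, 4.0])
S = sym(v1, v2)
assert np.allclose(S, sym(v2, v1))   # v1 ⊙ v2 = v2 ⊙ v1
assert np.allclose(S, S.T)           # components form a symmetric matrix
```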

## Tensor product in programming

### Array programming languages

Array programming languages may have this pattern built in. For example, in APL the tensor product is expressed as ○.× (for example A ○.× B or A ○.× B ○.× C). In J the tensor product is the dyadic form of */ (for example a */ b or a */ b */ c).

Note that J's treatment also allows the representation of some tensor fields, as a and b may be functions instead of constants. This product of two functions is a derived function, and if a and b are differentiable, then a */ b is differentiable.

However, these kinds of notation are not universally present in array languages. Other array languages may require explicit treatment of indices (for example, MATLAB), and/or may not support higher-order functions such as the Jacobian derivative (for example, Fortran/APL).
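For readers without APL or J at hand, the same outer-product pattern is available in NumPy via `np.multiply.outer` (an illustrative translation, not mentioned in the source), and it chains to higher-order tensors just like `A ○.× B ○.× C`:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([10, 20])
c = np.array([1, -1])

ab = np.multiply.outer(a, b)     # 3×2 outer product, (v ⊗ w)_{ij} = v_i w_j
abc = np.multiply.outer(ab, c)   # chaining gives a 3×2×2 third-order tensor
assert ab.shape == (3, 2) and abc.shape == (3, 2, 2)
assert ab[1, 0] == a[1] * b[0]                 # componentwise check
assert abc[2, 1, 1] == a[2] * b[1] * c[1]
```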

## Notes

1. This is similar to how the engineering use of "${\displaystyle ({\bmod {n}})}$" specifically returns the remainder, which is one of the many elements of the ${\displaystyle ({\bmod {n}})}$ equivalence class.
2. Hazewinkel, Michiel; Gubareni, Nadezhda Mikhaĭlovna; Gubareni, Nadiya; Kirichenko, Vladimir V. (2004). Algebras, rings and modules. Springer. p. 100. ISBN 978-1-4020-2690-4.
3. "Archived copy". Archived from the original on 2017-09-02. Retrieved 2017-09-02.
4. Bourbaki (1989), p. 244 defines the usage "tensor product of x and y", elements of the respective modules.
5. Analogous formulas also hold for contravariant tensors, as well as tensors of mixed variance. Although in many cases, such as when an inner product is defined, the distinction is irrelevant.
6. "The Coevaluation on Vector Spaces". The Unapologetic Mathematician. 2008-11-13. Archived from the original on 2017-02-02. Retrieved 2017-01-26.
7. Hungerford, Thomas W. (1974). Algebra. Springer. ISBN 0-387-90518-9.
8. Chen, Jungkai Alfred (Spring 2004), "Tensor product" (PDF), Advanced Algebra II (lecture notes), National Taiwan University, archived (PDF) from the original on 2016-03-04.
9. Abo, H.; Seigal, A.; Sturmfels, B. (2015). "Eigenconfigurations of Tensors". arXiv: [math.AG].
10. Tu, L. W. (2010). An Introduction to Manifolds. Universitext. Springer. p. 25. ISBN 978-1-4419-7399-3.
