# Dot product

In mathematics, the dot product or scalar product [note 1] is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors), and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used. It is often called "the" inner product (or rarely projection product) of Euclidean space, even though it is not the only inner product that can be defined on Euclidean space (see Inner product space for more).

Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. These definitions are equivalent when using Cartesian coordinates. In modern geometry, Euclidean spaces are often defined by using vector spaces. In this case, the dot product is used for defining lengths (the length of a vector is the square root of the dot product of the vector by itself) and angles (the cosine of the angle of two vectors is the quotient of their dot product by the product of their lengths).

The name "dot product" is derived from the centered dot " · ", that is often used to designate this operation; [1] [2] the alternative name "scalar product" emphasizes that the result is a scalar, rather than a vector, as is the case for the vector product in three-dimensional space.

## Definition

The dot product may be defined algebraically or geometrically. The geometric definition is based on the notions of angle and distance (magnitude of vectors). The equivalence of these two definitions relies on having a Cartesian coordinate system for Euclidean space.

In modern presentations of Euclidean geometry, the points of space are defined in terms of their Cartesian coordinates, and Euclidean space itself is commonly identified with the real coordinate space Rn. In such a presentation, the notions of length and angles are defined by means of the dot product. The length of a vector is defined as the square root of the dot product of the vector by itself, and the cosine of the (non oriented) angle of two vectors of length one is defined as their dot product. So the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry.

### Algebraic definition

The dot product of two vectors a = [a1, a2, …, an] and b = [b1, b2, …, bn] is defined as: [3]

${\displaystyle \mathbf {\color {red}a} \cdot \mathbf {\color {blue}b} =\sum _{i=1}^{n}{\color {red}a}_{i}{\color {blue}b}_{i}={\color {red}a}_{1}{\color {blue}b}_{1}+{\color {red}a}_{2}{\color {blue}b}_{2}+\cdots +{\color {red}a}_{n}{\color {blue}b}_{n}}$

where Σ denotes summation and n is the dimension of the vector space. For instance, in three-dimensional space, the dot product of vectors [1, 3, −5] and [4, −2, −1] is:

{\displaystyle {\begin{aligned}\ [{\color {red}1,3,-5}]\cdot [{\color {blue}4,-2,-1}]&=({\color {red}1}\times {\color {blue}4})+({\color {red}3}\times {\color {blue}-2})+({\color {red}-5}\times {\color {blue}-1})\\&=4-6+5\\&=3\end{aligned}}}
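The componentwise computation above can be sketched in plain Python (a minimal illustration; `dot` is an ad-hoc helper here, not a library function):

```python
# Dot product as the sum of the products of corresponding entries
# (the algebraic definition).
def dot(a, b):
    assert len(a) == len(b), "vectors must have equal length"
    return sum(x * y for x, y in zip(a, b))

print(dot([1, 3, -5], [4, -2, -1]))  # 1*4 + 3*(-2) + (-5)*(-1) = 3
```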

If vectors are identified with row matrices, the dot product can also be written as a matrix product

${\displaystyle \mathbf {\color {red}a} \cdot \mathbf {\color {blue}b} =\mathbf {\color {red}a} \mathbf {\color {blue}b} ^{\mathsf {T}},}$

where ${\displaystyle \mathbf {\color {blue}b} ^{\mathsf {T}}}$ denotes the transpose of ${\displaystyle \mathbf {\color {blue}b} }$.

Expressing the above example in this way, a 1 × 3 matrix (row vector) is multiplied by a 3 × 1 matrix (column vector) to get a 1 × 1 matrix that is identified with its unique entry:

${\displaystyle {\begin{bmatrix}\color {red}1&\color {red}3&\color {red}-5\end{bmatrix}}{\begin{bmatrix}\color {blue}4\\\color {blue}-2\\\color {blue}-1\end{bmatrix}}=\color {purple}3}$.
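The same row-times-column computation can be checked with NumPy (assuming NumPy is available), where the 1 × 1 result is extracted with `.item()`:

```python
import numpy as np

row = np.array([[1, 3, -5]])       # 1 x 3 matrix (row vector)
col = np.array([[4], [-2], [-1]])  # 3 x 1 matrix (column vector)
product = row @ col                # 1 x 1 matrix
print(product.item())              # 3
```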

### Geometric definition

In Euclidean space, a Euclidean vector is a geometric object that possesses both a magnitude and a direction. A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction to which the arrow points. The magnitude of a vector a is denoted by ${\displaystyle \left\|\mathbf {a} \right\|}$. The dot product of two Euclidean vectors a and b is defined by [4] [5] [2]

${\displaystyle \mathbf {a} \cdot \mathbf {b} =\|\mathbf {a} \|\ \|\mathbf {b} \|\cos \theta ,}$

where θ is the angle between a and b.

In particular, if the vectors a and b are orthogonal (i.e., their angle is π / 2 or 90°), then ${\displaystyle \cos {\frac {\pi }{2}}=0}$, which implies that

${\displaystyle \mathbf {a} \cdot \mathbf {b} =0.}$

At the other extreme, if they are codirectional, then the angle between them is zero with ${\displaystyle \cos 0=1}$ and

${\displaystyle \mathbf {a} \cdot \mathbf {b} =\left\|\mathbf {a} \right\|\,\left\|\mathbf {b} \right\|}$

This implies that the dot product of a vector a with itself is

${\displaystyle \mathbf {a} \cdot \mathbf {a} =\left\|\mathbf {a} \right\|^{2},}$

which gives

${\displaystyle \left\|\mathbf {a} \right\|={\sqrt {\mathbf {a} \cdot \mathbf {a} }},}$

the formula for the Euclidean length of the vector.
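The geometric relations above can be inverted to recover the angle from the dot product; a minimal sketch (the helper names `norm` and `angle` are ad hoc):

```python
import math

def norm(v):
    # Euclidean length: sqrt of the dot product of v with itself.
    return math.sqrt(sum(x * x for x in v))

def angle(a, b):
    # theta = arccos( (a . b) / (||a|| ||b||) )
    d = sum(x * y for x, y in zip(a, b))
    return math.acos(d / (norm(a) * norm(b)))

# Orthogonal vectors have dot product 0 and angle pi/2:
print(angle([1, 0, 0], [0, 1, 0]))  # ≈ 1.5707963 (π/2)
```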

### Scalar projection and first properties

The scalar projection (or scalar component) of a Euclidean vector a in the direction of a Euclidean vector b is given by

${\displaystyle a_{b}=\left\|\mathbf {a} \right\|\cos \theta ,}$

where θ is the angle between a and b.

In terms of the geometric definition of the dot product, this can be rewritten

${\displaystyle a_{b}=\mathbf {a} \cdot {\widehat {\mathbf {b} }},}$

where ${\displaystyle {\widehat {\mathbf {b} }}=\mathbf {b} /\left\|\mathbf {b} \right\|}$ is the unit vector in the direction of b.

The dot product is thus characterized geometrically by [6]

${\displaystyle \mathbf {a} \cdot \mathbf {b} =a_{b}\left\|\mathbf {b} \right\|=b_{a}\left\|\mathbf {a} \right\|.}$
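As a small numerical sketch of the formula a_b = a · b̂ (the function name is ad hoc):

```python
import math

def scalar_projection(a, b):
    # a_b = a . (b / ||b||): the component of a in the direction of b.
    norm_b = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / norm_b

# Projecting [3, 4] onto the x-axis direction picks out the x-component:
print(scalar_projection([3, 4], [1, 0]))  # 3.0
```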

The dot product, defined in this manner, is homogeneous under scaling in each variable, meaning that for any scalar α,

${\displaystyle (\alpha \mathbf {a} )\cdot \mathbf {b} =\alpha (\mathbf {a} \cdot \mathbf {b} )=\mathbf {a} \cdot (\alpha \mathbf {b} ).}$

It also satisfies a distributive law, meaning that

${\displaystyle \mathbf {a} \cdot (\mathbf {b} +\mathbf {c} )=\mathbf {a} \cdot \mathbf {b} +\mathbf {a} \cdot \mathbf {c} .}$

These properties may be summarized by saying that the dot product is a bilinear form. Moreover, this bilinear form is positive definite, which means that ${\displaystyle \mathbf {a} \cdot \mathbf {a} }$ is never negative, and is zero if and only if ${\displaystyle \mathbf {a} =\mathbf {0} }$—the zero vector.

The dot product is thus equivalent to multiplying the norm (length) of b by the norm of the projection of a onto b.

### Equivalence of the definitions

If e1, ..., en are the standard basis vectors in Rn, then we may write

${\displaystyle {\begin{aligned}\mathbf {a} &=[a_{1},\dots ,a_{n}]=\sum _{i}a_{i}\mathbf {e} _{i}\\\mathbf {b} &=[b_{1},\dots ,b_{n}]=\sum _{i}b_{i}\mathbf {e} _{i}.\end{aligned}}}$

The vectors ei form an orthonormal basis, which means that they have unit length and are at right angles to each other. Since these vectors have unit length,

${\displaystyle \mathbf {e} _{i}\cdot \mathbf {e} _{i}=1}$

and since they form right angles with each other, if i ≠ j,

${\displaystyle \mathbf {e} _{i}\cdot \mathbf {e} _{j}=0.}$

Thus in general, we can say that:

${\displaystyle \mathbf {e} _{i}\cdot \mathbf {e} _{j}=\delta _{ij}.}$

where δij is the Kronecker delta.

Also, by the geometric definition, for any basis vector ei and any vector a,

${\displaystyle \mathbf {a} \cdot \mathbf {e} _{i}=\left\|\mathbf {a} \right\|\,\left\|\mathbf {e} _{i}\right\|\cos \theta _{i}=\left\|\mathbf {a} \right\|\cos \theta _{i}=a_{i},}$

where ai is the component of vector a in the direction of ei. The last step in the equality can be seen from the figure.

Now applying the distributivity of the geometric version of the dot product gives

${\displaystyle \mathbf {a} \cdot \mathbf {b} =\mathbf {a} \cdot \sum _{i}b_{i}\mathbf {e} _{i}=\sum _{i}b_{i}(\mathbf {a} \cdot \mathbf {e} _{i})=\sum _{i}b_{i}a_{i}=\sum _{i}a_{i}b_{i},}$

which is precisely the algebraic definition of the dot product. So the geometric dot product equals the algebraic dot product.

## Properties

The dot product fulfills the following properties if a, b, and c are real vectors and r is a scalar. [3] [4]

1. Commutative:
${\displaystyle \mathbf {a} \cdot \mathbf {b} =\mathbf {b} \cdot \mathbf {a} ,}$
which follows from the definition (θ is the angle between a and b): [7]
${\displaystyle \mathbf {a} \cdot \mathbf {b} =\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|\cos \theta =\left\|\mathbf {b} \right\|\left\|\mathbf {a} \right\|\cos \theta =\mathbf {b} \cdot \mathbf {a} .}$
2. Distributive over vector addition:
${\displaystyle \mathbf {a} \cdot (\mathbf {b} +\mathbf {c} )=\mathbf {a} \cdot \mathbf {b} +\mathbf {a} \cdot \mathbf {c} .}$
3. Bilinear:
${\displaystyle \mathbf {a} \cdot (r\mathbf {b} +\mathbf {c} )=r(\mathbf {a} \cdot \mathbf {b} )+(\mathbf {a} \cdot \mathbf {c} ).}$
4. Scalar multiplication:
${\displaystyle (c_{1}\mathbf {a} )\cdot (c_{2}\mathbf {b} )=c_{1}c_{2}(\mathbf {a} \cdot \mathbf {b} ).}$
5. Not associative, because the dot product between a scalar (a ⋅ b) and a vector c is not defined, which means that the expressions involved in the associative property, (a ⋅ b) ⋅ c and a ⋅ (b ⋅ c), are both ill-defined. [8] Note, however, that the previously mentioned scalar multiplication property is sometimes called the "associative law for scalar and dot product" [9], or one can say that "the dot product is associative with respect to scalar multiplication" because c(a ⋅ b) = (ca) ⋅ b = a ⋅ (cb). [10]
6. Orthogonal:
Two non-zero vectors a and b are orthogonal if and only if a ⋅ b = 0.
7. No cancellation:
Unlike multiplication of ordinary numbers, where if ab = ac then b always equals c unless a is zero, the dot product does not obey the cancellation law:
If a ⋅ b = a ⋅ c and a ≠ 0, then we can write a ⋅ (b − c) = 0 by the distributive law; the result above says this just means that a is perpendicular to (b − c), which still allows (b − c) ≠ 0, and therefore allows b ≠ c.
8. Product Rule:
If a and b are (vector-valued) differentiable functions, then the derivative (denoted by a prime ′) of a ⋅ b is given by the rule (a ⋅ b)′ = a′ ⋅ b + a ⋅ b′.
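The failure of cancellation (property 7) is easy to demonstrate numerically; in this sketch, a is perpendicular to b − c, so the two dot products agree even though b ≠ c:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a = [1, 0]
b = [2, 3]
c = [2, 7]                   # b != c, but b - c = [0, -4] is perpendicular to a
print(dot(a, b), dot(a, c))  # 2 2
print(dot(a, [b[i] - c[i] for i in range(2)]))  # 0
```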

### Application to the law of cosines

Given two vectors a and b separated by angle θ (see image right), they form a triangle with a third side c = ab. The dot product of this with itself is:

${\displaystyle {\begin{aligned}\mathbf {\color {orange}c} \cdot \mathbf {\color {orange}c} &=(\mathbf {\color {red}a} -\mathbf {\color {blue}b} )\cdot (\mathbf {\color {red}a} -\mathbf {\color {blue}b} )\\&=\mathbf {\color {red}a} \cdot \mathbf {\color {red}a} -\mathbf {\color {red}a} \cdot \mathbf {\color {blue}b} -\mathbf {\color {blue}b} \cdot \mathbf {\color {red}a} +\mathbf {\color {blue}b} \cdot \mathbf {\color {blue}b} \\&=\mathbf {\color {red}a} ^{2}-\mathbf {\color {red}a} \cdot \mathbf {\color {blue}b} -\mathbf {\color {red}a} \cdot \mathbf {\color {blue}b} +\mathbf {\color {blue}b} ^{2}\\&=\mathbf {\color {red}a} ^{2}-2\mathbf {\color {red}a} \cdot \mathbf {\color {blue}b} +\mathbf {\color {blue}b} ^{2}\\\mathbf {\color {orange}c} ^{2}&=\mathbf {\color {red}a} ^{2}+\mathbf {\color {blue}b} ^{2}-2\mathbf {\color {red}a} \mathbf {\color {blue}b} \cos \mathbf {\color {purple}\theta } \\\end{aligned}}}$

which is the law of cosines.
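A quick numerical check of this derivation (the vectors are arbitrary examples):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    return math.sqrt(dot(v, v))

a = [3.0, 0.0]
b = [1.0, 2.0]
c = [a[i] - b[i] for i in range(2)]        # third side, c = a - b
theta = math.acos(dot(a, b) / (norm(a) * norm(b)))

lhs = norm(c) ** 2
rhs = norm(a) ** 2 + norm(b) ** 2 - 2 * norm(a) * norm(b) * math.cos(theta)
print(abs(lhs - rhs) < 1e-9)  # True: the law of cosines holds
```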

## Triple product

There are two ternary operations involving dot product and cross product.

The scalar triple product of three vectors is defined as

${\displaystyle \mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} )=\mathbf {b} \cdot (\mathbf {c} \times \mathbf {a} )=\mathbf {c} \cdot (\mathbf {a} \times \mathbf {b} ).}$

Its value is the determinant of the matrix whose columns are the Cartesian coordinates of the three vectors. It is the signed volume of the parallelepiped defined by the three vectors.
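The determinant characterization can be verified with NumPy (assuming NumPy is available; the vectors are arbitrary examples):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 4.0])
c = np.array([5.0, 6.0, 0.0])

# Scalar triple product a . (b x c) ...
triple = np.dot(a, np.cross(b, c))
# ... equals the determinant of the matrix with a, b, c as columns.
det = np.linalg.det(np.column_stack([a, b, c]))
print(np.isclose(triple, det))  # True
```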

The vector triple product is defined by [3] [4]

${\displaystyle \mathbf {a} \times (\mathbf {b} \times \mathbf {c} )=\mathbf {b} (\mathbf {a} \cdot \mathbf {c} )-\mathbf {c} (\mathbf {a} \cdot \mathbf {b} ).}$

This identity, also known as Lagrange's formula, may be remembered as "BAC minus CAB", keeping in mind which vectors are dotted together. This formula has applications in simplifying vector calculations in physics.
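The "BAC minus CAB" identity can likewise be spot-checked numerically (assuming NumPy; the vectors are arbitrary examples):

```python
import numpy as np

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])
c = np.array([2.0, 0.0, 1.0])

lhs = np.cross(a, np.cross(b, c))          # a x (b x c)
rhs = b * np.dot(a, c) - c * np.dot(a, b)  # "BAC minus CAB"
print(np.allclose(lhs, rhs))  # True
```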

## Physics

In physics, vector magnitude is a scalar in the physical sense (i.e., a physical quantity independent of the coordinate system), expressed as the product of a numerical value and a physical unit, not just a number. The dot product is also a scalar in this sense, given by the formula, independent of the coordinate system. For example, mechanical work is the dot product of the force and displacement vectors. [11] [12]

## Generalizations

### Complex vectors

For vectors with complex entries, using the given definition of the dot product would lead to quite different properties. For instance, the dot product of a vector with itself would be an arbitrary complex number, and could be zero without the vector being the zero vector (such vectors are called isotropic); this in turn would have consequences for notions like length and angle. Properties such as the positive-definite norm can be salvaged at the cost of giving up the symmetric and bilinear properties of the scalar product, through the alternative definition [13] [3]

${\displaystyle \mathbf {a} \cdot \mathbf {b} =\sum {{a_{i}}\,{\overline {b_{i}}}},}$

where ${\displaystyle {\overline {b_{i}}}}$ is the complex conjugate of ${\displaystyle b_{i}}$. When vectors are represented by column vectors, the dot product can be expressed as a matrix product involving a conjugate transpose, denoted with the superscript H:

${\displaystyle \mathbf {a} \cdot \mathbf {b} =\mathbf {b} ^{\mathsf {H}}\mathbf {a} .}$

In the case of vectors with real components, this definition is the same as in the real case. The scalar product of any vector with itself is a non-negative real number, and it is nonzero except for the zero vector. However, the complex scalar product is sesquilinear rather than bilinear, as it is conjugate linear and not linear in b. The scalar product is not symmetric, since

${\displaystyle \mathbf {a} \cdot \mathbf {b} ={\overline {\mathbf {b} \cdot \mathbf {a} }}.}$

The angle between two complex vectors is then given by

${\displaystyle \cos \theta ={\frac {\operatorname {Re} (\mathbf {a} \cdot \mathbf {b} )}{\left\|\mathbf {a} \right\|\,\left\|\mathbf {b} \right\|}}.}$

The complex scalar product leads to the notions of Hermitian forms and general inner product spaces, which are widely used in mathematics and physics.

The self dot product of a complex vector ${\displaystyle \mathbf {a} \cdot \mathbf {a} }$ is a generalization of the absolute square of a complex number.
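These properties can be checked with NumPy (assuming NumPy is available). Note that `np.vdot` conjugates its *first* argument, so `np.vdot(b, a)` matches this article's convention a · b = Σ aᵢ conj(bᵢ):

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j])
b = np.array([2 - 1j, 0 + 1j])

ab = np.vdot(b, a)  # a . b  (np.vdot conjugates its first argument)
ba = np.vdot(a, b)  # b . a

print(np.isclose(ab, np.conj(ba)))        # True: a.b is the conjugate of b.a
print(abs(np.vdot(a, a).imag) < 1e-12)    # True: a.a is real (and non-negative)
```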

### Inner product

The inner product generalizes the dot product to abstract vector spaces over a field of scalars, being either the field of real numbers ${\displaystyle \mathbb {R} }$ or the field of complex numbers ${\displaystyle \mathbb {C} }$. It is usually denoted using angle brackets by ${\displaystyle \left\langle \mathbf {a} \,,\mathbf {b} \right\rangle }$. [1]

The inner product of two vectors over the field of complex numbers is, in general, a complex number, and is sesquilinear instead of bilinear. An inner product space is a normed vector space, and the inner product of a vector with itself is real and positive-definite.

### Functions

The dot product is defined for vectors that have a finite number of entries. Thus these vectors can be regarded as discrete functions: a length-n vector u is, then, a function with domain {k ∈ ℕ ∣ 1 ≤ kn}, and ui is a notation for the image of i by the function/vector u.

This notion can be generalized to continuous functions: just as the inner product on vectors uses a sum over corresponding components, the inner product on functions is defined as an integral over some interval axb (also denoted [a, b]): [3]

${\displaystyle \left\langle u,v\right\rangle =\int _{a}^{b}u(x)v(x)dx}$

Generalized further to complex functions ψ(x) and χ(x), by analogy with the complex inner product above, gives [3]

${\displaystyle \left\langle \psi ,\chi \right\rangle =\int _{a}^{b}\psi (x){\overline {\chi (x)}}dx.}$
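The integral inner product can be approximated numerically; this sketch uses a simple midpoint rule (`inner` is an ad-hoc helper, and the node count is an arbitrary choice):

```python
import math

def inner(u, v, a, b, n=100_000):
    # Midpoint-rule approximation of the inner product  ∫_a^b u(x) v(x) dx.
    h = (b - a) / n
    return sum(u(a + (k + 0.5) * h) * v(a + (k + 0.5) * h)
               for k in range(n)) * h

# sin and cos are orthogonal over a full period [0, 2*pi]:
print(abs(inner(math.sin, math.cos, 0.0, 2 * math.pi)) < 1e-6)  # True
```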

### Weight function

Inner products can have a weight function (i.e., a function which weights each term of the inner product with a value). Explicitly, the inner product of functions ${\displaystyle u(x)}$ and ${\displaystyle v(x)}$ with respect to the weight function ${\displaystyle r(x)>0}$ is

${\displaystyle \left\langle u,v\right\rangle =\int _{a}^{b}r(x)u(x)v(x)dx.}$
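A weighted version of the same numerical sketch (the helper name, weight, and interval are arbitrary illustrative choices):

```python
import math

def weighted_inner(u, v, r, a, b, n=100_000):
    # Midpoint-rule approximation of  ∫_a^b r(x) u(x) v(x) dx,
    # assuming the weight r(x) > 0 on [a, b].
    h = (b - a) / n
    return sum(r(a + (k + 0.5) * h) * u(a + (k + 0.5) * h) * v(a + (k + 0.5) * h)
               for k in range(n)) * h

# With a Gaussian weight on a symmetric interval, an odd integrand
# such as u(x) = x against v(x) = 1 integrates to (numerically) zero:
w = lambda x: math.exp(-x * x)
print(abs(weighted_inner(lambda x: x, lambda x: 1.0, w, -5.0, 5.0)) < 1e-9)  # True
```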

Matrices have the Frobenius inner product, which is analogous to the vector inner product. It is defined as the sum of the products of the corresponding components of two matrices A and B having the same size:

${\displaystyle \mathbf {A} :\mathbf {B} =\sum _{i}\sum _{j}A_{ij}{\overline {B_{ij}}}=\mathrm {tr} (\mathbf {B} ^{\mathrm {H} }\mathbf {A} )=\mathrm {tr} (\mathbf {A} \mathbf {B} ^{\mathrm {H} }).}$

${\displaystyle \mathbf {A} :\mathbf {B} =\sum _{i}\sum _{j}A_{ij}B_{ij}=\mathrm {tr} (\mathbf {B} ^{\mathrm {T} }\mathbf {A} )=\mathrm {tr} (\mathbf {A} \mathbf {B} ^{\mathrm {T} })=\mathrm {tr} (\mathbf {A} ^{\mathrm {T} }\mathbf {B} )=\mathrm {tr} (\mathbf {B} \mathbf {A} ^{\mathrm {T} }).}$ (for real matrices)
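For real matrices, the componentwise sum and the trace formulation agree, as a quick NumPy check shows (assuming NumPy; the matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

frob = np.sum(A * B)           # sum of products of corresponding entries
via_trace = np.trace(B.T @ A)  # tr(B^T A)
print(frob, np.isclose(frob, via_trace))  # 5.0 True
```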

Dyadics have a dot product and "double" dot product defined on them, see Dyadics § Product of dyadic and dyadic for their definitions.

### Tensors

The inner product between a tensor of order n and a tensor of order m is a tensor of order n + m − 2, see Tensor contraction for details.

## Computation

### Algorithms

The straightforward algorithm for calculating a floating-point dot product of vectors can suffer from catastrophic cancellation. To avoid this, approaches such as the Kahan summation algorithm are used.
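A compensated (Kahan-style) dot product can be sketched in a few lines; this keeps a running correction term for the low-order bits lost in each addition (a minimal illustration, not a production implementation):

```python
def kahan_dot(a, b):
    # Kahan (compensated) summation of the products a[i] * b[i],
    # reducing accumulated floating-point rounding error.
    s = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x, y in zip(a, b):
        t = x * y - c
        u = s + t
        c = (u - s) - t  # (u - s) recovers the rounded value of t
        s = u
    return s

print(kahan_dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```

Note that the product `x * y` itself is still rounded once; fully accurate dot products additionally use error-free product transformations, which are beyond this sketch.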

### Libraries

A dot product function is included in BLAS level 1.
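For example, SciPy (assuming it is installed) exposes the level-1 BLAS `ddot` routine directly:

```python
import numpy as np
from scipy.linalg.blas import ddot  # wrapper for the double-precision BLAS DDOT

x = np.array([1.0, 3.0, -5.0])
y = np.array([4.0, -2.0, -1.0])
print(ddot(x, y))  # 3.0
```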

## Notes

1. The term scalar product is often also used more generally to mean a symmetric bilinear form, for example for a pseudo-Euclidean space.[citation needed]


## References

1. "Comprehensive List of Algebra Symbols". Math Vault. 2020-03-25. Retrieved 2020-09-06.
2. "Dot Product". www.mathsisfun.com. Retrieved 2020-09-06.
3. S. Lipschutz; M. Lipson (2009). Linear Algebra (Schaum's Outlines) (4th ed.). McGraw Hill. ISBN   978-0-07-154352-1.
4. M.R. Spiegel; S. Lipschutz; D. Spellman (2009). Vector Analysis (Schaum's Outlines) (2nd ed.). McGraw Hill. ISBN   978-0-07-161545-7.
5. A I Borisenko; I E Taparov (1968). Vector and tensor analysis with applications. Translated by Richard Silverman. Dover. p. 14.
6. Arfken, G. B.; Weber, H. J. (2000). Mathematical Methods for Physicists (5th ed.). Boston, MA: Academic Press. pp. 14–15. ISBN 978-0-12-059825-0.
7. Nykamp, Duane. "The dot product". Math Insight. Retrieved September 6, 2020.
8. Weisstein, Eric W. "Dot Product." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/DotProduct.html
9. T. Banchoff; J. Wermer (1983). Linear Algebra Through Geometry. Springer Science & Business Media. p. 12. ISBN   978-1-4684-0161-5.
10. A. Bedford; Wallace L. Fowler (2008). Engineering Mechanics: Statics (5th ed.). Prentice Hall. p. 60. ISBN   978-0-13-612915-8.
11. K.F. Riley; M.P. Hobson; S.J. Bence (2010). Mathematical Methods for Physics and Engineering (3rd ed.). Cambridge University Press. ISBN 978-0-521-86153-3.
12. M. Mansfield; C. O'Sullivan (2011). Understanding Physics (4th ed.). John Wiley & Sons. ISBN 978-0-470-74637-0.
13. Berberian, Sterling K. (2014) [1992], Linear Algebra, Dover, p. 287, ISBN   978-0-486-78055-9