Dot product

In mathematics, the dot product or scalar product [note 1] is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors), and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used. It is often called the inner product (or rarely projection product) of Euclidean space, even though it is not the only inner product that can be defined on Euclidean space (see Inner product space for more).

Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. These definitions are equivalent when using Cartesian coordinates. In modern geometry, Euclidean spaces are often defined by using vector spaces. In this case, the dot product is used for defining lengths (the length of a vector is the square root of the dot product of the vector by itself) and angles (the cosine of the angle between two vectors is the quotient of their dot product by the product of their lengths).

The name "dot product" is derived from the dot operator " · " that is often used to designate this operation; [1] the alternative name "scalar product" emphasizes that the result is a scalar, rather than a vector (as with the vector product in three-dimensional space).

Definition

The dot product may be defined algebraically or geometrically. The geometric definition is based on the notions of angle and distance (magnitude) of vectors. The equivalence of these two definitions relies on having a Cartesian coordinate system for Euclidean space.

In modern presentations of Euclidean geometry, the points of space are defined in terms of their Cartesian coordinates, and Euclidean space itself is commonly identified with the real coordinate space $\mathbb{R}^n$. In such a presentation, the notions of length and angle are defined by means of the dot product. The length of a vector is defined as the square root of the dot product of the vector by itself, and the cosine of the (non-oriented) angle between two vectors of length one is defined as their dot product. So the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry.

Coordinate definition

The dot product of two vectors $\mathbf{a} = [a_1, a_2, \dots, a_n]$ and $\mathbf{b} = [b_1, b_2, \dots, b_n]$, specified with respect to an orthonormal basis, is defined as: [2]

$$\mathbf{a} \cdot \mathbf{b} = \sum_{i=1}^{n} a_i b_i = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n,$$

where $\Sigma$ denotes summation and $n$ is the dimension of the vector space. For instance, in three-dimensional space, the dot product of vectors $[1, 3, -5]$ and $[4, -2, -1]$ is:

$$[1, 3, -5] \cdot [4, -2, -1] = (1)(4) + (3)(-2) + (-5)(-1) = 4 - 6 + 5 = 3.$$

Likewise, the dot product of the vector $[1, 3, -5]$ with itself is:

$$[1, 3, -5] \cdot [1, 3, -5] = (1)^2 + (3)^2 + (-5)^2 = 1 + 9 + 25 = 35.$$

If vectors are identified with column vectors, the dot product can also be written as a matrix product

$$\mathbf{a} \cdot \mathbf{b} = \mathbf{a}^{\mathsf T} \mathbf{b},$$

where $\mathbf{a}^{\mathsf T}$ denotes the transpose of $\mathbf{a}$.

Expressing the above example in this way, a 1 × 3 matrix (row vector) is multiplied by a 3 × 1 matrix (column vector) to get a 1 × 1 matrix that is identified with its unique entry:

$$\begin{bmatrix} 1 & 3 & -5 \end{bmatrix} \begin{bmatrix} 4 \\ -2 \\ -1 \end{bmatrix} = 3.$$
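As a concrete illustration, the algebraic definition translates directly into code. The following sketch (plain Python, no libraries; the function name `dot` is ours, chosen for clarity) sums the products of corresponding entries:

```python
def dot(a, b):
    """Dot product of two equal-length sequences of numbers."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same length")
    # Sum of products of corresponding entries.
    return sum(x * y for x, y in zip(a, b))

# Worked three-dimensional examples:
print(dot([1, 3, -5], [4, -2, -1]))   # 1*4 + 3*(-2) + (-5)*(-1) = 3
print(dot([1, 3, -5], [1, 3, -5]))    # 1 + 9 + 25 = 35
```

Any equal-length numeric sequences work; mismatched lengths are rejected explicitly rather than silently truncated.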

Geometric definition

[Figure: Illustration showing how to find the angle between vectors using the dot product]
[Figure: Calculating bond angles of a symmetrical tetrahedral molecular geometry using a dot product]

In Euclidean space, a Euclidean vector is a geometric object that possesses both a magnitude and a direction. A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction to which the arrow points. The magnitude of a vector $\mathbf{a}$ is denoted by $\|\mathbf{a}\|$. The dot product of two Euclidean vectors $\mathbf{a}$ and $\mathbf{b}$ is defined by [3] [4] [1]

$$\mathbf{a} \cdot \mathbf{b} = \|\mathbf{a}\| \, \|\mathbf{b}\| \cos\theta,$$

where $\theta$ is the angle between $\mathbf{a}$ and $\mathbf{b}$.

In particular, if the vectors $\mathbf{a}$ and $\mathbf{b}$ are orthogonal (i.e., their angle is $\frac{\pi}{2}$ or $90°$), then $\cos\frac{\pi}{2} = 0$, which implies that

$$\mathbf{a} \cdot \mathbf{b} = 0.$$

At the other extreme, if they are codirectional, then the angle between them is zero with $\cos 0 = 1$ and

$$\mathbf{a} \cdot \mathbf{b} = \|\mathbf{a}\| \, \|\mathbf{b}\|.$$

This implies that the dot product of a vector $\mathbf{a}$ with itself is

$$\mathbf{a} \cdot \mathbf{a} = \|\mathbf{a}\|^2,$$

which gives

$$\|\mathbf{a}\| = \sqrt{\mathbf{a} \cdot \mathbf{a}},$$

the formula for the Euclidean length of the vector.
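The geometric relation can be inverted to recover the angle between two vectors from their coordinates. A minimal Python sketch of this standard manipulation (the function names are ours, for illustration):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    # Euclidean length: the square root of the dot product of a with itself.
    return math.sqrt(dot(a, a))

def angle_between(a, b):
    # Invert a.b = |a||b|cos(theta); clamp to [-1, 1] to guard against
    # tiny floating-point overshoot before calling acos.
    cosine = dot(a, b) / (norm(a) * norm(b))
    return math.acos(max(-1.0, min(1.0, cosine)))

print(angle_between([1, 0], [0, 1]))   # orthogonal: pi/2
print(angle_between([2, 0], [5, 0]))   # codirectional: 0.0
```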

Scalar projection and first properties

[Figure: Scalar projection]

The scalar projection (or scalar component) of a Euclidean vector $\mathbf{a}$ in the direction of a Euclidean vector $\mathbf{b}$ is given by

$$a_b = \|\mathbf{a}\| \cos\theta,$$

where $\theta$ is the angle between $\mathbf{a}$ and $\mathbf{b}$.

In terms of the geometric definition of the dot product, this can be rewritten as

$$a_b = \mathbf{a} \cdot \widehat{\mathbf{b}},$$

where $\widehat{\mathbf{b}} = \mathbf{b} / \|\mathbf{b}\|$ is the unit vector in the direction of $\mathbf{b}$.
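The scalar projection is a one-liner once the dot product is available; a small sketch (`scalar_projection` is an illustrative name, not a library function):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scalar_projection(a, b):
    # Component of a along the unit vector in the direction of b:
    # a . (b / |b|), which equals |a| cos(theta).
    b_norm = math.sqrt(dot(b, b))
    return dot(a, b) / b_norm

# Projecting onto an axis direction recovers the coordinate on that axis.
print(scalar_projection([3, 4], [1, 0]))  # 3.0
print(scalar_projection([3, 4], [0, 2]))  # 4.0
```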

[Figure: Distributive law for the dot product]

The dot product is thus characterized geometrically by [5]

$$\mathbf{a} \cdot \mathbf{b} = a_b \, \|\mathbf{b}\| = b_a \, \|\mathbf{a}\|.$$

The dot product, defined in this manner, is homogeneous under scaling in each variable, meaning that for any scalar $\alpha$,

$$(\alpha \mathbf{a}) \cdot \mathbf{b} = \alpha (\mathbf{a} \cdot \mathbf{b}) = \mathbf{a} \cdot (\alpha \mathbf{b}).$$

It also satisfies the distributive law, meaning that

$$\mathbf{a} \cdot (\mathbf{b} + \mathbf{c}) = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \cdot \mathbf{c}.$$

These properties may be summarized by saying that the dot product is a bilinear form. Moreover, this bilinear form is positive definite, which means that $\mathbf{a} \cdot \mathbf{a}$ is never negative, and is zero if and only if $\mathbf{a} = \mathbf{0}$, the zero vector.

Equivalence of the definitions

If $\mathbf{e}_1, \dots, \mathbf{e}_n$ are the standard basis vectors in $\mathbb{R}^n$, then we may write

$$\mathbf{a} = \sum_i a_i \mathbf{e}_i, \qquad \mathbf{b} = \sum_i b_i \mathbf{e}_i.$$

The vectors $\mathbf{e}_i$ are an orthonormal basis, which means that they have unit length and are at right angles to each other. Since these vectors have unit length,

$$\mathbf{e}_i \cdot \mathbf{e}_i = 1,$$

and since they form right angles with each other, if $i \neq j$,

$$\mathbf{e}_i \cdot \mathbf{e}_j = 0.$$

Thus in general, we can say that:

$$\mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij},$$

where $\delta_{ij}$ is the Kronecker delta.

[Figure: Vector components in an orthonormal basis]

Also, by the geometric definition, for any vector $\mathbf{a}$ and a basis vector $\mathbf{e}_i$, we note that

$$\mathbf{a} \cdot \mathbf{e}_i = \|\mathbf{a}\| \, \|\mathbf{e}_i\| \cos\theta_i = \|\mathbf{a}\| \cos\theta_i = a_i,$$

where $a_i$ is the component of vector $\mathbf{a}$ in the direction of $\mathbf{e}_i$. The last step in the equality can be seen from the figure.

Now applying the distributivity of the geometric version of the dot product gives

$$\mathbf{a} \cdot \mathbf{b} = \mathbf{a} \cdot \sum_i b_i \mathbf{e}_i = \sum_i b_i (\mathbf{a} \cdot \mathbf{e}_i) = \sum_i b_i a_i = \sum_i a_i b_i,$$

which is precisely the algebraic definition of the dot product. So the geometric dot product equals the algebraic dot product.

Properties

The dot product fulfills the following properties if $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$ are real vectors and $r$, $c_1$ and $c_2$ are scalars. [2] [3]

Commutative
$$\mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a},$$ which follows from the definition ($\theta$ is the angle between $\mathbf{a}$ and $\mathbf{b}$): [6] $$\mathbf{a} \cdot \mathbf{b} = \|\mathbf{a}\| \, \|\mathbf{b}\| \cos\theta = \|\mathbf{b}\| \, \|\mathbf{a}\| \cos\theta = \mathbf{b} \cdot \mathbf{a}.$$
Distributive over vector addition
$$\mathbf{a} \cdot (\mathbf{b} + \mathbf{c}) = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \cdot \mathbf{c}.$$
Bilinear
$$\mathbf{a} \cdot (r\mathbf{b} + \mathbf{c}) = r (\mathbf{a} \cdot \mathbf{b}) + \mathbf{a} \cdot \mathbf{c}.$$
Scalar multiplication
$$(c_1 \mathbf{a}) \cdot (c_2 \mathbf{b}) = c_1 c_2 (\mathbf{a} \cdot \mathbf{b}).$$
Not associative
because the dot product between a scalar $\mathbf{a} \cdot \mathbf{b}$ and a vector $\mathbf{c}$ is not defined, which means that the expressions involved in the associative property, $(\mathbf{a} \cdot \mathbf{b}) \cdot \mathbf{c}$ and $\mathbf{a} \cdot (\mathbf{b} \cdot \mathbf{c})$, are both ill-defined. [7] Note however that the previously mentioned scalar multiplication property is sometimes called the "associative law for scalar and dot product" [8], or one can say that "the dot product is associative with respect to scalar multiplication" because $c (\mathbf{a} \cdot \mathbf{b}) = (c\mathbf{a}) \cdot \mathbf{b} = \mathbf{a} \cdot (c\mathbf{b})$. [9]
Orthogonal
Two non-zero vectors $\mathbf{a}$ and $\mathbf{b}$ are orthogonal if and only if $\mathbf{a} \cdot \mathbf{b} = 0$.
No cancellation
Unlike multiplication of ordinary numbers, where if $ab = ac$, then $b$ always equals $c$ unless $a$ is zero, the dot product does not obey the cancellation law: if $\mathbf{a} \cdot \mathbf{b} = \mathbf{a} \cdot \mathbf{c}$ and $\mathbf{a} \neq \mathbf{0}$, then we can write $\mathbf{a} \cdot (\mathbf{b} - \mathbf{c}) = 0$ by the distributive law; the result above says this just means that $\mathbf{a}$ is perpendicular to $\mathbf{b} - \mathbf{c}$, which still allows $\mathbf{b} - \mathbf{c} \neq \mathbf{0}$, and therefore allows $\mathbf{b} \neq \mathbf{c}$.
Product rule
If $\mathbf{a}$ and $\mathbf{b}$ are vector-valued differentiable functions, then the derivative (denoted by a prime $'$) of $\mathbf{a} \cdot \mathbf{b}$ is given by the rule $$(\mathbf{a} \cdot \mathbf{b})' = \mathbf{a}' \cdot \mathbf{b} + \mathbf{a} \cdot \mathbf{b}'.$$
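These algebraic properties are easy to check numerically. The sketch below verifies commutativity, distributivity, and the failure of cancellation on small example vectors (illustrative values only):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def add(u, v):
    return [x + y for x, y in zip(u, v)]

a, b, c = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]

# Commutative: a . b == b . a
assert dot(a, b) == dot(b, a)

# Distributive over vector addition: a . (b + c) == a . b + a . c
assert dot(a, add(b, c)) == dot(a, b) + dot(a, c)

# No cancellation: a . b == a . c does not force b == c.
# b2 and c2 differ only in a direction orthogonal to a2.
a2 = [1.0, 0.0]
b2, c2 = [2.0, 5.0], [2.0, -3.0]
assert dot(a2, b2) == dot(a2, c2) and b2 != c2
print("all properties verified")
```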

Application to the law of cosines

[Figure: Triangle with vector edges a and b, separated by angle θ]

Given two vectors $\mathbf{a}$ and $\mathbf{b}$ separated by angle $\theta$ (see the figure above), they form a triangle with a third side $\mathbf{c} = \mathbf{a} - \mathbf{b}$. Let $a$, $b$ and $c$ denote the lengths of $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$, respectively. The dot product of this with itself is:

$$\mathbf{c} \cdot \mathbf{c} = (\mathbf{a} - \mathbf{b}) \cdot (\mathbf{a} - \mathbf{b}) = \mathbf{a} \cdot \mathbf{a} - 2 \, \mathbf{a} \cdot \mathbf{b} + \mathbf{b} \cdot \mathbf{b} = a^2 - 2ab\cos\theta + b^2,$$

so $c^2 = a^2 + b^2 - 2ab\cos\theta$, which is the law of cosines.
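The identity can be spot-checked numerically; a small sketch with arbitrary example vectors:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    return math.sqrt(dot(v, v))

# Triangle with sides a, b and third side c = a - b.
a = [3.0, 0.0]
b = [1.0, 2.0]
c = [x - y for x, y in zip(a, b)]

theta = math.acos(dot(a, b) / (norm(a) * norm(b)))
lhs = norm(c) ** 2
rhs = norm(a) ** 2 + norm(b) ** 2 - 2 * norm(a) * norm(b) * math.cos(theta)
print(abs(lhs - rhs) < 1e-12)   # True: c^2 = a^2 + b^2 - 2ab cos(theta)
```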

Triple product

There are two ternary operations involving dot product and cross product.

The scalar triple product of three vectors is defined as

$$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \mathbf{b} \cdot (\mathbf{c} \times \mathbf{a}) = \mathbf{c} \cdot (\mathbf{a} \times \mathbf{b}).$$

Its value is the determinant of the matrix whose columns are the Cartesian coordinates of the three vectors. It is the signed volume of the parallelepiped defined by the three vectors, and is isomorphic to the three-dimensional special case of the exterior product of three vectors.

The vector triple product is defined by [2] [3]

$$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b} (\mathbf{a} \cdot \mathbf{c}) - \mathbf{c} (\mathbf{a} \cdot \mathbf{b}).$$

This identity, also known as Lagrange's formula, may be remembered as "ACB minus ABC", keeping in mind which vectors are dotted together. This formula has applications in simplifying vector calculations in physics.
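Both triple products can be verified numerically; the sketch below checks the cyclic symmetry of the scalar triple product and Lagrange's formula on integer example vectors:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    # Cross product of two 3-dimensional vectors.
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

a, b, c = [1, 2, 3], [4, 5, 6], [7, 8, 10]

# Scalar triple product: invariant under cyclic permutation.
assert dot(a, cross(b, c)) == dot(b, cross(c, a)) == dot(c, cross(a, b))

# Vector triple product (Lagrange's formula): a x (b x c) = b(a.c) - c(a.b)
lhs = cross(a, cross(b, c))
rhs = [bi * dot(a, c) - ci * dot(a, b) for bi, ci in zip(b, c)]
assert lhs == rhs
print("triple product identities verified")
```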

Physics

In physics, vector magnitude is a scalar in the physical sense (i.e., a physical quantity independent of the coordinate system), expressed as the product of a numerical value and a physical unit, not just a number. The dot product is also a scalar in this sense, given by the formula, independent of the coordinate system. For example, mechanical work is the dot product of the force and displacement vectors, and power is the dot product of force and velocity. [10] [11]

Generalizations

Complex vectors

For vectors with complex entries, using the given definition of the dot product would lead to quite different properties. For instance, the dot product of a vector with itself could be zero without the vector being the zero vector (e.g. this would happen with the vector $\mathbf{a} = [1, i]$). This in turn would have consequences for notions like length and angle. Properties such as the positive-definite norm can be salvaged at the cost of giving up the symmetric and bilinear properties of the dot product, through the alternative definition [12] [2]

$$\mathbf{a} \cdot \mathbf{b} = \sum_i a_i \overline{b_i},$$

where $\overline{b_i}$ is the complex conjugate of $b_i$. When vectors are represented by column vectors, the dot product can be expressed as a matrix product involving a conjugate transpose, denoted with the superscript H:

$$\mathbf{a} \cdot \mathbf{b} = \mathbf{b}^{\mathsf H} \mathbf{a}.$$

In the case of vectors with real components, this definition is the same as in the real case. The dot product of any vector with itself is a non-negative real number, and it is nonzero except for the zero vector. However, the complex dot product is sesquilinear rather than bilinear, as it is conjugate linear in $\mathbf{b}$ and linear in $\mathbf{a}$. The dot product is not symmetric, since

$$\mathbf{a} \cdot \mathbf{b} = \overline{\mathbf{b} \cdot \mathbf{a}}.$$

The angle between two complex vectors is then given by

$$\cos\theta = \frac{\operatorname{Re}(\mathbf{a} \cdot \mathbf{b})}{\|\mathbf{a}\| \, \|\mathbf{b}\|}.$$

The complex dot product leads to the notions of Hermitian forms and general inner product spaces, which are widely used in mathematics and physics.

The self dot product of a complex vector $\mathbf{a}$, involving the conjugate transpose of a row vector, is also known as the norm squared, $\|\mathbf{a}\|^2 = \mathbf{a}^{\mathsf H} \mathbf{a}$, after the Euclidean norm; it is a vector generalization of the absolute square of a complex scalar (see also: squared Euclidean distance).
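A short sketch illustrates why the conjugation matters (here we conjugate the second argument; conventions differ, and some texts conjugate the first instead):

```python
def cdot(a, b):
    # Complex dot product: conjugate the second argument
    # (one common convention).
    return sum(x * y.conjugate() for x, y in zip(a, b))

a = [1 + 0j, 1j]

# With the plain bilinear definition, a . a would be 1 + i*i = 0,
# even though a is not the zero vector:
assert sum(x * x for x in a) == 0

# The sesquilinear definition restores positivity:
assert cdot(a, a) == 2 + 0j

# It is conjugate symmetric rather than symmetric:
b = [2 + 1j, 3 - 2j]
assert cdot(a, b) == cdot(b, a).conjugate()
print("complex dot product checks passed")
```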

Inner product

The inner product generalizes the dot product to abstract vector spaces over a field of scalars, being either the field of real numbers $\mathbb{R}$ or the field of complex numbers $\mathbb{C}$. It is usually denoted using angular brackets by $\langle \mathbf{a}, \mathbf{b} \rangle$.

The inner product of two vectors over the field of complex numbers is, in general, a complex number, and is sesquilinear instead of bilinear. An inner product space is a normed vector space, and the inner product of a vector with itself is real and positive-definite.

Functions

The dot product is defined for vectors that have a finite number of entries. Thus these vectors can be regarded as discrete functions: a length-$n$ vector $u$ is, then, a function with domain $\{k \in \mathbb{N} : 1 \leq k \leq n\}$, and $u_i$ is a notation for the image of $i$ by the function/vector $u$.

This notion can be generalized to continuous functions: just as the inner product on vectors uses a sum over corresponding components, the inner product on functions is defined as an integral over some interval $[a, b]$: [2]

$$\langle u, v \rangle = \int_a^b u(x) \, v(x) \, dx.$$

Generalized further to complex functions $\psi(x)$ and $\chi(x)$, by analogy with the complex inner product above, gives [2]

$$\langle \psi, \chi \rangle = \int_a^b \psi(x) \, \overline{\chi(x)} \, dx.$$

Weight function

Inner products can have a weight function (i.e., a function which weights each term of the inner product with a value). Explicitly, the inner product of functions $u(x)$ and $v(x)$ with respect to the weight function $r(x) > 0$ is

$$\langle u, v \rangle_r = \int_a^b r(x) \, u(x) \, v(x) \, dx.$$
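A numerical sketch of the function inner product, approximating the integral with a midpoint Riemann sum (the quadrature rule and the function name `inner` are our illustrative choices, not part of any standard API):

```python
import math

def inner(u, v, a, b, w=lambda x: 1.0, n=10000):
    # Inner product of real functions on [a, b], optionally weighted
    # by w, approximated by a midpoint Riemann sum with n subintervals.
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        x = a + (k + 0.5) * h
        total += w(x) * u(x) * v(x)
    return total * h

# sin and cos are orthogonal on [-pi, pi] under the unit weight:
print(abs(inner(math.sin, math.cos, -math.pi, math.pi)) < 1e-9)  # True
```

Passing a different `w` (e.g. `w=lambda x: math.exp(-x * x)`) gives a weighted inner product on the same interval.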

Dyadics and matrices

A double-dot product for matrices is the Frobenius inner product, which is analogous to the dot product on vectors. It is defined as the sum of the products of the corresponding components of two matrices $\mathbf{A}$ and $\mathbf{B}$ of the same size:

$$\mathbf{A} : \mathbf{B} = \sum_i \sum_j A_{ij} \overline{B_{ij}} = \operatorname{tr}(\mathbf{B}^{\mathsf H} \mathbf{A}) = \operatorname{tr}(\mathbf{A} \mathbf{B}^{\mathsf H}).$$

And for real matrices,

$$\mathbf{A} : \mathbf{B} = \sum_i \sum_j A_{ij} B_{ij} = \operatorname{tr}(\mathbf{B}^{\mathsf T} \mathbf{A}) = \operatorname{tr}(\mathbf{A} \mathbf{B}^{\mathsf T}).$$
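For real matrices, the entrywise sum and the trace formulation agree; a minimal sketch with hand-rolled matrix helpers (illustrative names):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def frobenius(A, B):
    # Sum of products of corresponding entries (real matrices).
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# A : B equals tr(A B^T) for real matrices.
assert frobenius(A, B) == trace(matmul(A, transpose(B)))
print(frobenius(A, B))   # 5 + 12 + 21 + 32 = 70
```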

Writing a matrix as a dyadic, we can define a different double-dot product (see Dyadics § Product of dyadic and dyadic); however, it is not an inner product.

Tensors

The inner product between a tensor of order $n$ and a tensor of order $m$ is a tensor of order $n + m - 2$; see Tensor contraction for details.

Computation

Algorithms

The straightforward algorithm for calculating a floating-point dot product of vectors can suffer from catastrophic cancellation. To avoid this, approaches such as the Kahan summation algorithm are used.
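A compensated dot product can be sketched as follows. This uses Neumaier's variant of Kahan summation (the variant choice is ours; it handles the case where an incoming term is larger than the running sum, which defeats plain Kahan compensation):

```python
def neumaier_dot(a, b):
    # Dot product with compensated summation: "comp" accumulates the
    # low-order bits lost to rounding at each addition.
    total = 0.0
    comp = 0.0
    for x, y in zip(a, b):
        term = x * y
        t = total + term
        if abs(total) >= abs(term):
            comp += (total - t) + term   # low bits of term were lost
        else:
            comp += (term - t) + total   # low bits of total were lost
        total = t
    return total + comp

# Large terms that cancel swamp the small contributions in naive
# left-to-right summation:
a = [1e16, 1.0, -1e16, 1.0]
b = [1.0, 1.0, 1.0, 1.0]
print(sum(x * y for x, y in zip(a, b)))  # 1.0 (naive: one 1.0 is lost)
print(neumaier_dot(a, b))                # 2.0 (compensated)
```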

Libraries

A dot product function is included in:

  - BLAS level 1 routines (e.g. the real dot and complex dotc operations)
  - NumPy, as numpy.dot
  - MATLAB and GNU Octave, as dot(A, B)

Notes

  1. The term scalar product means literally "product with a scalar as a result". It is also used sometimes for other symmetric bilinear forms, for example in a pseudo-Euclidean space.


References

  1. "Dot Product". www.mathsisfun.com. Retrieved 2020-09-06.
  2. S. Lipschutz; M. Lipson (2009). Linear Algebra (Schaum's Outlines) (4th ed.). McGraw Hill. ISBN 978-0-07-154352-1.
  3. M.R. Spiegel; S. Lipschutz; D. Spellman (2009). Vector Analysis (Schaum's Outlines) (2nd ed.). McGraw Hill. ISBN 978-0-07-161545-7.
  4. A. I. Borisenko; I. E. Taparov (1968). Vector and Tensor Analysis with Applications. Translated by Richard Silverman. Dover. p. 14.
  5. Arfken, G. B.; Weber, H. J. (2000). Mathematical Methods for Physicists (5th ed.). Boston, MA: Academic Press. pp. 14–15. ISBN 978-0-12-059825-0.
  6. Nykamp, Duane. "The dot product". Math Insight. Retrieved September 6, 2020.
  7. Weisstein, Eric W. "Dot Product". From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/DotProduct.html
  8. T. Banchoff; J. Wermer (1983). Linear Algebra Through Geometry. Springer Science & Business Media. p. 12. ISBN 978-1-4684-0161-5.
  9. A. Bedford; Wallace L. Fowler (2008). Engineering Mechanics: Statics (5th ed.). Prentice Hall. p. 60. ISBN 978-0-13-612915-8.
  10. K.F. Riley; M.P. Hobson; S.J. Bence (2010). Mathematical Methods for Physics and Engineering (3rd ed.). Cambridge University Press. ISBN 978-0-521-86153-3.
  11. M. Mansfield; C. O'Sullivan (2011). Understanding Physics (4th ed.). John Wiley & Sons. ISBN 978-0-470-74637-0.
  12. Berberian, Sterling K. (2014) [1992]. Linear Algebra. Dover. p. 287. ISBN 978-0-486-78055-9.