Scalar multiplication

[Figure: Scalar multiplication of a vector by a factor of 3 stretches the vector out.]
[Figure: The scalar multiplications −a and 2a of a vector a.]

In mathematics, scalar multiplication is one of the basic operations defining a vector space in linear algebra [1] [2] [3] (or, more generally, a module in abstract algebra [4] [5]). In common geometric contexts, scalar multiplication of a real Euclidean vector by a positive real number multiplies the magnitude of the vector without changing its direction. The term "scalar" itself derives from this usage: a scalar is that which scales vectors. Scalar multiplication is the multiplication of a vector by a scalar (where the product is a vector), and is to be distinguished from the inner product of two vectors (where the product is a scalar).


Definition

In general, if K is a field and V is a vector space over K, then scalar multiplication is a function from K×V to V. The result of applying this function to k in K and v in V is denoted kv.
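As a concrete illustration, here is a minimal Python sketch of this function for K = ℝ and V = ℝⁿ, with vectors modeled as tuples (the helper name `scale` is illustrative, not from any library):

```python
# A minimal sketch: scalar multiplication as a function K × V → V,
# here with K the real numbers (floats) and V = R^n (tuples of floats).

def scale(k, v):
    """Return the vector kv: each component of v multiplied by k."""
    return tuple(k * x for x in v)

v = (1.0, -2.0, 4.0)
print(scale(3.0, v))   # (3.0, -6.0, 12.0) -- stretched by a factor of 3
print(scale(-1.0, v))  # (-1.0, 2.0, -4.0) -- the additive inverse -v
```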

Properties

Scalar multiplication obeys the following rules, where c and d are scalars and v and w are vectors:

- Additivity in the scalar: (c + d)v = cv + dv;
- Additivity in the vector: c(v + w) = cv + cw;
- Compatibility of the product of scalars with scalar multiplication: (cd)v = c(dv);
- Multiplying by 1 does not change a vector: 1v = v;
- Multiplying by 0 gives the zero vector: 0v = 0;
- Multiplying by −1 gives the additive inverse: (−1)v = −v.

Here, + is addition either in the field or in the vector space, as appropriate, and 0 is the additive identity in either. Juxtaposition indicates either scalar multiplication or the multiplication operation in the field.
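These rules can be checked numerically. A short sketch for K = ℝ and V = ℝ², reusing the illustrative `scale` helper from the sketch above together with a componentwise `add` (both names are assumptions, not library functions):

```python
def scale(k, v):
    """Scalar multiplication, componentwise (as in the sketch above)."""
    return tuple(k * x for x in v)

def add(v, w):
    """Vector addition, componentwise."""
    return tuple(x + y for x, y in zip(v, w))

c, d = 2.0, 3.0
v, w = (1.0, -2.0), (4.0, 0.5)

assert scale(c + d, v) == add(scale(c, v), scale(d, v))      # (c + d)v = cv + dv
assert scale(c, add(v, w)) == add(scale(c, v), scale(c, w))  # c(v + w) = cv + cw
assert scale(c * d, v) == scale(c, scale(d, v))              # (cd)v = c(dv)
assert scale(1.0, v) == v                                    # 1v = v
assert scale(0.0, v) == (0.0, 0.0)                           # 0v = 0
assert scale(-1.0, v) == (-1.0, 2.0)                         # (-1)v = -v
```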

Interpretation

Scalar multiplication may be viewed as an external binary operation or as an action of the field on the vector space. A geometric interpretation is that it stretches or contracts vectors by a constant factor: the result is a vector in the same direction as the original (or the opposite direction, for a negative scalar) but generally of a different length. [6]

As a special case, V may be taken to be K itself and scalar multiplication may then be taken to be simply the multiplication in the field.

When V is Kn, scalar multiplication is equivalent to multiplication of each component by the scalar, and may be defined as such.
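In symbols, for V = Kⁿ:

\[
\lambda\,(v_1, v_2, \dots, v_n) = (\lambda v_1, \lambda v_2, \dots, \lambda v_n),
\qquad \text{e.g.} \quad 3\,(2, -1, 4) = (6, -3, 12).
\]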

The same idea applies if K is a commutative ring and V is a module over K. K can even be a rig (a ring without additive inverses), but then there is no additive inverse. If K is not commutative, the distinct operations of left scalar multiplication cv and right scalar multiplication vc may be defined.

Scalar multiplication of matrices

The left scalar multiplication of a matrix A with a scalar λ gives another matrix of the same size as A. It is denoted by λA, whose entries are defined by

\[
(\lambda \mathbf{A})_{ij} = \lambda \left( \mathbf{A} \right)_{ij},
\]

explicitly:

\[
\lambda \mathbf{A} = \lambda \begin{pmatrix}
A_{11} & A_{12} & \cdots & A_{1m} \\
A_{21} & A_{22} & \cdots & A_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nm}
\end{pmatrix} = \begin{pmatrix}
\lambda A_{11} & \lambda A_{12} & \cdots & \lambda A_{1m} \\
\lambda A_{21} & \lambda A_{22} & \cdots & \lambda A_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
\lambda A_{n1} & \lambda A_{n2} & \cdots & \lambda A_{nm}
\end{pmatrix}.
\]

Similarly, even though there is no widely accepted definition, the right scalar multiplication of a matrix A with a scalar λ could be defined to be

\[
(\mathbf{A} \lambda)_{ij} = \left( \mathbf{A} \right)_{ij} \lambda,
\]

explicitly:

\[
\mathbf{A} \lambda = \begin{pmatrix}
A_{11} & A_{12} & \cdots & A_{1m} \\
A_{21} & A_{22} & \cdots & A_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nm}
\end{pmatrix} \lambda = \begin{pmatrix}
A_{11} \lambda & A_{12} \lambda & \cdots & A_{1m} \lambda \\
A_{21} \lambda & A_{22} \lambda & \cdots & A_{2m} \lambda \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} \lambda & A_{n2} \lambda & \cdots & A_{nm} \lambda
\end{pmatrix}.
\]

When the entries of the matrix and the scalars belong to the same commutative field (for example, the real numbers or the complex numbers), these two multiplications are the same and can simply be called scalar multiplication. For matrices over a more general ring that is not commutative, however, they may not be equal.
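As a quick numerical check over the reals, here is a short sketch (using NumPy purely as one convenient choice of array library):

```python
import numpy as np

# Over a commutative field such as the reals, left and right
# scalar multiplication of a matrix coincide entrywise.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
lam = 2.0

assert np.array_equal(lam * A, A * lam)  # lambda*A == A*lambda
```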

For a real scalar and matrix:

\[
\lambda = 2, \quad \mathbf{A} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad
2 \mathbf{A} = 2 \begin{pmatrix} a & b \\ c & d \end{pmatrix}
= \begin{pmatrix} 2a & 2b \\ 2c & 2d \end{pmatrix}
= \begin{pmatrix} a \cdot 2 & b \cdot 2 \\ c \cdot 2 & d \cdot 2 \end{pmatrix}
= \mathbf{A} \cdot 2.
\]

For quaternion scalars and matrices:

\[
\lambda = i, \quad \mathbf{A} = \begin{pmatrix} i & 0 \\ 0 & j \end{pmatrix}, \qquad
i \begin{pmatrix} i & 0 \\ 0 & j \end{pmatrix}
= \begin{pmatrix} i^2 & 0 \\ 0 & ij \end{pmatrix}
= \begin{pmatrix} -1 & 0 \\ 0 & k \end{pmatrix}
\neq \begin{pmatrix} -1 & 0 \\ 0 & -k \end{pmatrix}
= \begin{pmatrix} i^2 & 0 \\ 0 & ji \end{pmatrix}
= \begin{pmatrix} i & 0 \\ 0 & j \end{pmatrix} i,
\]

where i, j, k are the quaternion units. Because quaternion multiplication is non-commutative, ij = +k while ji = −k, so the left and right scalar products of the same matrix differ.
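The same failure can be reproduced in a few lines of Python. The following sketch hand-rolls the Hamilton product; the `Quaternion` class and all names in it are illustrative, not from any library:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quaternion:
    w: float  # real part
    x: float  # coefficient of i
    y: float  # coefficient of j
    z: float  # coefficient of k

    def __mul__(self, q):
        # Hamilton product: encodes i^2 = j^2 = k^2 = ijk = -1.
        return Quaternion(
            self.w*q.w - self.x*q.x - self.y*q.y - self.z*q.z,
            self.w*q.x + self.x*q.w + self.y*q.z - self.z*q.y,
            self.w*q.y - self.x*q.z + self.y*q.w + self.z*q.x,
            self.w*q.z + self.x*q.y - self.y*q.x + self.z*q.w,
        )

zero = Quaternion(0, 0, 0, 0)
i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)
k = Quaternion(0, 0, 0, 1)

assert i * j == k                        # ij = +k
assert j * i == Quaternion(0, 0, 0, -1)  # ji = -k

# Left vs right scalar multiplication of the matrix A = [[i, 0], [0, j]]:
A = [[i, zero], [zero, j]]
left  = [[i * a for a in row] for row in A]  # lambda*A
right = [[a * i for a in row] for row in A]  # A*lambda
assert left != right  # they differ in the bottom-right entry: ij = k vs ji = -k
```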


References

  1. Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4.
  2. Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6.
  3. Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2.
  4. Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). John Wiley & Sons. ISBN 0-471-43334-9.
  5. Lang, Serge (2002). Algebra. Graduate Texts in Mathematics. Springer. ISBN 0-387-95385-X.
  6. Weisstein, Eric W. "Scalar Multiplication". mathworld.wolfram.com. Retrieved 2020-09-06.