Matrix addition

[Figure: Illustration of the addition of two matrices.]

In mathematics, matrix addition is the operation of adding two matrices by adding the corresponding entries together. However, there are other operations which could also be considered addition for matrices, such as the direct sum and the Kronecker sum.


Entrywise sum

Two matrices must have the same number of rows and columns in order to be added.[1] In that case, the sum of two matrices A and B is a matrix with the same number of rows and columns as A and B. The sum, denoted A + B, is computed by adding corresponding entries of A and B:[2][3]
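Writing a_{ij} and b_{ij} for the (i, j) entries of A and B (both of size m × n), this reads

A + B =
\begin{pmatrix}
a_{11} + b_{11} & a_{12} + b_{12} & \cdots & a_{1n} + b_{1n} \\
a_{21} + b_{21} & a_{22} + b_{22} & \cdots & a_{2n} + b_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} + b_{m1} & a_{m2} + b_{m2} & \cdots & a_{mn} + b_{mn}
\end{pmatrix}.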

Or more concisely (assuming that A + B = C):[4][5]
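In the same entry notation, with C = (c_{ij}),

c_{ij} = a_{ij} + b_{ij} \qquad \text{for } 1 \le i \le m, \; 1 \le j \le n.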

For example:
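Taking a pair of 3 × 2 matrices with illustrative entries,

\begin{pmatrix} 1 & 3 \\ 1 & 0 \\ 1 & 2 \end{pmatrix}
+
\begin{pmatrix} 0 & 0 \\ 7 & 5 \\ 2 & 1 \end{pmatrix}
=
\begin{pmatrix} 1 + 0 & 3 + 0 \\ 1 + 7 & 0 + 5 \\ 1 + 2 & 2 + 1 \end{pmatrix}
=
\begin{pmatrix} 1 & 3 \\ 8 & 5 \\ 3 & 3 \end{pmatrix}.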

Similarly, it is also possible to subtract one matrix from another, as long as they have the same dimensions. The difference of A and B, denoted A − B, is computed by subtracting the entries of B from the corresponding entries of A, and has the same dimensions as A and B. For example:
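With the same illustrative 3 × 2 matrices as above,

\begin{pmatrix} 1 & 3 \\ 1 & 0 \\ 1 & 2 \end{pmatrix}
-
\begin{pmatrix} 0 & 0 \\ 7 & 5 \\ 2 & 1 \end{pmatrix}
=
\begin{pmatrix} 1 - 0 & 3 - 0 \\ 1 - 7 & 0 - 5 \\ 1 - 2 & 2 - 1 \end{pmatrix}
=
\begin{pmatrix} 1 & 3 \\ -6 & -5 \\ -1 & 1 \end{pmatrix}.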

Direct sum

Another operation, which is used less often, is the direct sum (denoted by ⊕). The Kronecker sum is also denoted ⊕; the context should make the usage clear. The direct sum of any pair of matrices A of size m × n and B of size p × q is a matrix of size (m + p) × (n + q) defined as:[6][2]
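In block form, with 0_{r \times s} denoting the r × s zero matrix,

A \oplus B =
\begin{pmatrix}
A & 0_{m \times q} \\
0_{p \times n} & B
\end{pmatrix},

so the entries of A occupy the upper-left m × n block, the entries of B occupy the lower-right p × q block, and all remaining entries are zero.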

For instance,
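taking a 2 × 2 matrix and a 1 × 2 matrix with illustrative entries,

\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}
\oplus
\begin{pmatrix} 5 & 6 \end{pmatrix}
=
\begin{pmatrix}
1 & 2 & 0 & 0 \\
3 & 4 & 0 & 0 \\
0 & 0 & 5 & 6
\end{pmatrix},

a matrix of size (2 + 1) × (2 + 2) = 3 × 4.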

The direct sum of matrices is a special type of block matrix. In particular, the direct sum of square matrices is a block diagonal matrix.

The adjacency matrix of the union of disjoint graphs (or multigraphs) is the direct sum of their adjacency matrices. Any element in the direct sum of two vector spaces of matrices can be represented as a direct sum of two matrices.

In general, the direct sum of n matrices is:[2]
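In block-diagonal form,

\bigoplus_{i=1}^{n} A_i = \operatorname{diag}(A_1, A_2, \ldots, A_n) =
\begin{pmatrix}
A_1 & 0 & \cdots & 0 \\
0 & A_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & A_n
\end{pmatrix},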

where the zeros are actually blocks of zeros (i.e., zero matrices).

Kronecker sum

The Kronecker sum is different from the direct sum, but is also denoted by ⊕. It is defined using the Kronecker product ⊗ and ordinary (entrywise) matrix addition. If A is n-by-n, B is m-by-m, and I_k denotes the k-by-k identity matrix, then the Kronecker sum is defined by:
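In symbols, with I_m and I_n the identity matrices of the sizes indicated by the subscripts,

A \oplus B = A \otimes I_m + I_n \otimes B,

which is an nm × nm matrix.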

Notes

  1. Anton, Howard; Rorres, Chris. Elementary Linear Algebra (10th ed.), p. 53.
  2. Lipschutz & Lipson 2017.
  3. Riley, Hobson & Bence 2006.
  4. Weisstein, Eric W. "Matrix Addition". mathworld.wolfram.com. Retrieved 2020-09-07.
  5. "Finding the Sum and Difference of Two Matrices | College Algebra". courses.lumenlearning.com. Retrieved 2020-09-07.
  6. Weisstein, Eric W. "Matrix Direct Sum". MathWorld.

Related Research Articles

In linear algebra, the identity matrix of size n is the n × n square matrix with ones on the main diagonal and zeros elsewhere.

In linear algebra, the outer product of two coordinate vectors is a matrix. If the two vectors have dimensions n and m, then their outer product is an n × m matrix. More generally, given two tensors, their outer product is a tensor. The outer product of tensors is also referred to as their tensor product, and can be used to define the tensor algebra.


In mathematics, particularly in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB.

In linear algebra, a Toeplitz matrix or diagonal-constant matrix, named after Otto Toeplitz, is a matrix in which each descending diagonal from left to right is constant; equivalently, its (i, j) entry depends only on the difference i − j.

In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An identity matrix of any size, or any multiple of it, is a diagonal matrix.

In linear algebra, a square matrix A is called diagonalizable or non-defective if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix P and a diagonal matrix D such that P^{-1}AP = D, or equivalently A = PDP^{-1}. For a finite-dimensional vector space V, a linear map T : V → V is called diagonalizable if there exists an ordered basis of V consisting of eigenvectors of T. These definitions are equivalent: if T has a matrix representation A = PDP^{-1} as above, then the column vectors of P form a basis consisting of eigenvectors of T, and the diagonal entries of D are the corresponding eigenvalues of T; with respect to this eigenvector basis, T is represented by D. Diagonalization is the process of finding the above P and D.

In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices.

In mathematics, the spectral radius of a square matrix is the maximum of the absolute values of its eigenvalues. More generally, the spectral radius of a bounded linear operator is the supremum of the absolute values of the elements of its spectrum. The spectral radius is often denoted by ρ(·).

In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices. Intuitively, a matrix interpreted as a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines, which break it up, or partition it, into a collection of smaller matrices. Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned.

In mathematics, the Kronecker product, sometimes denoted by ⊗, is an operation on two matrices of arbitrary size resulting in a block matrix. It is a generalization of the outer product from vectors to matrices, and gives the matrix of the tensor product linear map with respect to a standard choice of basis. The Kronecker product is to be distinguished from the usual matrix multiplication, which is an entirely different operation. The Kronecker product is also sometimes called matrix direct product.
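In block form, writing a_{ij} for the entries of an m × n matrix A, the Kronecker product with a p × q matrix B is the mp × nq block matrix

A \otimes B =
\begin{pmatrix}
a_{11} B & \cdots & a_{1n} B \\
\vdots & \ddots & \vdots \\
a_{m1} B & \cdots & a_{mn} B
\end{pmatrix}.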

In linear algebra, a circulant matrix is a square matrix in which all row vectors are composed of the same elements and each row vector is rotated one element to the right relative to the preceding row vector. It is a particular kind of Toeplitz matrix.

In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices. It collects the various partial derivatives of a single function with respect to many variables, and/or of a multivariate function with respect to a single variable, into vectors and matrices that can be treated as single entities. This greatly simplifies operations such as finding the maximum or minimum of a multivariate function and solving systems of differential equations. The notation used here is commonly used in statistics and engineering, while the tensor index notation is preferred in physics.

In the mathematical discipline of matrix theory, a Jordan matrix, named after Camille Jordan, is a block diagonal matrix over a ring R, where each block along the diagonal, called a Jordan block, has a single element of R repeated along its main diagonal, ones on the superdiagonal, and zeros elsewhere.

In mathematics, a moment matrix is a special symmetric square matrix whose rows and columns are indexed by monomials. The entries of the matrix depend on the product of the indexing monomials only.

In mathematics, especially in linear algebra and matrix theory, the commutation matrix is used for transforming the vectorized form of a matrix into the vectorized form of its transpose. Specifically, the commutation matrix K(m,n) is the nm × mn matrix which, for any m × n matrix A, transforms vec(A) into vec(A^T): K(m,n) vec(A) = vec(A^T).


In mathematics, especially in linear algebra and matrix theory, the vectorization of a matrix is a linear transformation which converts the matrix into a column vector. Specifically, the vectorization of an m × n matrix A, denoted vec(A), is the mn × 1 column vector obtained by stacking the columns of the matrix A on top of one another.

In statistics, Bayesian multivariate linear regression is a Bayesian approach to multivariate linear regression, i.e. linear regression where the predicted outcome is a vector of correlated random variables rather than a single scalar random variable. A more general treatment of this approach can be found in the article MMSE estimator.

In mathematics, a Carleman matrix is a matrix used to convert function composition into matrix multiplication. It is often used in iteration theory to find the continuous iteration of functions which cannot be iterated by pattern recognition alone. Other uses of Carleman matrices occur in the theory of probability generating functions, and Markov chains.

In analytical mechanics, the mass matrix is a symmetric matrix M that expresses the connection between the time derivative of the generalized coordinate vector q of a system and the kinetic energy T of that system, by the equation T = ½ q̇^T M q̇.

In mathematics, the Frobenius inner product is a binary operation that takes two matrices and returns a scalar. It is often denoted ⟨A, B⟩_F. The operation is a component-wise inner product of two matrices as though they were vectors, and it satisfies the axioms of an inner product. The two matrices must have the same dimensions (the same number of rows and columns), but they are not restricted to be square matrices.
