In mathematics, a **matrix** (pl.: **matrices**) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object.

For example,

$$\begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix}$$

is a matrix with two rows and three columns. This is often referred to as a "two by three matrix", a "$2 \times 3$ matrix", or a matrix of dimension $2 \times 3$.

Matrices are used to represent linear maps and allow explicit computations in linear algebra. Therefore, the study of matrices is a large part of linear algebra, and most properties and operations of abstract linear algebra can be expressed in terms of matrices. For example, matrix multiplication represents the composition of linear maps.

Not all matrices are related to linear algebra. This is the case, in particular, in graph theory, for incidence matrices and adjacency matrices.^{[1]} This article focuses on matrices related to linear algebra, and, unless otherwise specified, all matrices represent linear maps or may be viewed as such.

Square matrices, matrices with the same number of rows and columns, play a major role in matrix theory. Square matrices of a given dimension form one of the most common examples of a noncommutative ring. The determinant of a square matrix is a number associated to the matrix, which is fundamental for the study of a square matrix; for example, a square matrix is invertible if and only if it has a nonzero determinant, and the eigenvalues of a square matrix are the roots of its characteristic polynomial, which is defined in terms of a determinant.

In geometry, matrices are widely used for specifying and representing geometric transformations (for example rotations) and coordinate changes. In numerical analysis, many computational problems are solved by reducing them to a matrix computation, and this often involves computing with matrices of huge dimension. Matrices are used in most areas of mathematics and most scientific fields, either directly, or through their use in geometry and numerical analysis.

**Matrix theory** is the branch of mathematics that focuses on the study of matrices. It was initially a sub-branch of linear algebra, but soon grew to include subjects related to graph theory, algebra, combinatorics and statistics.

A *matrix* is a rectangular array of numbers (or other mathematical objects), called the *entries* of the matrix. Matrices are subject to standard operations such as addition and multiplication.^{[2]} Most commonly, a matrix over a field *F* is a rectangular array of elements of *F*.^{[3]}^{[4]} A **real matrix** and a **complex matrix** are matrices whose entries are respectively real numbers or complex numbers. More general types of entries are discussed below. For instance, this is a real matrix:

$$\mathbf{A} = \begin{bmatrix} -1.3 & 0.6 \\ 20.4 & 5.5 \\ 9.7 & -6.2 \end{bmatrix}$$

The numbers, symbols, or expressions in the matrix are called its *entries* or its *elements*. The horizontal and vertical lines of entries in a matrix are called *rows* and *columns*, respectively.

The size of a matrix is defined by the number of rows and columns it contains. There is no limit to the number of rows and columns that a matrix (in the usual sense) can have, as long as they are positive integers. A matrix with $m$ rows and $n$ columns is called an $m \times n$ matrix, or $m$-by-$n$ matrix, where $m$ and $n$ are called its *dimensions*. For example, the matrix $\mathbf{A}$ above is a $3 \times 2$ matrix.

Matrices with a single row are called *row vectors*, and those with a single column are called *column vectors*. A matrix with the same number of rows and columns is called a *square matrix*.^{[5]} A matrix with an infinite number of rows or columns (or both) is called an *infinite matrix*. In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an *empty matrix*.

| Name | Size | Example | Description |
|---|---|---|---|
| Row vector | 1 × *n* | $\begin{bmatrix} 3 & 7 & 2 \end{bmatrix}$ | A matrix with one row, sometimes used to represent a vector |
| Column vector | *n* × 1 | $\begin{bmatrix} 4 \\ 1 \\ 8 \end{bmatrix}$ | A matrix with one column, sometimes used to represent a vector |
| Square matrix | *n* × *n* | $\begin{bmatrix} 9 & 13 & 5 \\ 1 & 11 & 7 \\ 2 & 6 & 3 \end{bmatrix}$ | A matrix with the same number of rows and columns, sometimes used to represent a linear transformation from a vector space to itself, such as reflection, rotation, or shearing |

The specifics of symbolic matrix notation vary widely, with some prevailing trends. Matrices are commonly written in square brackets or parentheses, so that an $m \times n$ matrix $\mathbf{A}$ is represented as

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}.$$

This may be abbreviated by writing only a single generic term, possibly along with indices, as in

$$\mathbf{A} = (a_{ij}), \quad [a_{ij}], \quad \text{or} \quad (a_{ij})_{1 \le i \le m,\ 1 \le j \le n},$$

or $\mathbf{A} = (a_{ij})_{n \times n}$ in the case that $n = m$.

Matrices are usually symbolized using upper-case letters (such as $\mathbf{A}$ in the examples above), while the corresponding lower-case letters, with two subscript indices (e.g., $a_{11}$, or $a_{1,1}$), represent the entries. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface roman (non-italic), to further distinguish matrices from other mathematical objects. An alternative notation involves the use of a double-underline with the variable name, with or without boldface style, as in $\underline{\underline{A}}$.

The entry in the *i*-th row and *j*-th column of a matrix **A** is sometimes referred to as the $(i, j)$ entry of the matrix, and commonly denoted by $a_{i,j}$ or $a_{ij}$. Alternative notations for that entry are $\mathbf{A}[i, j]$ and $\mathbf{A}_{i,j}$. For example, the $(1, 3)$ entry of the following matrix $\mathbf{A}$ is 5 (also denoted $a_{13}$, $a_{1,3}$, $\mathbf{A}[1, 3]$, or $\mathbf{A}_{1,3}$):

$$\mathbf{A} = \begin{bmatrix} 4 & -7 & 5 & 0 \\ -2 & 0 & 11 & 8 \\ 19 & 1 & -3 & 12 \end{bmatrix}$$

Sometimes, the entries of a matrix can be defined by a formula such as $a_{i,j} = f(i, j)$. For example, each of the entries of the following matrix $\mathbf{A}$ is determined by the formula $a_{ij} = i - j$.

$$\mathbf{A} = \begin{bmatrix} 0 & -1 & -2 & -3 \\ 1 & 0 & -1 & -2 \\ 2 & 1 & 0 & -1 \end{bmatrix}$$

In this case, the matrix itself is sometimes defined by that formula, within square brackets or double parentheses. For example, the matrix above is defined as $\mathbf{A} = [i - j]$ or $\mathbf{A} = ((i - j))$. If the matrix size is $m \times n$, the above-mentioned formula $f(i, j)$ is valid for any $i = 1, \dots, m$ and any $j = 1, \dots, n$. This can be either specified separately, or indicated using $m \times n$ as a subscript. For instance, the matrix above is $3 \times 4$, and can be defined as $\mathbf{A} = [i - j]_{3 \times 4}$ or $\mathbf{A} = ((i - j))_{3 \times 4}$.

Some programming languages utilize doubly subscripted arrays (or arrays of arrays) to represent an *m*-by-*n* matrix. Some programming languages start the numbering of array indexes at zero, in which case the entries of an *m*-by-*n* matrix are indexed by $0 \le i \le m - 1$ and $0 \le j \le n - 1$.^{[6]} This article follows the more common convention in mathematical writing where enumeration starts from 1.
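
As an illustration, here is a minimal Python sketch (a hypothetical example, not tied to any particular library) of a matrix stored as a zero-indexed array of arrays:

```python
# A 2-by-3 matrix stored as a list of rows (an "array of arrays").
# Python indexes from 0, so the mathematical entry a_{1,1} is A[0][0].
A = [[1, 9, -13],
     [20, 5, -6]]

print(A[0][2])            # entry a_{1,3} in 1-based mathematical notation: -13
print(len(A), len(A[0]))  # number of rows and columns: 2 3
```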

The set of all *m*-by-*n* real matrices is often denoted $\mathcal{M}(m, n)$ or $\mathcal{M}_{m \times n}(\mathbb{R})$. The set of all *m*-by-*n* matrices over another field, or over a ring $R$, is similarly denoted $\mathcal{M}(m, n, R)$ or $\mathcal{M}_{m \times n}(R)$. If $m = n$, such as in the case of square matrices, one does not repeat the dimension: $\mathcal{M}(n, R)$ or $\mathcal{M}_n(R)$.^{[7]} Often, $M$, or $\operatorname{Mat}$, is used in place of $\mathcal{M}$.

There are a number of basic operations that can be applied to matrices. Some, such as *transposition* and taking a *submatrix*, do not depend on the nature of the entries. Others, such as *matrix addition*, *scalar multiplication*, *matrix multiplication*, and *row operations*, involve operations on matrix entries and therefore require that matrix entries are numbers or belong to a field or a ring.^{[8]}

In this section, it is supposed that matrix entries belong to a fixed ring, which is typically a field of numbers.

- Addition

The *sum***A**+**B** of two *m*-by-*n* matrices **A** and **B** is calculated entrywise:

- (**A** + **B**)_{i,j} = **A**_{i,j} + **B**_{i,j}, where 1 ≤ *i* ≤ *m* and 1 ≤ *j* ≤ *n*.

For example,

$$\begin{bmatrix} 1 & 3 & 1 \\ 1 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 5 \\ 7 & 5 & 0 \end{bmatrix} = \begin{bmatrix} 1+0 & 3+0 & 1+5 \\ 1+7 & 0+5 & 0+0 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 6 \\ 8 & 5 & 0 \end{bmatrix}$$

- Scalar multiplication

The product *c***A** of a number *c* (also called a scalar in this context) and a matrix **A** is computed by multiplying every entry of **A** by *c*:

- (*c***A**)_{i,j} = *c* · **A**_{i,j}.

This operation is called *scalar multiplication*, but its result is not named "scalar product" to avoid confusion, since "scalar product" is often used as a synonym for "inner product". For example:

$$2 \cdot \begin{bmatrix} 1 & 8 & -3 \\ 4 & -2 & 5 \end{bmatrix} = \begin{bmatrix} 2 & 16 & -6 \\ 8 & -4 & 10 \end{bmatrix}$$

- Subtraction

The subtraction of two *m*×*n* matrices is defined by composing matrix addition with scalar multiplication by −1:

$$\mathbf{A} - \mathbf{B} = \mathbf{A} + (-1) \cdot \mathbf{B}$$

- Transposition

The *transpose* of an *m*-by-*n* matrix **A** is the *n*-by-*m* matrix **A**^{T} (also denoted **A**^{tr} or ^{t}**A**) formed by turning rows into columns and vice versa:

- (**A**^{T})_{i,j} = **A**_{j,i}.

For example:

$$\begin{bmatrix} 1 & 2 & 3 \\ 0 & -6 & 7 \end{bmatrix}^{\mathrm{T}} = \begin{bmatrix} 1 & 0 \\ 2 & -6 \\ 3 & 7 \end{bmatrix}$$

Familiar properties of numbers extend to these operations on matrices: for example, addition is commutative, that is, the matrix sum does not depend on the order of the summands: **A** + **B** = **B** + **A**.^{ [9] } The transpose is compatible with addition and scalar multiplication, as expressed by (*c***A**)^{T} = *c*(**A**^{T}) and (**A** + **B**)^{T} = **A**^{T} + **B**^{T}. Finally, (**A**^{T})^{T} = **A**.
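
These identities are easy to check numerically; a short sketch using NumPy (illustrative only, reusing the matrices from the addition example above):

```python
import numpy as np

A = np.array([[1, 3, 1],
              [1, 0, 0]])
B = np.array([[0, 0, 5],
              [7, 5, 0]])
c = 2

print(A + B)   # entrywise matrix addition
print(c * A)   # scalar multiplication

# Properties of addition and the transpose
assert np.array_equal(A + B, B + A)              # addition is commutative
assert np.array_equal((c * A).T, c * (A.T))      # (cA)^T = c(A^T)
assert np.array_equal((A + B).T, A.T + B.T)      # (A+B)^T = A^T + B^T
assert np.array_equal((A.T).T, A)                # (A^T)^T = A
```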

*Multiplication* of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If **A** is an *m*-by-*n* matrix and **B** is an *n*-by-*p* matrix, then their *matrix product* **AB** is the *m*-by-*p* matrix whose entries are given by the dot product of the corresponding row of **A** and the corresponding column of **B**:^{[10]}

$$[\mathbf{AB}]_{i,j} = a_{i,1}b_{1,j} + a_{i,2}b_{2,j} + \cdots + a_{i,n}b_{n,j} = \sum_{r=1}^{n} a_{i,r}b_{r,j},$$

where 1 ≤ *i* ≤ *m* and 1 ≤ *j* ≤ *p*.^{[11]} For example, the underlined entry 2340 in the product is calculated as (2 × 1000) + (3 × 100) + (4 × 10) = 2340:

$$\begin{bmatrix} \underline{2} & \underline{3} & \underline{4} \\ 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & \underline{1000} \\ 1 & \underline{100} \\ 0 & \underline{10} \end{bmatrix} = \begin{bmatrix} 3 & \underline{2340} \\ 0 & 1000 \end{bmatrix}$$

Matrix multiplication satisfies the rules (**AB**)**C** = **A**(**BC**) (associativity), and (**A** + **B**)**C** = **AC** + **BC** as well as **C**(**A** + **B**) = **CA** + **CB** (left and right distributivity), whenever the size of the matrices is such that the various products are defined.^{ [12] } The product **AB** may be defined without **BA** being defined, namely if **A** and **B** are *m*-by-*n* and *n*-by-*k* matrices, respectively, and *m* ≠ *k*. Even if both products are defined, they generally need not be equal, that is:

**AB** ≠ **BA**,

In other words, matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors.^{[10]} An example of two matrices not commuting with each other is:

$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 3 \end{bmatrix},$$

whereas

$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 4 \\ 0 & 0 \end{bmatrix}.$$
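
A minimal sketch of the product from its definition, together with a check of the non-commuting example above (plain Python, for illustration only):

```python
def matmul(A, B):
    """Naive matrix product from the definition: entry (i, j) is the
    dot product of row i of A with column j of B."""
    m, n = len(A), len(A[0])
    n2, p = len(B), len(B[0])
    assert n == n2, "columns of A must match rows of B"
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [0, 0]]
print(matmul(A, B))   # [[0, 1], [0, 3]]
print(matmul(B, A))   # [[3, 4], [0, 0]]  -- not equal: AB != BA
```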

Besides the ordinary matrix multiplication just described, other less frequently used operations on matrices that can be considered forms of multiplication also exist, such as the Hadamard product and the Kronecker product.^{ [13] } They arise in solving matrix equations such as the Sylvester equation.

There are three types of row operations:

- row addition, that is, adding a row to another;
- row multiplication, that is, multiplying all entries of a row by a nonzero constant;
- row switching, that is, interchanging two rows of a matrix.

These operations are used in several ways, including solving linear equations and finding matrix inverses.
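
For illustration, the three row operations might be sketched as follows for a matrix stored as a list of rows (hypothetical helper functions, not from any particular library):

```python
def row_switch(M, i, j):
    """Interchange rows i and j (in place)."""
    M[i], M[j] = M[j], M[i]

def row_scale(M, i, c):
    """Multiply every entry of row i by a nonzero constant c."""
    M[i] = [c * x for x in M[i]]

def row_add(M, i, j, c=1):
    """Add c times row j to row i."""
    M[i] = [x + c * y for x, y in zip(M[i], M[j])]

M = [[1, 2], [3, 4]]
row_add(M, 1, 0, c=-3)   # eliminate the leading entry of row 2
print(M)                 # [[1, 2], [0, -2]]
```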

A **submatrix** of a matrix is a matrix obtained by deleting any collection of rows and/or columns.^{[14]}^{[15]}^{[16]} For example, from the following 3-by-4 matrix, we can construct a 2-by-3 submatrix by removing row 3 and column 2:

$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \end{bmatrix} \longrightarrow \begin{bmatrix} 1 & 3 & 4 \\ 5 & 7 & 8 \end{bmatrix}$$

The minors and cofactors of a matrix are found by computing the determinant of certain submatrices.^{ [16] }^{ [17] }

A **principal submatrix** is a square submatrix obtained by removing certain rows and columns. The definition varies from author to author. According to some authors, a principal submatrix is a submatrix in which the set of row indices that remain is the same as the set of column indices that remain.^{ [18] }^{ [19] } Other authors define a principal submatrix as one in which the first *k* rows and columns, for some number *k*, are the ones that remain;^{ [20] } this type of submatrix has also been called a **leading principal submatrix**.^{ [21] }

Matrices can be used to compactly write and work with multiple linear equations, that is, systems of linear equations. For example, if **A** is an *m*-by-*n* matrix, **x** designates a column vector (that is, *n*×1-matrix) of *n* variables *x*_{1}, *x*_{2}, ..., *x*_{n}, and **b** is an *m*×1-column vector, then the matrix equation

$$\mathbf{A}\mathbf{x} = \mathbf{b}$$

is equivalent to the system of linear equations^{[22]}

$$\begin{aligned} a_{1,1}x_1 + a_{1,2}x_2 + \cdots + a_{1,n}x_n &= b_1 \\ &\ \ \vdots \\ a_{m,1}x_1 + a_{m,2}x_2 + \cdots + a_{m,n}x_n &= b_m \end{aligned}$$

Using matrices, this can be solved more compactly than would be possible by writing out all the equations separately. If *n* = *m* and the equations are independent, then this can be done by writing

$$\mathbf{x} = \mathbf{A}^{-1}\mathbf{b},$$
where **A**^{−1} is the inverse matrix of **A**. If **A** has no inverse, solutions—if any—can be found using its generalized inverse.
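
In practice, such systems are solved numerically without forming **A**^{−1} explicitly; a brief NumPy sketch with an arbitrary example system:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)              # preferred: avoids forming A^-1
x_via_inverse = np.linalg.inv(A) @ b   # the textbook formula x = A^-1 b
print(x, x_via_inverse)                # both give [0.8, 1.4]
```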

Matrices and matrix multiplication reveal their essential features when related to *linear transformations*, also known as *linear maps*. A real *m*-by-*n* matrix **A** gives rise to a linear transformation **R**^{n} → **R**^{m} mapping each vector **x** in **R**^{n} to the (matrix) product **Ax**, which is a vector in **R**^{m}. Conversely, each linear transformation *f*: **R**^{n} → **R**^{m} arises from a unique *m*-by-*n* matrix **A**: explicitly, the (*i*, *j*)-entry of **A** is the *i*^{th} coordinate of *f*(**e**_{j}), where **e**_{j} = (0,...,0,1,0,...,0) is the unit vector with 1 in the *j*^{th} position and 0 elsewhere. The matrix **A** is said to represent the linear map *f*, and **A** is called the *transformation matrix* of *f*.

For example, the 2×2 matrix

$$\mathbf{A} = \begin{bmatrix} a & c \\ b & d \end{bmatrix}$$

can be viewed as the transform of the unit square into a parallelogram with vertices at (0, 0), (*a*, *b*), (*a* + *c*, *b* + *d*), and (*c*, *d*). The parallelogram is obtained by multiplying **A** with each of the column vectors $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$, and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ in turn. These vectors define the vertices of the unit square.

The following table shows several 2×2 real matrices with the associated linear maps of **R**^{2}:

| Horizontal shear with *m* = 1.25 | Reflection through the vertical axis | Squeeze mapping with *r* = 3/2 | Scaling by a factor of 3/2 | Rotation by π/6 = 30° |
|---|---|---|---|---|
| $\begin{bmatrix} 1 & 1.25 \\ 0 & 1 \end{bmatrix}$ | $\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$ | $\begin{bmatrix} 3/2 & 0 \\ 0 & 2/3 \end{bmatrix}$ | $\begin{bmatrix} 3/2 & 0 \\ 0 & 3/2 \end{bmatrix}$ | $\begin{bmatrix} \cos(\pi/6) & -\sin(\pi/6) \\ \sin(\pi/6) & \cos(\pi/6) \end{bmatrix}$ |

Under the 1-to-1 correspondence between matrices and linear maps, matrix multiplication corresponds to composition of maps:^{ [23] } if a *k*-by-*m* matrix **B** represents another linear map *g*: **R**^{m} → **R**^{k}, then the composition *g* ∘ *f* is represented by **BA** since

- (*g* ∘ *f*)(**x**) = *g*(*f*(**x**)) = *g*(**Ax**) = **B**(**Ax**) = (**BA**)**x**.

The last equality follows from the above-mentioned associativity of matrix multiplication.

The rank of a matrix **A** is the maximum number of linearly independent row vectors of the matrix, which is the same as the maximum number of linearly independent column vectors.^{ [24] } Equivalently it is the dimension of the image of the linear map represented by **A**.^{ [25] } The rank–nullity theorem states that the dimension of the kernel of a matrix plus the rank equals the number of columns of the matrix.^{ [26] }
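
A short NumPy illustration of the rank (the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],    # a multiple of the first row
              [0, 1, 1]])
print(np.linalg.matrix_rank(A))   # 2: only two linearly independent rows
# By the rank-nullity theorem, the kernel has dimension 3 - 2 = 1.
```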

A square matrix is a matrix with the same number of rows and columns.^{[5]} An *n*-by-*n* matrix is known as a square matrix of order *n*. Any two square matrices of the same order can be added and multiplied. The entries *a*_{ii} form the main diagonal of a square matrix. They lie on the imaginary line that runs from the top left corner to the bottom right corner of the matrix.

| Name | Example with *n* = 3 |
|---|---|
| Diagonal matrix | $\begin{bmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{bmatrix}$ |
| Lower triangular matrix | $\begin{bmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$ |
| Upper triangular matrix | $\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix}$ |

If all entries of **A** below the main diagonal are zero, **A** is called an *upper triangular matrix*. Similarly, if all entries of **A** above the main diagonal are zero, **A** is called a *lower triangular matrix*. If all entries outside the main diagonal are zero, **A** is called a diagonal matrix.

The *identity matrix* **I**_{n} of size *n* is the *n*-by-*n* matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, for example,

$$\mathbf{I}_1 = \begin{bmatrix} 1 \end{bmatrix},\quad \mathbf{I}_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\quad \ldots,\quad \mathbf{I}_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$$

It is a square matrix of order *n*, and also a special kind of diagonal matrix. It is called an identity matrix because multiplication with it leaves a matrix unchanged:

**AI**_{n} = **I**_{m}**A** = **A** for any *m*-by-*n* matrix **A**.

A nonzero scalar multiple of an identity matrix is called a *scalar* matrix. If the matrix entries come from a field, the scalar matrices form a group, under matrix multiplication, that is isomorphic to the multiplicative group of nonzero elements of the field.

A square matrix **A** that is equal to its transpose, that is, **A** = **A**^{T}, is a symmetric matrix. If instead, **A** is equal to the negative of its transpose, that is, **A** = −**A**^{T}, then **A** is a skew-symmetric matrix. In complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy **A**^{∗} = **A**, where the star or asterisk denotes the conjugate transpose of the matrix, that is, the transpose of the complex conjugate of **A**.

By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; that is, every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real.^{ [27] } This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns, see below.

A square matrix **A** is called *invertible* or *non-singular* if there exists a matrix **B** such that

**AB** = **BA** = **I**_{n},^{[28]}^{[29]}

where **I**_{n} is the *n*×*n* identity matrix with 1s on the main diagonal and 0s elsewhere. If **B** exists, it is unique and is called the *inverse matrix* of **A**, denoted **A**^{−1}.

| Positive definite matrix | Indefinite matrix |
|---|---|
| $\begin{bmatrix} 1/4 & 0 \\ 0 & 1 \end{bmatrix}$ | $\begin{bmatrix} 1/4 & 0 \\ 0 & -1/4 \end{bmatrix}$ |
| $Q(x, y) = \tfrac{1}{4}x^2 + y^2$ | $Q(x, y) = \tfrac{1}{4}x^2 - \tfrac{1}{4}y^2$ |
| Points such that $Q(x, y) = 1$ (ellipse) | Points such that $Q(x, y) = 1$ (hyperbola) |

A symmetric real matrix **A** is called *positive-definite* if the associated quadratic form

*f*(**x**) = **x**^{T}**Ax**

has a positive value for every nonzero vector **x** in **R**^{n}. If *f*(**x**) only yields negative values, then **A** is *negative-definite*; if *f* takes both negative and positive values, then **A** is *indefinite*.^{[30]} If the quadratic form *f* yields only non-negative values (positive or zero), the symmetric matrix is called *positive-semidefinite* (or, if only non-positive values, negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.

A symmetric matrix is positive-definite if and only if all its eigenvalues are positive, that is, the matrix is positive-semidefinite and it is invertible.^{[31]} The table above shows two possibilities for 2-by-2 matrices.
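
This eigenvalue criterion is straightforward to check numerically; a sketch using NumPy, with the two matrices from the table above:

```python
import numpy as np

def is_positive_definite(A):
    """A symmetric real matrix is positive-definite iff all its
    eigenvalues are positive."""
    eigenvalues = np.linalg.eigvalsh(A)   # eigvalsh: for symmetric matrices
    return bool(np.all(eigenvalues > 0))

print(is_positive_definite(np.array([[0.25, 0.0], [0.0, 1.0]])))     # True
print(is_positive_definite(np.array([[0.25, 0.0], [0.0, -0.25]])))   # False (indefinite)
```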

Allowing as input two different vectors instead yields the bilinear form associated to **A**:^{ [32] }

*B*_{A}(**x**, **y**) = **x**^{T}**Ay**.

In the case of complex matrices, the same terminology and result apply, with *symmetric matrix*, *quadratic form*, *bilinear form*, and *transpose***x**^{T} replaced respectively by Hermitian matrix, Hermitian form, sesquilinear form, and conjugate transpose **x**^{H}.

An *orthogonal matrix* is a square matrix with real entries whose columns and rows are orthogonal unit vectors (that is, orthonormal vectors). Equivalently, a matrix **A** is orthogonal if its transpose is equal to its inverse:

$$\mathbf{A}^{\mathrm{T}} = \mathbf{A}^{-1},$$

which entails

$$\mathbf{A}^{\mathrm{T}}\mathbf{A} = \mathbf{A}\mathbf{A}^{\mathrm{T}} = \mathbf{I}_n,$$

where **I**_{n} is the identity matrix of size *n*.

An orthogonal matrix **A** is necessarily invertible (with inverse **A**^{−1} = **A**^{T}), unitary (**A**^{−1} = **A**^{∗}), and normal (**A**^{∗}**A** = **AA**^{∗}). The determinant of any orthogonal matrix is either +1 or −1. A *special orthogonal matrix* is an orthogonal matrix with determinant +1. As a linear transformation, every orthogonal matrix with determinant +1 is a pure rotation without reflection, i.e., the transformation preserves the orientation of the transformed structure, while every orthogonal matrix with determinant −1 reverses the orientation, i.e., is a composition of a pure reflection and a (possibly null) rotation. The identity matrices have determinant 1 and are pure rotations by an angle zero.

The complex analogue of an orthogonal matrix is a unitary matrix.

The trace, tr(**A**), of a square matrix **A** is the sum of its diagonal entries. While matrix multiplication is not commutative as mentioned above, the trace of the product of two matrices is independent of the order of the factors:

- tr(**AB**) = tr(**BA**).

This is immediate from the definition of matrix multiplication:

$$\operatorname{tr}(\mathbf{AB}) = \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij} b_{ji} = \operatorname{tr}(\mathbf{BA}).$$

It follows that the trace of the product of more than two matrices is independent of cyclic permutations of the matrices; however, this does not in general apply to arbitrary permutations (for example, tr(**ABC**) ≠ tr(**BAC**), in general). Also, the trace of a matrix is equal to that of its transpose, that is,

- tr(**A**) = tr(**A**^{T}).
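
These trace identities can be verified numerically; a brief NumPy sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

assert np.isclose(np.trace(A @ B), np.trace(B @ A))          # tr(AB) = tr(BA)
assert np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B))  # cyclic permutation
assert np.isclose(np.trace(A), np.trace(A.T))                # tr(A) = tr(A^T)
# tr(ABC) and tr(BAC) generally differ (a non-cyclic permutation):
print(np.trace(A @ B @ C), np.trace(B @ A @ C))
```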

The *determinant* of a square matrix **A** (denoted det(**A**) or |**A**|) is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in **R**^{2}) or volume (in **R**^{3}) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.

The determinant of 2-by-2 matrices is given by^{[33]}

$$\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc.$$

The determinant of 3-by-3 matrices involves 6 terms (rule of Sarrus). The more lengthy Leibniz formula generalises these two formulae to all dimensions.^{ [34] }

The determinant of a product of square matrices equals the product of their determinants:

- det(**AB**) = det(**A**) · det(**B**), or using alternate notation:
- |**AB**| = |**A**| · |**B**|.^{[35]}

Adding a multiple of any row to another row, or a multiple of any column to another column does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1.^{ [36] } Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices, the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, that is, determinants of smaller matrices.^{ [37] } This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0 matrix, which is 1), that can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where the division of the determinants of two related square matrices equates to the value of each of the system's variables.^{ [38] }
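
The recursive Laplace definition translates directly into code; a minimal Python sketch (exponential-time, for illustration only):

```python
def det(A):
    """Determinant via Laplace expansion along the first row.
    The 0-by-0 base case has determinant 1 (the empty product)."""
    n = len(A)
    if n == 0:
        return 1
    # Expand along row 0: alternate signs times entry times minor.
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

print(det([[1, 2], [3, 4]]))   # 1*4 - 2*3 = -2
```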

A number λ and a nonzero vector **v** satisfying

$$\mathbf{A}\mathbf{v} = \lambda \mathbf{v}$$

are called an *eigenvalue* and an *eigenvector* of **A**, respectively.^{[39]}^{[40]} The number λ is an eigenvalue of an *n*×*n*-matrix **A** if and only if **A** − λ**I**_{n} is not invertible, which is equivalent to^{[41]}

$$\det(\mathbf{A} - \lambda \mathbf{I}_n) = 0.$$

The polynomial *p*_{A} in an indeterminate *X* given by evaluation of the determinant det(*X***I**_{n} − **A**) is called the characteristic polynomial of **A**. It is a monic polynomial of degree *n*. Therefore, the polynomial equation *p*_{A}(λ) = 0 has at most *n* different solutions, that is, eigenvalues of the matrix.^{[42]} They may be complex even if the entries of **A** are real. According to the Cayley–Hamilton theorem, *p*_{A}(**A**) = **0**, that is, the result of substituting the matrix itself into its own characteristic polynomial yields the zero matrix.
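
Both the Cayley–Hamilton theorem and the eigenvalue characterization can be checked numerically for a small example; a NumPy sketch with an arbitrary 2×2 matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Characteristic polynomial of a 2x2 matrix: X^2 - tr(A) X + det(A)
t, d = np.trace(A), np.linalg.det(A)
I = np.eye(2)
print(A @ A - t * A + d * I)   # Cayley-Hamilton: the zero matrix (up to rounding)

# The eigenvalues are the roots of the characteristic polynomial:
print(np.roots([1, -t, d]), np.linalg.eigvals(A))
```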

Matrix calculations can often be performed with different techniques. Many problems can be solved by either direct algorithms or iterative approaches. For example, the eigenvectors of a square matrix can be obtained by finding a sequence of vectors **x**_{n} converging to an eigenvector when *n* tends to infinity.^{[43]}

To choose the most appropriate algorithm for each specific problem, it is important to determine both the effectiveness and precision of all the available algorithms. The domain studying these matters is called numerical linear algebra.^{ [44] } As with other numerical situations, two main aspects are the complexity of algorithms and their numerical stability.

Determining the complexity of an algorithm means finding upper bounds or estimates of how many elementary operations such as additions and multiplications of scalars are necessary to perform some algorithm, for example, multiplication of matrices. Calculating the matrix product of two *n*-by-*n* matrices using the definition given above needs *n*^{3} multiplications, since for any of the *n*^{2} entries of the product, *n* multiplications are necessary. The Strassen algorithm outperforms this "naive" algorithm; it needs only *n*^{2.807} multiplications.^{ [45] } A refined approach also incorporates specific features of the computing devices.

In many practical situations additional information about the matrices involved is known. An important case are sparse matrices, that is, matrices most of whose entries are zero. There are specifically adapted algorithms for, say, solving linear systems **Ax** = **b** for sparse matrices **A**, such as the conjugate gradient method.^{ [46] }

An algorithm is, roughly speaking, numerically stable if small deviations in the input values do not lead to large deviations in the result. For example, calculating the inverse of a matrix via Laplace expansion (adj(**A**) denotes the adjugate matrix of **A**)

**A**^{−1} = adj(**A**) / det(**A**)

may lead to significant rounding errors if the determinant of the matrix is very small. The norm of a matrix can be used to capture the conditioning of linear algebraic problems, such as computing a matrix's inverse.^{ [47] }
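
The condition number makes this notion precise; a brief NumPy sketch with a nearly singular matrix (an arbitrary example):

```python
import numpy as np

# A nearly singular (ill-conditioned) matrix: tiny changes in the input
# can produce large changes in the computed inverse or solution.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-10]])
print(np.linalg.cond(A))   # a very large condition number (about 4e10)
```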

Most computer programming languages support arrays but are not designed with built-in commands for matrices. Instead, available external libraries provide matrix operations on arrays, in nearly all currently used programming languages. Matrix manipulation was among the earliest numerical applications of computers.^{[48]} The original Dartmouth BASIC had built-in commands for matrix arithmetic on arrays from its second edition implementation in 1964. As early as the 1970s, some engineering desktop computers such as the HP 9830 had ROM cartridges to add BASIC commands for matrices. Some computer languages such as APL were designed to manipulate matrices, and various mathematical programs can be used to aid computing with matrices.^{[49]} As of 2023, most computers have some form of built-in matrix operations at a low level implementing the standard BLAS specification, upon which most higher-level matrix and linear algebra libraries (e.g., EISPACK, LINPACK, LAPACK) rely. While most of these libraries require a professional level of coding, LAPACK can be accessed by higher-level (and user-friendly) bindings such as NumPy/SciPy, R, GNU Octave, and MATLAB.

There are several methods to render matrices into a more easily accessible form. They are generally referred to as *matrix decomposition* or *matrix factorization* techniques. The interest of all these techniques is that they preserve certain properties of the matrices in question, such as determinant, rank, or inverse, so that these quantities can be calculated after applying the transformation, or that certain matrix operations are algorithmically easier to carry out for some types of matrices.

The LU decomposition factors matrices as a product of a lower triangular matrix (**L**) and an upper triangular matrix (**U**).^{[50]} Once this decomposition is calculated, linear systems can be solved more efficiently, by a simple technique called forward and back substitution. Likewise, inverses of triangular matrices are algorithmically easier to calculate. *Gaussian elimination* is a similar algorithm; it transforms any matrix to row echelon form.^{[51]} Both methods proceed by multiplying the matrix by suitable elementary matrices, which correspond to permuting rows or columns and adding multiples of one row to another row. Singular value decomposition expresses any matrix **A** as a product **UDV**^{∗}, where **U** and **V** are unitary matrices and **D** is a diagonal matrix.
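
A sketch of solving a linear system via a precomputed LU factorization, using SciPy (the example matrix is arbitrary):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])
b = np.array([5.0, -2.0, 9.0])

lu, piv = lu_factor(A)      # factor once (with partial pivoting) ...
x = lu_solve((lu, piv), b)  # ... then solve by forward/back substitution
print(x)
```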

The eigendecomposition or *diagonalization* expresses **A** as a product **VDV**^{−1}, where **D** is a diagonal matrix and **V** is a suitable invertible matrix.^{[52]} If **A** can be written in this form, it is called diagonalizable. More generally, and applicable to all matrices, the Jordan decomposition transforms a matrix into Jordan normal form, that is to say matrices whose only nonzero entries are the eigenvalues λ_{1} to λ_{n} of **A**, placed on the main diagonal and possibly entries equal to one directly above the main diagonal.^{[53]} Given the eigendecomposition, the *n*^{th} power of **A** (that is, *n*-fold iterated matrix multiplication) can be calculated via

**A**^{n} = (**VDV**^{−1})^{n} = **VDV**^{−1}**VDV**^{−1}...**VDV**^{−1} = **VD**^{n}**V**^{−1}

and the power of a diagonal matrix can be calculated by taking the corresponding powers of the diagonal entries, which is much easier than doing the exponentiation for **A** instead. This can be used to compute the matrix exponential *e*^{A}, a need frequently arising in solving linear differential equations, matrix logarithms and square roots of matrices.^{ [54] } To avoid numerically ill-conditioned situations, further algorithms such as the Schur decomposition can be employed.^{ [55] }
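
A brief NumPy sketch of computing a matrix power via the eigendecomposition (the example matrix is arbitrary and happens to be diagonalizable):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
w, V = np.linalg.eig(A)      # A = V diag(w) V^{-1} for a diagonalizable A
n = 5
A_n = V @ np.diag(w ** n) @ np.linalg.inv(V)           # A^n = V D^n V^{-1}
print(np.allclose(A_n, np.linalg.matrix_power(A, n)))  # True
```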

Matrices can be generalized in different ways. Abstract algebra uses matrices with entries in more general fields or even rings, while linear algebra codifies properties of matrices in the notion of linear maps. It is possible to consider matrices with infinitely many columns and rows. Another extension is tensors, which can be seen as higher-dimensional arrays of numbers, as opposed to vectors, which can often be realized as sequences of numbers, while matrices are rectangular or two-dimensional arrays of numbers.^{[56]} Matrices, subject to certain requirements, tend to form groups known as matrix groups. Similarly, under certain conditions, matrices form rings known as matrix rings. Though the product of matrices is not in general commutative, certain matrices form fields known as matrix fields.

This article focuses on matrices whose entries are real or complex numbers. However, matrices can be considered with much more general types of entries than real or complex numbers. As a first step of generalization, any field, that is, a set where addition, subtraction, multiplication, and division operations are defined and well-behaved, may be used instead of **R** or **C**, for example rational numbers or finite fields. For example, coding theory makes use of matrices over finite fields. Wherever eigenvalues are considered, as these are roots of a polynomial, they may exist only in a larger field than that of the entries of the matrix; for instance, they may be complex in the case of a matrix with real entries. The possibility of reinterpreting the entries of a matrix as elements of a larger field (for example, to view a real matrix as a complex matrix whose entries happen to be all real) then allows considering each square matrix to possess a full set of eigenvalues. Alternatively, one can consider only matrices with entries in an algebraically closed field, such as **C**, from the outset.

More generally, matrices with entries in a ring *R* are widely used in mathematics.^{ [57] } Rings are a more general notion than fields in that a division operation need not exist. The very same addition and multiplication operations of matrices extend to this setting, too. The set M(*n*, *R*) (also denoted M_{n}(R)^{ [7] }) of all square *n*-by-*n* matrices over *R* is a ring called matrix ring, isomorphic to the endomorphism ring of the left *R*-module *R*^{n}.^{ [58] } If the ring *R* is commutative, that is, its multiplication is commutative, then the ring M(*n*, *R*) is also an associative algebra over *R*. The determinant of square matrices over a commutative ring *R* can still be defined using the Leibniz formula; such a matrix is invertible if and only if its determinant is invertible in *R*, generalising the situation over a field *F*, where every nonzero element is invertible.^{ [59] } Matrices over superrings are called supermatrices.^{ [60] }

Matrices do not always have all their entries in the same ring – or even in any ring at all. One special but common case is block matrices, which may be considered as matrices whose entries themselves are matrices. The entries need not be square matrices, and thus need not be members of any ring; but their sizes must fulfill certain compatibility conditions.

Linear maps **R**^{n} → **R**^{m} are equivalent to *m*-by-*n* matrices, as described above. More generally, any linear map *f*: *V* → *W* between finite-dimensional vector spaces can be described by a matrix **A** = (*a*_{ij}), after choosing bases **v**_{1}, ..., **v**_{n} of *V*, and **w**_{1}, ..., **w**_{m} of *W* (so *n* is the dimension of *V* and *m* is the dimension of *W*), which is such that

$$f(\mathbf{v}_j) = \sum_{i=1}^{m} a_{i,j} \mathbf{w}_i \qquad \text{for } j = 1, \ldots, n.$$

In other words, column *j* of *A* expresses the image of **v**_{j} in terms of the basis vectors **w**_{i} of *W*; thus this relation uniquely determines the entries of the matrix **A**. The matrix depends on the choice of the bases: different choices of bases give rise to different, but equivalent matrices.^{ [61] } Many of the above concrete notions can be reinterpreted in this light, for example, the transpose matrix **A**^{T} describes the transpose of the linear map given by **A**, with respect to the dual bases.^{ [62] }

These properties can be restated more naturally: the category of all matrices with entries in a field with multiplication as composition is equivalent to the category of finite-dimensional vector spaces and linear maps over this field.

More generally, the set of *m*×*n* matrices can be used to represent the *R*-linear maps between the free modules *R*^{m} and *R*^{n} for an arbitrary ring *R* with unity. When *n* = *m* composition of these maps is possible, and this gives rise to the matrix ring of *n*×*n* matrices representing the endomorphism ring of *R*^{n}.

A group is a mathematical structure consisting of a set of objects together with a binary operation, that is, an operation combining any two objects to a third, subject to certain requirements.^{[63]} A group in which the objects are matrices and the group operation is matrix multiplication is called a *matrix group*.^{[64]}^{[65]} Since in a group every element must be invertible, the most general matrix groups are the groups of all invertible matrices of a given size, called the general linear groups.

Any property of matrices that is preserved under matrix products and inverses can be used to define further matrix groups. For example, matrices with a given size and with a determinant of 1 form a subgroup of (that is, a smaller group contained in) their general linear group, called a special linear group.^{ [66] } Orthogonal matrices, determined by the condition

**M**^{T}**M** = **I**,

form the orthogonal group.^{[67]} Every orthogonal matrix has determinant 1 or −1. Orthogonal matrices with determinant 1 form a subgroup called the *special orthogonal group*.

Every finite group is isomorphic to a matrix group, as one can see by considering the regular representation of the symmetric group.^{ [68] } General groups can be studied using matrix groups, which are comparatively well understood, by means of representation theory.^{ [69] }

It is also possible to consider matrices with infinitely many rows and/or columns^{ [70] } even though, being infinite objects, one cannot write down such matrices explicitly. All that matters is that for every element in the set indexing rows, and every element in the set indexing columns, there is a well-defined entry (these index sets need not even be subsets of the natural numbers). The basic operations of addition, subtraction, scalar multiplication, and transposition can still be defined without problem; however, matrix multiplication may involve infinite summations to define the resulting entries, and these are not defined in general.

If *R* is any ring with unity, then the ring of endomorphisms of $M = \bigoplus_{i \in I} R$ as a right *R*-module is isomorphic to the ring of **column finite matrices**, whose entries are indexed by $I \times I$ and whose columns each contain only finitely many nonzero entries. The endomorphisms of *M* considered as a left *R*-module result in an analogous object, the **row finite matrices**, whose rows each only have finitely many nonzero entries.

If infinite matrices are used to describe linear maps, then only those matrices whose columns each contain only finitely many nonzero entries can be used, for the following reason. For a matrix **A** to describe a linear map *f*: *V* → *W*, bases for both spaces must have been chosen; recall that by definition this means that every vector in the space can be written uniquely as a (finite) linear combination of basis vectors, so that, written as a (column) vector **v** of coefficients, only finitely many entries *v*_{i} are nonzero. Now the columns of **A** describe the images by *f* of individual basis vectors of *V* in the basis of *W*, which is only meaningful if these columns have only finitely many nonzero entries. There is no restriction on the rows of **A**, however: in the product **A**·**v** there are only finitely many nonzero coefficients of **v** involved, so every one of its entries, even if it is given as an infinite sum of products, involves only finitely many nonzero terms and is therefore well defined. Moreover, this amounts to forming a linear combination of the columns of **A** that effectively involves only finitely many of them, whence the result has only finitely many nonzero entries because each of those columns does. Products of two matrices of the given type are well defined (provided that the column-index and row-index sets match), are of the same type, and correspond to the composition of linear maps.

If *R* is a normed ring, then the condition of row or column finiteness can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose column sums are absolutely convergent sequences form a ring. Analogously, the matrices whose row sums are absolutely convergent series also form a ring.

Infinite matrices can also be used to describe operators on Hilbert spaces, where convergence and continuity questions arise, which again results in certain constraints that must be imposed. However, the explicit point of view of matrices tends to obfuscate the matter,^{ [71] } and the abstract and more powerful tools of functional analysis can be used instead.

An *empty matrix* is a matrix in which the number of rows or columns (or both) is zero.^{[72]}^{[73]} Empty matrices help to deal with maps involving the zero vector space. For example, if *A* is a 3-by-0 matrix and *B* is a 0-by-3 matrix, then *AB* is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space *V* to itself, while *BA* is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them. The determinant of the 0-by-0 matrix is 1, as follows from regarding the empty product occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the identity map from any finite-dimensional space to itself has determinant 1, a fact that is often used as a part of the characterization of determinants.
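
Most numerical libraries follow these conventions; a brief NumPy illustration:

```python
import numpy as np

A = np.zeros((3, 0))   # a 3-by-0 empty matrix
B = np.zeros((0, 3))   # a 0-by-3 empty matrix
print((A @ B).shape)   # (3, 3): the 3-by-3 zero matrix
print((B @ A).shape)   # (0, 0): a 0-by-0 matrix
print(np.linalg.det(np.zeros((0, 0))))   # 1.0, the empty-product convention
```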

There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and economics, the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of strategies the players choose.^{ [74] } Text mining and automated thesaurus compilation makes use of document-term matrices such as tf-idf to track frequencies of certain words in several documents.^{ [75] }

Complex numbers can be represented by particular real 2-by-2 matrices via

$$a + ib \leftrightarrow \begin{bmatrix} a & -b \\ b & a \end{bmatrix},$$

under which addition and multiplication of complex numbers and matrices correspond to each other. For example, 2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A similar interpretation is possible for quaternions^{[76]} and Clifford algebras in general.

Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices, these codes are comparatively easy to break.^{ [77] } Computer graphics uses matrices to represent objects; to calculate transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation; and to apply image convolutions such as sharpening, blurring, edge detection, and more.^{ [78] } Matrices over a polynomial ring are important in the study of control theory.

Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan equations to obtain the molecular orbitals of the Hartree–Fock method.

The adjacency matrix of a finite graph is a basic notion of graph theory.^{ [79] } It records which vertices of the graph are connected by an edge. Matrices containing just two different values (1 and 0 meaning for example "yes" and "no", respectively) are called logical matrices. The distance (or cost) matrix contains information about distances of the edges.^{ [80] } These concepts can be applied to websites connected by hyperlinks or cities connected by roads etc., in which case (unless the connection network is extremely dense) the matrices tend to be sparse, that is, contain few nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory.

The Hessian matrix of a differentiable function *ƒ*: **R**^{n} → **R** consists of the second derivatives of *ƒ* with respect to the several coordinate directions, that is,^{[81]}

$$H(f) = \left[ \frac{\partial^2 f}{\partial x_i \, \partial x_j} \right].$$

It encodes information about the local growth behaviour of the function: given a critical point **x** = (*x*_{1}, ..., *x*_{n}), that is, a point where the first partial derivatives of *ƒ* vanish, the function has a local minimum if the Hessian matrix is positive definite. Quadratic programming can be used to find global minima or maxima of quadratic functions closely related to the ones attached to matrices (see above).^{ [82] }

Another matrix frequently used in geometrical situations is the Jacobi matrix of a differentiable map *f*: **R**^{n} → **R**^{m}. If *f*_{1}, ..., *f*_{m} denote the components of *f*, then the Jacobi matrix is defined as^{[83]}

$$J(f) = \left[ \frac{\partial f_i}{\partial x_j} \right]_{1 \le i \le m,\ 1 \le j \le n}.$$

If *n* > *m*, and if the rank of the Jacobi matrix attains its maximal value *m*, *f* is locally invertible at that point, by the implicit function theorem.^{ [84] }

Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has a decisive influence on the set of possible solutions of the equation in question.^{ [85] }

The finite element method is an important numerical method to solve partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear functions, where the pieces are chosen concerning a sufficiently fine grid, which in turn can be recast as a matrix equation.^{ [86] }

Stochastic matrices are square matrices whose rows are probability vectors, that is, whose entries are non-negative and sum up to one. Stochastic matrices are used to define Markov chains with finitely many states.^{[87]} A row of the stochastic matrix gives the probability distribution for the next position of some particle currently in the state that corresponds to the row. Properties of the Markov chain, like absorbing states, that is, states that any particle attains eventually, can be read off the eigenvectors of the transition matrices.^{[88]}
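
A sketch of extracting the stationary distribution of a small Markov chain as an eigenvector of the transposed transition matrix (the transition probabilities are made up for illustration):

```python
import numpy as np

# Row-stochastic transition matrix of a two-state Markov chain:
# each row is a probability distribution over the next state.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# The stationary distribution is a left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.isclose(w, 1.0))])
pi /= pi.sum()          # normalize to a probability vector
print(pi)               # approximately [0.833, 0.167]
```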

Statistics also makes use of matrices in many different forms.^{[89]} Descriptive statistics is concerned with describing data sets, which can often be represented as data matrices, which may then be subjected to dimensionality reduction techniques. The covariance matrix encodes the mutual variance of several random variables.^{[90]} Another technique using matrices is linear least squares, a method that approximates a finite set of pairs (*x*_{1}, *y*_{1}), (*x*_{2}, *y*_{2}), ..., (*x*_{N}, *y*_{N}) by a linear function

*y*_{i} ≈ *ax*_{i} + *b*, *i* = 1, ..., *N*

which can be formulated in terms of matrices, related to the singular value decomposition of matrices.^{ [91] }
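
A brief NumPy sketch of such a least-squares fit (the data points are made up for illustration):

```python
import numpy as np

# Fit y ≈ a x + b to data points by linear least squares.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])

# Design matrix with columns (x_i, 1); solve min ||M p - y|| for p = (a, b).
M = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(M, y, rcond=None)
print(a, b)   # slope approximately 1, intercept approximately 0
```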

Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such as matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory to physics.^{ [92] }^{ [93] }

Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more general gamma matrices are an integral part of the physical description of fermions, which behave as spinors.^{[94]} For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum chromodynamics. The Cabibbo–Kobayashi–Maskawa matrix, in turn, expresses the fact that the basic quark states that are important for weak interactions are not the same as, but linearly related to, the basic quark states that define particles with specific and distinct masses.^{[95]}

The first model of quantum mechanics (Heisenberg, 1925) represented the theory's operators by infinite-dimensional matrices acting on quantum states.^{ [96] } This is also referred to as matrix mechanics. One particular example is the density matrix that characterizes the "mixed" state of a quantum system as a linear combination of elementary, "pure" eigenstates.^{ [97] }

Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimental particle physics: Collision reactions such as occur in particle accelerators, where non-interacting particles head towards each other and collide in a small interaction zone, with a new set of non-interacting particles as the result, can be described as the scalar product of outgoing particle states and a linear combination of ingoing particle states. The linear combination is given by a matrix known as the S-matrix, which encodes all information about the possible interactions between particles.^{ [98] }

A general application of matrices in physics is the description of linearly coupled harmonic systems. The equations of motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity to give the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The best way to obtain solutions is to determine the system's eigenvectors, its normal modes, by diagonalizing the matrix equation. Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibrations of systems consisting of mutually bound component atoms.^{ [99] } They are also needed for describing mechanical vibrations, and oscillations in electrical circuits.^{ [100] }

Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as multiplication of a two-component vector with a two-by-two matrix, in a technique called ray transfer matrix analysis: the vector's components are the light ray's slope and its distance from the optical axis, while the matrix encodes the properties of the optical element. Actually, there are two kinds of matrices, viz. a *refraction matrix* describing the refraction at a lens surface, and a *translation matrix*, describing the translation of the plane of reference to the next refracting surface, where another refraction matrix applies. The optical system, consisting of a combination of lenses and/or reflective elements, is simply described by the matrix resulting from the product of the components' matrices.^{[101]}

Traditional mesh analysis and nodal analysis in electronics lead to a system of linear equations that can be described with a matrix.

The behaviour of many electronic components can be described using matrices. Let *A* be a 2-dimensional vector with the component's input voltage *v*_{1} and input current *i*_{1} as its elements, and let *B* be a 2-dimensional vector with the component's output voltage *v*_{2} and output current *i*_{2} as its elements. Then the behaviour of the electronic component can be described by *B* = *H* · *A*, where *H* is a 2 × 2 matrix containing one impedance element (*h*_{12}), one admittance element (*h*_{21}), and two dimensionless elements (*h*_{11} and *h*_{22}). Calculating a circuit now reduces to multiplying matrices.

Matrices have a long history of application in solving linear equations, but they were known as arrays until the 1800s. The Chinese text *The Nine Chapters on the Mathematical Art*, written between the 10th and 2nd century BCE, is the first example of the use of array methods to solve simultaneous equations,^{[102]} including the concept of determinants. In 1545 the Italian mathematician Gerolamo Cardano introduced the method to Europe when he published *Ars Magna*.^{[103]} The Japanese mathematician Seki used the same array methods to solve simultaneous equations in 1683.^{[104]} The Dutch mathematician Jan de Witt represented transformations using arrays in his 1659 book *Elements of Curves*.^{[105]} Between 1700 and 1710 Gottfried Wilhelm Leibniz publicized the use of arrays for recording information or solutions and experimented with over 50 different systems of arrays.^{[103]} Cramer presented his rule in 1750.

The term "matrix" (Latin for "womb", "dam" (non-human female animal kept for breeding), "source", "origin", "list", "register", derived from * mater *—mother^{ [106] }) was coined by James Joseph Sylvester in 1850,^{ [107] } who understood a matrix as an object giving rise to several determinants today called minors, that is to say, determinants of smaller matrices that derive from the original one by removing columns and rows. In an 1851 paper, Sylvester explains:^{ [108] }

I have in previous papers defined a "Matrix" as a rectangular array of terms, out of which different systems of determinants may be engendered as from the womb of a common parent.

Arthur Cayley published a treatise on geometric transformations using matrices that were not rotated versions of the coefficients being investigated as had previously been done. Instead, he defined operations such as addition, subtraction, multiplication, and division as transformations of those matrices and showed that the associative and distributive properties held true. Cayley investigated and demonstrated the non-commutative property of matrix multiplication as well as the commutative property of matrix addition.^{[103]} Early matrix theory had limited the use of arrays almost exclusively to determinants, and Arthur Cayley's abstract matrix operations were revolutionary. He was instrumental in proposing a matrix concept independent of equation systems. In 1858 Cayley published his *A memoir on the theory of matrices*^{[109]}^{[110]} in which he proposed and demonstrated the Cayley–Hamilton theorem.^{[103]}

The English mathematician Cuthbert Edmund Cullis was the first to use modern bracket notation for matrices in 1913, and he simultaneously demonstrated the first significant use of the notation **A** = [*a*_{i,j}] to represent a matrix, where *a*_{i,j} refers to the entry in the *i*th row and the *j*th column.^{[103]}

The modern study of determinants sprang from several sources.^{[111]} Number-theoretical problems led Gauss to relate coefficients of quadratic forms, that is, expressions such as *x*^{2} + *xy* − 2*y*^{2}, and linear maps in three dimensions to matrices. Eisenstein further developed these notions, including the remark that, in modern parlance, matrix products are non-commutative. Cauchy was the first to prove general statements about determinants, using as definition of the determinant of a matrix **A** = [*a*_{i,j}] the following: replace the powers *a*_{j}^{k} by *a*_{jk} in the polynomial

$$a_1 a_2 \cdots a_n \prod_{i < j} (a_j - a_i),$$

where $\prod$ denotes the product of the indicated terms. He also showed, in 1829, that the eigenvalues of symmetric matrices are real.^{[112]} Jacobi studied "functional determinants"—later called Jacobi determinants by Sylvester—which can be used to describe geometric transformations at a local (or infinitesimal) level, see above. Kronecker's *Vorlesungen über die Theorie der Determinanten*^{[113]} and Weierstrass' *Zur Determinantentheorie*,^{[114]} both published in 1903, first treated determinants axiomatically, as opposed to previous more concrete approaches such as the mentioned formula of Cauchy. At that point, determinants were firmly established.

Many theorems were first established for small matrices only, for example, the Cayley–Hamilton theorem was proved for 2×2 matrices by Cayley in the aforementioned memoir, and by Hamilton for 4×4 matrices. Frobenius, working on bilinear forms, generalized the theorem to all dimensions (1898). Also at the end of the 19th century, the Gauss–Jordan elimination (generalizing a special case now known as Gauss elimination) was established by Wilhelm Jordan. In the early 20th century, matrices attained a central role in linear algebra,^{ [115] } partially due to their use in classification of the hypercomplex number systems of the previous century.

The inception of matrix mechanics by Heisenberg, Born and Jordan led to studying matrices with infinitely many rows and columns.^{ [116] } Later, von Neumann carried out the mathematical formulation of quantum mechanics, by further developing functional analytic notions such as linear operators on Hilbert spaces, which, very roughly speaking, correspond to Euclidean space, but with an infinity of independent directions.

The word has been used in unusual ways by at least two authors of historical importance.

Bertrand Russell and Alfred North Whitehead in their * Principia Mathematica * (1910–1913) use the word "matrix" in the context of their axiom of reducibility. They proposed this axiom as a means to reduce any function to one of lower type, successively, so that at the "bottom" (0 order) the function is identical to its extension:^{ [117] }

Let us give the name of *matrix* to any function, of however many variables, that does not involve any apparent variables. Then, any possible function other than a matrix derives from a matrix by means of generalization, that is, by considering the proposition that the function in question is true with all possible values or with some value of one of the arguments, the other argument or arguments remaining undetermined.

For example, a function Φ(*x*, *y*) of two variables *x* and *y* can be reduced to a *collection* of functions of a single variable, for example, *y*, by "considering" the function for all possible values of "individuals" *a*_{i} substituted in place of variable *x*. The resulting collection of functions of the single variable *y*, that is, ∀*a*_{i}: Φ(*a*_{i}, *y*), can then be reduced to a "matrix" of values by "considering" the function for all possible values of "individuals" *b*_{j} substituted in place of variable *y*:

- ∀*b*_{j}∀*a*_{i}: Φ(*a*_{i}, *b*_{j}).

Alfred Tarski in his 1946 *Introduction to Logic* used the word "matrix" synonymously with the notion of truth table as used in mathematical logic.^{ [118] }

- List of named matrices
- Algebraic multiplicity – Multiplicity of an eigenvalue as a root of the characteristic polynomial
- Geometric multiplicity – Dimension of the eigenspace associated with an eigenvalue
- Gram–Schmidt process – Orthonormalization of a set of vectors
- Irregular matrix
- Matrix calculus – Specialized notation for multivariable calculus
- Matrix function – Function that maps matrices to matrices
- Matrix multiplication algorithm
- Tensor – A generalization of matrices with any number of indices
- Bohemian matrices – Set of matrices

- ↑ However, in the case of adjacency matrices, matrix multiplication or a variant of it allows the simultaneous computation of the number of paths between any two vertices, and of the shortest length of a path between two vertices.
- ↑ Lang 2002
- ↑ Fraleigh (1976, p. 209)
- ↑ Nering (1970, p. 37)
- 1 2 Weisstein, Eric W. "Matrix". *mathworld.wolfram.com*. Retrieved 2020-08-19.
- ↑ Oualline 2003, Ch. 5
- 1 2 Pop; Furdui (2017). *Square Matrices of Order 2*. Springer International Publishing. ISBN 978-3-319-54938-5.
- ↑ Brown 1991, Definition I.2.1 (addition), Definition I.2.4 (scalar multiplication), and Definition I.2.33 (transpose)
- ↑ Brown 1991, Theorem I.2.6
- 1 2 "How to Multiply Matrices". *www.mathsisfun.com*. Retrieved 2020-08-19.
- ↑ Brown 1991, Definition I.2.20
- ↑ Brown 1991, Theorem I.2.24
- ↑ Horn & Johnson 1985, Ch. 4 and 5
- ↑ Bronson (1970, p. 16)
- ↑ Kreyszig (1972, p. 220)
- 1 2 Protter & Morrey (1970, p. 869)
- ↑ Kreyszig (1972, pp. 241, 244)
- ↑ Schneider, Hans; Barker, George Phillip (2012), *Matrices and Linear Algebra*, Dover Books on Mathematics, Courier Dover Corporation, p. 251, ISBN 978-0-486-13930-2.
- ↑ Perlis, Sam (1991), *Theory of Matrices*, Dover books on advanced mathematics, Courier Dover Corporation, p. 103, ISBN 978-0-486-66810-9.
- ↑ Anton, Howard (2010), *Elementary Linear Algebra* (10th ed.), John Wiley & Sons, p. 414, ISBN 978-0-470-45821-1.
- ↑ Horn, Roger A.; Johnson, Charles R. (2012), *Matrix Analysis* (2nd ed.), Cambridge University Press, p. 17, ISBN 978-0-521-83940-2.
- ↑ Brown 1991, I.2.21 and 22
- ↑ Greub 1975, Section III.2
- ↑ Brown 1991, Definition II.3.3
- ↑ Greub 1975, Section III.1
- ↑ Brown 1991, Theorem II.3.22
- ↑ Horn & Johnson 1985, Theorem 2.5.6
- ↑ Brown 1991, Definition I.2.28
- ↑ Brown 1991, Definition I.5.13
- ↑ Horn & Johnson 1985, Chapter 7
- ↑ Horn & Johnson 1985, Theorem 7.2.1
- ↑ Horn & Johnson 1985, Example 4.0.6, p. 169
- ↑ "Matrix | mathematics". *Encyclopedia Britannica*. Retrieved 2020-08-19.
- ↑ Brown 1991, Definition III.2.1
- ↑ Brown 1991, Theorem III.2.12
- ↑ Brown 1991, Corollary III.2.16
- ↑ Mirsky 1990, Theorem 1.4.1
- ↑ Brown 1991, Theorem III.3.18
- ↑ *Eigen* means "own" in German and in Dutch.
- ↑ Brown 1991, Definition III.4.1
- ↑ Brown 1991, Definition III.4.9
- ↑ Brown 1991, Corollary III.4.10
- ↑ Householder 1975, Ch. 7
- ↑ Bau III & Trefethen 1997
- ↑ Golub & Van Loan 1996, Algorithm 1.3.1
- ↑ Golub & Van Loan 1996, Chapters 9 and 10, esp. section 10.2
- ↑ Golub & Van Loan 1996, Chapter 2.3
- ↑ Grcar, Joseph F. (2011-01-01). "John von Neumann's Analysis of Gaussian Elimination and the Origins of Modern Numerical Analysis". *SIAM Review*. **53** (4): 607–682. doi:10.1137/080734716. ISSN 0036-1445.
- ↑ For example, Mathematica; see Wolfram 2003, Ch. 3.7
- ↑ Press, Flannery & Teukolsky et al. 1992
- ↑ Stoer & Bulirsch 2002, Section 4.1
- ↑ Horn & Johnson 1985, Theorem 2.5.4
- ↑ Horn & Johnson 1985, Ch. 3.1, 3.2
- ↑ Arnold & Cooke 1992, Sections 14.5, 7, 8
- ↑ Bronson 1989, Ch. 15
- ↑ Coburn 1955, Ch. V
- ↑ Lang 2002, Chapter XIII
- ↑ Lang 2002, XVII.1, p. 643
- ↑ Lang 2002, Proposition XIII.4.16
- ↑ Reichl 2004, Section L.2
- ↑ Greub 1975, Section III.3
- ↑ Greub 1975, Section III.3.13
- ↑ See any standard reference on groups.
- ↑ Additionally, the group must be closed in the general linear group.
- ↑ Baker 2003, Def. 1.30
- ↑ Baker 2003, Theorem 1.2
- ↑ Artin 1991, Chapter 4.5
- ↑ Rowen 2008, Example 19.2, p. 198
- ↑ See any reference on representation theory or group representation.
- ↑ See the item "Matrix" in Itô, ed. 1987
- ↑ "Not much of matrix theory carries over to infinite-dimensional spaces, and what does is not so useful, but it sometimes helps." Halmos 1982, p. 23, Chapter 5
- ↑ "Empty Matrix: A matrix is empty if either its row or column dimension is zero", Glossary Archived 2009-04-29 at the Wayback Machine, O-Matrix v6 User Guide
- ↑ "A matrix having at least one dimension equal to zero is called an empty matrix", MATLAB Data Structures Archived 2009-12-28 at the Wayback Machine
- ↑ Fudenberg & Tirole 1983, Section 1.1.1
- ↑ Manning 1999, Section 15.3.4
- ↑ Ward 1997, Ch. 2.8
- ↑ Stinson 2005, Ch. 1.1.5 and 1.2.4
- ↑ Association for Computing Machinery 1979, Ch. 7
- ↑ Godsil & Royle 2004, Ch. 8.1
- ↑ Punnen 2002
- ↑ Lang 1987a, Ch. XVI.6
- ↑ Nocedal 2006, Ch. 16
- ↑ Lang 1987a, Ch. XVI.1
- ↑ Lang 1987a, Ch. XVI.5. For a more advanced, and more general, statement see Lang 1969, Ch. VI.2
- ↑ Gilbarg & Trudinger 2001
- ↑ Šolin 2005, Ch. 2.5. See also stiffness method.
- ↑ Latouche & Ramaswami 1999
- ↑ Mehata & Srinivasan 1978, Ch. 2.8
- ↑ Healy, Michael (1986), *Matrices for Statistics*, Oxford University Press, ISBN 978-0-19-850702-4
- ↑ Krzanowski 1988, Ch. 2.2, p. 60
- ↑ Krzanowski 1988, Ch. 4.1
- ↑ Conrey 2007
- ↑ Zabrodin, Brezin & Kazakov et al. 2006
- ↑ Itzykson & Zuber 1980, Ch. 2
- ↑ See Burgess & Moore 2007, section 1.6.3 (SU(3)) and section 2.4.3.2 (Kobayashi–Maskawa matrix)
- ↑ Schiff 1968, Ch. 6
- ↑ Bohm 2001, sections II.4 and II.8
- ↑ Weinberg 1995, Ch. 3
- ↑ Wherrett 1987, part II
- ↑ Riley, Hobson & Bence 1997, 7.17
- ↑ Guenther 1990, Ch. 5
- ↑ Shen, Crossley & Lun 1999, cited by Bretscher 2005, p. 1
- 1 2 3 4 5 Dossey, Otto, Spense, and Vanden Eynden, *Discrete Mathematics* (4th ed.), Addison Wesley, October 10, 2001, ISBN 978-0-321-07912-1, pp. 564–565
- ↑ Needham, Joseph; Wang Ling (1959). *Science and Civilisation in China*. Vol. III. Cambridge: Cambridge University Press. p. 117. ISBN 978-0-521-05801-8.
- ↑ Dossey, Otto, Spense, and Vanden Eynden, *Discrete Mathematics* (4th ed.), Addison Wesley, October 10, 2001, ISBN 978-0-321-07912-1, p. 564
- ↑ *Merriam-Webster dictionary*, Merriam-Webster, retrieved April 20, 2009
- ↑ Although many sources state that J. J. Sylvester coined the mathematical term "matrix" in 1848, Sylvester published nothing in 1848. (For proof that Sylvester published nothing in 1848, see: J. J. Sylvester with H. F. Baker, ed., *The Collected Mathematical Papers of James Joseph Sylvester* (Cambridge, England: Cambridge University Press, 1904), vol. 1.) His earliest use of the term "matrix" occurs in 1850 in J. J. Sylvester (1850) "Additions to the articles in the September number of this journal, 'On a new class of theorems,' and on Pascal's theorem," *The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science*, **37**: 363–370. From page 369: "For this purpose, we must commence, not with a square, but with an oblong arrangement of terms consisting, suppose, of m lines and n columns. This does not in itself represent a determinant, but is, as it were, a Matrix out of which we may form various systems of determinants ..."
- ↑ *The Collected Mathematical Papers of James Joseph Sylvester: 1837–1853*, Paper 37, p. 247
- ↑ *Phil. Trans.* 1858, vol. 148, pp. 17–37; *Math. Papers II*, 475–496
- ↑ Dieudonné, ed. 1978, Vol. 1, Ch. III, p. 96
- ↑ Knobloch 1994
- ↑ Hawkins 1975
- ↑ Kronecker 1897
- ↑ Weierstrass 1915, pp. 271–286
- ↑ Bôcher 2004
- ↑ Mehra & Rechenberg 1987
- ↑ Whitehead, Alfred North; Russell, Bertrand (1913), *Principia Mathematica to *56*, Cambridge at the University Press, Cambridge UK (republished 1962), cf. page 162ff.
- ↑ Tarski, Alfred (1946), *Introduction to Logic and the Methodology of Deductive Sciences*, Dover Publications, Inc., New York, NY, ISBN 0-486-28462-X.

In mathematics, the **determinant** is a scalar value that is a function of the entries of a square matrix. The determinant of a matrix *A* is commonly denoted det(*A*), det *A*, or |*A*|. Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism. The determinant of a product of matrices is the product of their determinants.
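
Both properties (invertibility exactly when the determinant is nonzero, and multiplicativity) are easy to check numerically; a small Python/NumPy sketch with arbitrarily chosen matrices:

```python
import numpy as np

A = np.array([[3.0, 8.0],
              [4.0, 6.0]])

# For a 2x2 matrix, det(A) = ad - bc.
print(np.linalg.det(A))       # -14.0 (up to rounding)
print(3.0 * 6.0 - 8.0 * 4.0)  # -14.0, the hand computation

# The determinant is multiplicative: det(AB) = det(A) det(B).
B = np.array([[1.0, 2.0],
              [5.0, 7.0]])
print(np.allclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B)))  # True
```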

In mathematics, **Gaussian elimination**, also known as **row reduction**, is an algorithm for solving systems of linear equations. It consists of a sequence of row-wise operations performed on the corresponding matrix of coefficients. The method can also be used to compute the rank of a matrix, the determinant of a square matrix, and the inverse of an invertible matrix. It is named after Carl Friedrich Gauss (1777–1855). To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations: swapping two rows, multiplying a row by a nonzero number, and adding a multiple of one row to another row.
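
A minimal sketch of forward elimination using exactly these three operations (the helper `row_reduce` below is illustrative, not a library routine):

```python
import numpy as np

def row_reduce(M):
    """Forward elimination with partial pivoting: returns a
    row echelon (upper-triangular-shaped) copy of M."""
    A = M.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        if r >= rows:
            break
        # Choose the largest pivot in column c for numerical stability.
        p = r + np.argmax(np.abs(A[r:, c]))
        if np.isclose(A[p, c], 0.0):
            continue                  # no pivot in this column
        A[[r, p]] = A[[p, r]]         # 1) swap two rows
        A[r] = A[r] / A[r, c]         # 2) scale a row by a nonzero number
        for i in range(r + 1, rows):
            A[i] -= A[i, c] * A[r]    # 3) add a multiple of another row
        r += 1
    return A

# Augmented matrix of the system  x + y = 3,  2x - y = 0.
print(row_reduce(np.array([[1, 1, 3], [2, -1, 0]])))
```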

**Linear algebra** is the branch of mathematics concerning linear equations such as *a*_{1}*x*_{1} + ⋯ + *a*_{n}*x*_{n} = *b*, linear maps such as (*x*_{1}, …, *x*_{n}) ↦ *a*_{1}*x*_{1} + ⋯ + *a*_{n}*x*_{n}, and their representations in vector spaces and through matrices.
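
For instance, a system of two such equations can be packed into a single matrix equation **A*x*** = ***b*** and solved numerically; a small sketch with arbitrary coefficients:

```python
import numpy as np

# The system  x + 2y = 5,  3x - y = 1  in matrix form A x = b.
A = np.array([[1.0, 2.0],
              [3.0, -1.0]])
b = np.array([5.0, 1.0])

x = np.linalg.solve(A, b)
print(x)                      # [1. 2.]
print(np.allclose(A @ x, b))  # True: the solution satisfies the system
```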

In linear algebra, the **rank** of a matrix A is the dimension of the vector space generated by its columns. This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the vector space spanned by its rows. Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by A. There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics.
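
A quick numerical illustration, with a matrix deliberately built so that one column depends on the others:

```python
import numpy as np

# The third column is the sum of the first two, so only
# two columns are linearly independent.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 5.0]])

print(np.linalg.matrix_rank(A))    # 2
print(np.linalg.matrix_rank(A.T))  # 2: row rank equals column rank
```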

In linear algebra, the **identity matrix** of size *n* is the *n* × *n* square matrix with ones on the main diagonal and zeros elsewhere. It has unique properties: for example, when the identity matrix represents a geometric transformation, the object remains unchanged by the transformation. In other contexts, it is analogous to multiplying by the number 1.
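
A short check of this behavior in NumPy (the matrix **A** is an arbitrary example):

```python
import numpy as np

I = np.eye(3)                    # 3x3 identity matrix
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 4.0],
              [5.0, 0.0, 1.0]])

# Multiplying by I leaves any conformable matrix unchanged.
print(np.allclose(I @ A, A) and np.allclose(A @ I, A))  # True
```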

In linear algebra, the **column space** of a matrix *A* is the span of its column vectors. The column space of a matrix is the image or range of the corresponding matrix transformation.
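
A small sketch showing that every matrix–vector product is a combination of the columns and hence lands in the column space:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# A x = x1 * (column 1) + x2 * (column 2), an element of the
# column space (the image of the map x -> A x).
x = np.array([2.0, 3.0])
print(A @ x)                             # [2. 3. 5.]
print(2.0 * A[:, 0] + 3.0 * A[:, 1])     # the same combination by hand
```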

In linear algebra, the **outer product** of two coordinate vectors is the matrix whose entries are all products of an element in the first vector with an element in the second vector. If the two coordinate vectors have dimensions *n* and *m*, then their outer product is an *n* × *m* matrix. More generally, given two tensors, their outer product is a tensor. The outer product of tensors is also referred to as their tensor product, and can be used to define the tensor algebra.
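
A minimal NumPy example (the vector values are arbitrary):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])   # dimension n = 3
v = np.array([4.0, 5.0])        # dimension m = 2

P = np.outer(u, v)              # entries P[i, j] = u[i] * v[j]
print(P.shape)                  # (3, 2): an n x m matrix
print(P)
```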

In linear algebra, an **orthogonal matrix**, or **orthonormal matrix**, is a real square matrix whose columns and rows are orthonormal vectors.
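
Rotation matrices are the standard example; a quick numerical check of the defining property:

```python
import numpy as np

theta = np.pi / 6
# A plane rotation is the classic orthogonal matrix.
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Columns (and rows) are orthonormal, so Q^T Q = I.
print(np.allclose(Q.T @ Q, np.eye(2)))         # True
print(np.isclose(abs(np.linalg.det(Q)), 1.0))  # True: |det Q| = 1
```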

In mathematics, particularly in linear algebra, **matrix multiplication** is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the **matrix product**, has the number of rows of the first and the number of columns of the second matrix. The product of matrices **A** and **B** is denoted as **AB**.
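
A small NumPy sketch making the shape rule and the row-by-column computation concrete:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2 x 3
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])         # 3 x 2

C = A @ B                        # 2 x 2: rows of A by columns of B
print(C)
# Each entry is a row-by-column dot product, e.g.
print(C[0, 0] == 1*7 + 2*9 + 3*11)  # True
```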

In linear algebra, **Cramer's rule** is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the column vector of right-hand sides of the equations. It is named after Gabriel Cramer, who published the rule for an arbitrary number of unknowns in 1750, although Colin Maclaurin also published special cases of the rule in 1748, and possibly knew of it as early as 1729.
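
A direct, illustrative implementation (the helper `cramer` is hypothetical, and far less efficient than elimination-based solvers since it computes *n* + 1 determinants):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule (assumes det(A) != 0)."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # replace column i by the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[1.0, 2.0], [3.0, -1.0]])
b = np.array([5.0, 1.0])
print(cramer(A, b))                                      # [1. 2.]
print(np.allclose(cramer(A, b), np.linalg.solve(A, b)))  # True
```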

In mathematics, a **square matrix** is a matrix with the same number of rows and columns. An *n*-by-*n* matrix is known as a square matrix of order *n*. Any two square matrices of the same order can be added and multiplied.

In linear algebra, the **transpose** of a matrix is an operator which flips a matrix over its diagonal; that is, it switches the row and column indices of the matrix **A** by producing another matrix, often denoted by **A**^{T}.
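
A two-line NumPy illustration of the index swap:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # 2 x 3
print(A.T)                         # 3 x 2: rows and columns exchanged
print(A.T[2, 0] == A[0, 2])        # True: (A^T)[i, j] = A[j, i]
```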

In linear algebra, a **diagonal matrix** is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is $\left[\begin{smallmatrix}3 & 0\\ 0 & 2\end{smallmatrix}\right]$, while an example of a 3×3 diagonal matrix is $\left[\begin{smallmatrix}6 & 0 & 0\\ 0 & 5 & 0\\ 0 & 0 & 4\end{smallmatrix}\right]$. An identity matrix of any size, or any multiple of it, is a diagonal matrix called a *scalar matrix*, for example, $\left[\begin{smallmatrix}2 & 0\\ 0 & 2\end{smallmatrix}\right]$. In geometry, a diagonal matrix may be used as a *scaling matrix*, since matrix multiplication with it results in changing scale (size) and possibly also shape; only a scalar matrix results in uniform change in scale.
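
A short sketch of diagonal and scalar matrices acting as scalings:

```python
import numpy as np

D = np.diag([2.0, 3.0])          # 2x2 diagonal (scaling) matrix
v = np.array([1.0, 1.0])
print(D @ v)                     # [2. 3.]: each axis scaled separately

S = 2.0 * np.eye(2)              # scalar matrix: uniform scaling
print(S @ v)                     # [2. 2.]
```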

In linear algebra, an *n*-by-*n* square matrix **A** is called **invertible** if there exists an *n*-by-*n* square matrix **B** such that **AB** = **BA** = **I**_{n}, where **I**_{n} denotes the *n*-by-*n* identity matrix; **B** is then called the inverse of **A**, denoted **A**^{−1}.
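
Numerically, the inverse can be computed and the defining identity checked; a small sketch with an arbitrarily chosen invertible matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])       # det = 1, so A is invertible

B = np.linalg.inv(A)
I = np.eye(2)
print(np.allclose(A @ B, I) and np.allclose(B @ A, I))  # True
```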

In linear algebra, a **minor** of a matrix **A** is the determinant of some smaller square matrix, cut down from **A** by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices are required for calculating matrix **cofactors**, which in turn are useful for computing both the determinant and inverse of square matrices. The requirement that the square matrix be smaller than the original matrix is often omitted in the definition.
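
A minimal sketch (the helper `minor` is illustrative) using first-row minors, that is, cofactor expansion, to reproduce the determinant:

```python
import numpy as np

def minor(A, i, j):
    """Determinant of A with row i and column j removed."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

# Cofactor expansion along the first row: the cofactor of entry
# (0, j) is (-1)**j times the corresponding minor.
det = sum((-1) ** j * A[0, j] * minor(A, 0, j) for j in range(3))
print(np.isclose(det, np.linalg.det(A)))  # True
```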

In mathematics, a **block matrix** or a **partitioned matrix** is a matrix that is *interpreted* as having been broken into sections called **blocks** or **submatrices**.
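
A small NumPy sketch assembling a matrix from blocks:

```python
import numpy as np

A = np.eye(2)                 # 2x2 block
B = np.zeros((2, 3))          # 2x3 block
C = np.ones((1, 2))           # 1x2 block
D = np.full((1, 3), 7.0)      # 1x3 block

# Assemble a 3x5 matrix from the four blocks.
M = np.block([[A, B],
              [C, D]])
print(M.shape)  # (3, 5)
```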

In linear algebra, linear transformations can be represented by matrices. If *T* is a linear transformation mapping **R**^{n} to **R**^{m} and **x** is a column vector with *n* entries, then *T*(**x**) = *A***x** for some *m* × *n* matrix *A*, called the transformation matrix of *T*.
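
A brief sketch, for a hypothetical map *T*(*x*, *y*) = (*x* + 2*y*, 3*x*) with respect to the standard bases:

```python
import numpy as np

# Transformation matrix of T(x, y) = (x + 2y, 3x).
A = np.array([[1.0, 2.0],
              [3.0, 0.0]])

x = np.array([1.0, 1.0])
print(A @ x)   # [3. 3.] = T(1, 1)
```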

In mathematics, the **kernel** of a linear map, also known as the **null space** or **nullspace**, is the linear subspace of the domain of the map which is mapped to the zero vector. That is, given a linear map *L* : *V* → *W* between two vector spaces V and W, the kernel of L is the vector space of all elements **v** of V such that *L*(**v**) = **0**, where **0** denotes the zero vector in W, or more symbolically: ker(*L*) = {**v** ∈ *V* : *L*(**v**) = **0**}.
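
One practical way to exhibit a kernel basis numerically is via the singular value decomposition; a minimal sketch for a rank-1 example:

```python
import numpy as np

# The map (x, y, z) -> x + y + z has a 2-dimensional kernel.
A = np.array([[1.0, 1.0, 1.0]])

# Right singular vectors beyond the rank (here rank 1) span the
# null space of A.
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[1:]
print(np.allclose(A @ null_basis.T, 0))  # True: both vectors map to 0
```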

In linear algebra, it is often important to know which vectors have their directions unchanged by a given linear transformation. An **eigenvector** or **characteristic vector** is such a vector. Thus an eigenvector **v** of a linear transformation *T* is scaled by a constant factor when the linear transformation is applied to it: *T*(**v**) = *λ***v**. The corresponding **eigenvalue**, **characteristic value**, or **characteristic root** is the multiplying factor *λ*.
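
A quick numerical check of the defining relation:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
lam, v = eigenvalues[0], eigenvectors[:, 0]

# A v = lambda v: the direction of v is unchanged, only scaled.
print(np.allclose(A @ v, lam * v))  # True
```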

In mathematics, especially in linear algebra and matrix theory, the **vectorization** of a matrix is a linear transformation which converts the matrix into a vector. Specifically, the vectorization of an *m* × *n* matrix *A*, denoted vec(*A*), is the *mn* × 1 column vector obtained by stacking the columns of the matrix *A* on top of one another: vec(*A*) = [*a*_{1,1}, …, *a*_{m,1}, *a*_{1,2}, …, *a*_{m,2}, …, *a*_{1,n}, …, *a*_{m,n}]^{T}.
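
In NumPy terms, vec corresponds to column-major flattening; a short sketch:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])           # m = 3, n = 2

# Column-major ('F' for Fortran order) flattening stacks columns.
vec_A = A.flatten(order="F")
print(vec_A)                     # [1 3 5 2 4 6], an mn-vector
```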

- Anton, Howard (1987), *Elementary Linear Algebra* (5th ed.), New York: Wiley, ISBN 0-471-84819-0
- Arnold, Vladimir I.; Cooke, Roger (1992), *Ordinary Differential Equations*, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-3-540-54813-3
- Artin, Michael (1991), *Algebra*, Prentice Hall, ISBN 978-0-89871-510-1
- Association for Computing Machinery (1979), *Computer Graphics*, Tata McGraw–Hill, ISBN 978-0-07-059376-3
- Baker, Andrew J. (2003), *Matrix Groups: An Introduction to Lie Group Theory*, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-1-85233-470-3
- Bau III, David; Trefethen, Lloyd N. (1997), *Numerical Linear Algebra*, Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-361-9
- Beauregard, Raymond A.; Fraleigh, John B. (1973), *A First Course in Linear Algebra: with Optional Introduction to Groups, Rings, and Fields*, Boston: Houghton Mifflin Co., ISBN 0-395-14017-X
- Bretscher, Otto (2005), *Linear Algebra with Applications* (3rd ed.), Prentice Hall
- Bronson, Richard (1970), *Matrix Methods: An Introduction*, New York: Academic Press, LCCN 70097490
- Bronson, Richard (1989), *Schaum's Outline of Theory and Problems of Matrix Operations*, New York: McGraw–Hill, ISBN 978-0-07-007978-6
- Brown, William C. (1991), *Matrices and Vector Spaces*, New York, NY: Marcel Dekker, ISBN 978-0-8247-8419-5
- Coburn, Nathaniel (1955), *Vector and Tensor Analysis*, New York, NY: Macmillan, OCLC 1029828
- Conrey, J. Brian (2007), *Ranks of Elliptic Curves and Random Matrix Theory*, Cambridge University Press, ISBN 978-0-521-69964-8
- Fraleigh, John B. (1976), *A First Course in Abstract Algebra* (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1
- Fudenberg, Drew; Tirole, Jean (1983), *Game Theory*, MIT Press
- Gilbarg, David; Trudinger, Neil S. (2001), *Elliptic Partial Differential Equations of Second Order* (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-3-540-41160-4
- Godsil, Chris; Royle, Gordon (2004), *Algebraic Graph Theory*, Graduate Texts in Mathematics, vol. 207, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-95220-8
- Golub, Gene H.; Van Loan, Charles F. (1996), *Matrix Computations* (3rd ed.), Johns Hopkins, ISBN 978-0-8018-5414-9
- Greub, Werner Hildbert (1975), *Linear Algebra*, Graduate Texts in Mathematics, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-90110-7
- Halmos, Paul Richard (1982), *A Hilbert Space Problem Book*, Graduate Texts in Mathematics, vol. 19 (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-90685-0, MR 0675952
- Horn, Roger A.; Johnson, Charles R. (1985), *Matrix Analysis*, Cambridge University Press, ISBN 978-0-521-38632-6
- Householder, Alston S. (1975), *The Theory of Matrices in Numerical Analysis*, New York, NY: Dover Publications, MR 0378371
- Kreyszig, Erwin (1972), *Advanced Engineering Mathematics* (3rd ed.), New York: Wiley, ISBN 0-471-50728-8
- Krzanowski, Wojtek J. (1988), *Principles of Multivariate Analysis*, Oxford Statistical Science Series, vol. 3, The Clarendon Press, Oxford University Press, ISBN 978-0-19-852211-9, MR 0969370
- Itô, Kiyosi, ed. (1987), *Encyclopedic Dictionary of Mathematics, Vol. I–IV* (2nd ed.), MIT Press, ISBN 978-0-262-09026-1, MR 0901762
- Lang, Serge (1969), *Analysis II*, Addison-Wesley
- Lang, Serge (1987a), *Calculus of Several Variables* (3rd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96405-8
- Lang, Serge (1987b), *Linear Algebra*, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96412-6
- Lang, Serge (2002), *Algebra*, Graduate Texts in Mathematics, vol. 211 (revised 3rd ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556
- Latouche, Guy; Ramaswami, Vaidyanathan (1999), *Introduction to Matrix Analytic Methods in Stochastic Modeling* (1st ed.), Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-425-8
- Manning, Christopher D.; Schütze, Hinrich (1999), *Foundations of Statistical Natural Language Processing*, MIT Press, ISBN 978-0-262-13360-9
- Mehata, K. M.; Srinivasan, S. K. (1978), *Stochastic Processes*, New York, NY: McGraw–Hill, ISBN 978-0-07-096612-3
- Mirsky, Leonid (1990), *An Introduction to Linear Algebra*, Courier Dover Publications, ISBN 978-0-486-66434-7
- Nering, Evar D. (1970), *Linear Algebra and Matrix Theory* (2nd ed.), New York: Wiley, LCCN 76-91646
- Nocedal, Jorge; Wright, Stephen J. (2006), *Numerical Optimization* (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, p. 449, ISBN 978-0-387-30303-1
- Oualline, Steve (2003), *Practical C++ Programming*, O'Reilly, ISBN 978-0-596-00419-4
- Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (1992), "LU Decomposition and Its Applications" (PDF), *Numerical Recipes in FORTRAN: The Art of Scientific Computing* (2nd ed.), Cambridge University Press, pp. 34–42, archived from the original on 2009-09-06
- Protter, Murray H.; Morrey, Charles B. Jr. (1970), *College Calculus with Analytic Geometry* (2nd ed.), Reading: Addison-Wesley, LCCN 76087042
- Punnen, Abraham P.; Gutin, Gregory (2002), *The Traveling Salesman Problem and Its Variations*, Boston, MA: Kluwer Academic Publishers, ISBN 978-1-4020-0664-7
- Reichl, Linda E. (2004), *The Transition to Chaos: Conservative Classical Systems and Quantum Manifestations*, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-98788-0
- Rowen, Louis Halle (2008), *Graduate Algebra: Noncommutative View*, Providence, RI: American Mathematical Society, ISBN 978-0-8218-4153-2
- Šolin, Pavel (2005), *Partial Differential Equations and the Finite Element Method*, Wiley-Interscience, ISBN 978-0-471-76409-0
- Stinson, Douglas R. (2005), *Cryptography*, Discrete Mathematics and its Applications, Chapman & Hall/CRC, ISBN 978-1-58488-508-5
- Stoer, Josef; Bulirsch, Roland (2002), *Introduction to Numerical Analysis* (3rd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-95452-3
- Ward, J. P. (1997), *Quaternions and Cayley Numbers*, Mathematics and its Applications, vol. 403, Dordrecht, NL: Kluwer Academic Publishers Group, doi:10.1007/978-94-011-5768-1, ISBN 978-0-7923-4513-8, MR 1458894
- Wolfram, Stephen (2003), *The Mathematica Book* (5th ed.), Champaign, IL: Wolfram Media, ISBN 978-1-57955-022-6

- Bohm, Arno (2001), *Quantum Mechanics: Foundations and Applications*, Springer, ISBN 0-387-95330-2
- Burgess, Cliff; Moore, Guy (2007), *The Standard Model. A Primer*, Cambridge University Press, ISBN 978-0-521-86036-9
- Guenther, Robert D. (1990), *Modern Optics*, John Wiley, ISBN 0-471-60538-7
- Itzykson, Claude; Zuber, Jean-Bernard (1980), *Quantum Field Theory*, McGraw–Hill, ISBN 0-07-032071-3
- Riley, Kenneth F.; Hobson, Michael P.; Bence, Stephen J. (1997), *Mathematical Methods for Physics and Engineering*, Cambridge University Press, ISBN 0-521-55506-X
- Schiff, Leonard I. (1968), *Quantum Mechanics* (3rd ed.), McGraw–Hill
- Weinberg, Steven (1995), *The Quantum Theory of Fields. Volume I: Foundations*, Cambridge University Press, ISBN 0-521-55001-7
- Wherrett, Brian S. (1987), *Group Theory for Atoms, Molecules and Solids*, Prentice–Hall International, ISBN 0-13-365461-3
- Zabrodin, Anton; Brezin, Édouard; Kazakov, Vladimir; Serban, Didina; Wiegmann, Paul (2006), *Applications of Random Matrices in Physics* (NATO Science Series II: Mathematics, Physics and Chemistry), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-1-4020-4530-1

- Cayley, A. (1858), "A memoir on the theory of matrices", *Phil. Trans.* **148**: 17–37; *Math. Papers II*: 475–496
- Bôcher, Maxime (2004), *Introduction to Higher Algebra*, New York, NY: Dover Publications, ISBN 978-0-486-49570-5; reprint of the 1907 original edition
- Cayley, Arthur (1889), *The Collected Mathematical Papers of Arthur Cayley*, vol. I (1841–1853), Cambridge University Press, pp. 123–126
- Dieudonné, Jean, ed. (1978), *Abrégé d'histoire des mathématiques 1700–1900*, Paris, FR: Hermann
- Hawkins, Thomas (1975), "Cauchy and the spectral theory of matrices", *Historia Mathematica*, **2**: 1–29, doi:10.1016/0315-0860(75)90032-4, ISSN 0315-0860, MR 0469635
- Knobloch, Eberhard (1994), "From Gauss to Weierstrass: determinant theory and its historical evaluations", *The Intersection of History and Mathematics*, Science Networks Historical Studies, vol. 15, Basel, Boston, Berlin: Birkhäuser, pp. 51–66, MR 1308079
- Kronecker, Leopold (1897), Hensel, Kurt (ed.), *Leopold Kronecker's Werke*, Teubner
- Mehra, Jagdish; Rechenberg, Helmut (1987), *The Historical Development of Quantum Theory* (1st ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96284-9
- Shen, Kangshen; Crossley, John N.; Lun, Anthony Wah-Cheung (1999), *Nine Chapters of the Mathematical Art, Companion and Commentary* (2nd ed.), Oxford University Press, ISBN 978-0-19-853936-0
- Weierstrass, Karl (1915), *Collected Works*, vol. 3

- "Matrix",
*Encyclopedia of Mathematics*, EMS Press, 2001 [1994] - Kaw, Autar K. (September 2008),
*Introduction to Matrix Algebra*, Lulu.com, ISBN 978-0-615-25126-4 -
*The Matrix Cookbook*(PDF), retrieved 24 March 2014 - Brookes, Mike (2005),
*The Matrix Reference Manual*, London: Imperial College , retrieved 10 Dec 2008

- MacTutor: Matrices and determinants
- Matrices and Linear Algebra on the Earliest Uses Pages
- Earliest Uses of Symbols for Matrices and Vectors

This page is based on the corresponding Wikipedia article.

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
