In linear algebra, the main diagonal (sometimes principal diagonal, primary diagonal, leading diagonal, major diagonal, or good diagonal) of a matrix $A$ is the list of entries $A_{i,j}$ where $i = j$. All off-diagonal elements are zero in a diagonal matrix.
For a square matrix, the diagonal (or main diagonal or principal diagonal) is the diagonal line of entries running from the top-left corner to the bottom-right corner. [1] [2] [3] For a matrix $A$ with row index $i$ and column index $j$, these are the entries $A_{i,j}$ with $i = j$. For example, the identity matrix can be defined as having entries of 1 on the main diagonal and zeroes elsewhere:

$$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
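For illustration, a short NumPy sketch (NumPy is assumed here purely as an example library) that extracts a main diagonal and builds an identity matrix:

```python
import numpy as np

# A 3x3 matrix; its main diagonal is the set of entries A[i, i].
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

print(np.diag(A))  # [1 5 9] -- the main diagonal
print(np.eye(3))   # 3x3 identity matrix: ones on the main diagonal, zeroes elsewhere
```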
The trace of a matrix is the sum of the diagonal elements.
The top-right to bottom-left diagonal is sometimes described as the minor diagonal or antidiagonal.
The off-diagonal entries are those not on the main diagonal. A diagonal matrix is one whose off-diagonal entries are all zero. [4] [5]
A superdiagonal entry is one that is directly above and to the right of the main diagonal. [6] [7] Just as diagonal entries are those $A_{i,j}$ with $j = i$, the superdiagonal entries are those with $j = i + 1$. For example, the non-zero entries of the following matrix all lie in the superdiagonal:

$$\begin{bmatrix} 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 4 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
Likewise, a subdiagonal entry is one that is directly below and to the left of the main diagonal, that is, an entry $A_{i,j}$ with $j = i - 1$. [8] General matrix diagonals can be specified by an index $k$ measured relative to the main diagonal: the main diagonal has $k = 0$; the superdiagonal has $k = 1$; the subdiagonal has $k = -1$; and in general, the $k$-diagonal consists of the entries $A_{i,j}$ with $j = i + k$.
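The index $k$ maps directly onto the offset argument of NumPy's `diag`; a minimal sketch, assuming NumPy:

```python
import numpy as np

A = np.arange(1, 17).reshape(4, 4)  # 4x4 matrix with entries 1..16

print(np.diag(A, k=0))   # main diagonal (k = 0):  [ 1  6 11 16]
print(np.diag(A, k=1))   # superdiagonal (k = 1):  [ 2  7 12]
print(np.diag(A, k=-1))  # subdiagonal (k = -1):   [ 5 10 15]
```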
A banded matrix is one whose non-zero elements are restricted to a diagonal band. A tridiagonal matrix has non-zero entries only on the main diagonal, the superdiagonal, and the subdiagonal.
The antidiagonal (sometimes counter diagonal, secondary diagonal (*), trailing diagonal, minor diagonal, off diagonal, or bad diagonal) of an order-$N$ square matrix $B$ is the collection of entries $b_{i,j}$ such that $i + j = N + 1$ for all $1 \le i, j \le N$. That is, it runs from the top-right corner to the bottom-left corner.
(*) Secondary (as well as trailing, minor, and off) diagonals very often also mean the $k$-th diagonals parallel to the main or principal diagonal, i.e., the entries $a_{i,j}$ with $j = i + k$ for some nonzero $k = \pm 1, \pm 2, \pm 3, \ldots$ More generally and universally, the off-diagonal elements of a matrix are all elements not on the main diagonal, i.e., those $a_{i,j}$ with distinct indices $i \neq j$.
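As a sketch (again assuming NumPy), the antidiagonal can be read off by reversing the column order, which turns it into a main diagonal:

```python
import numpy as np

B = np.arange(1, 10).reshape(3, 3)  # [[1 2 3], [4 5 6], [7 8 9]]

# Entries b[i, j] with i + j = N + 1 in 1-based indexing; flipping the columns
# left-right moves the antidiagonal onto the main diagonal.
print(np.diag(np.fliplr(B)))  # [3 5 7] -- top right to bottom left
```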
In linear algebra, the trace of a square matrix A, denoted tr(A), is defined to be the sum of elements on the main diagonal of A. The trace is only defined for a square matrix.
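A one-line NumPy check of the definition (illustrative only):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
print(np.trace(A))  # 5 = 1 + 4, the sum of the main-diagonal entries
```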
In mathematics, particularly in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB.
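The shape rule can be demonstrated with a small sketch, assuming NumPy's `@` operator for matrix multiplication:

```python
import numpy as np

A = np.ones((2, 3))  # 2 rows, 3 columns
B = np.ones((3, 4))  # 3 rows, 4 columns

C = A @ B            # defined: A has 3 columns and B has 3 rows
print(C.shape)       # (2, 4) -- rows of A, columns of B

# np.ones((2, 3)) @ np.ones((4, 5)) would raise a ValueError,
# because the inner dimensions (3 and 4) do not match.
```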
In linear algebra, the Cholesky decomposition or Cholesky factorization is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices, and posthumously published in 1924. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
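A minimal sketch of the factorization for a small real symmetric positive-definite matrix, assuming NumPy's `linalg.cholesky` (which returns the lower-triangular factor):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])           # symmetric positive-definite

L = np.linalg.cholesky(A)            # lower-triangular factor
print(L)
print(np.allclose(L @ L.T, A))       # True: A = L L^T (L L^* in the complex case)
```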
In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is $\begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}$, while an example of a 3×3 diagonal matrix is $\begin{bmatrix} 6 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 4 \end{bmatrix}$. An identity matrix of any size, or any multiple of it, is a diagonal matrix called a scalar matrix, for example $\begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}$. In geometry, a diagonal matrix may be used as a scaling matrix, since matrix multiplication with it results in changing scale (size) and possibly also shape; only a scalar matrix results in a uniform change in scale.
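A short illustration of the scaling interpretation, assuming NumPy:

```python
import numpy as np

D = np.diag([2.0, 0.5])   # diagonal matrix used as a (non-uniform) scaling matrix
v = np.array([1.0, 1.0])
print(D @ v)              # [2.  0.5] -- each coordinate scaled by its diagonal entry

S = 3.0 * np.eye(2)       # scalar matrix: uniform scaling by 3
print(S @ v)              # [3. 3.]
```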
In linear algebra, an n-by-n square matrix A is called invertible if there exists an n-by-n square matrix B such that $AB = BA = I_n$, where $I_n$ denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A, and is called the (multiplicative) inverse of A, denoted by $A^{-1}$. Matrix inversion is the process of finding the matrix which, when multiplied by the original matrix, gives the identity matrix.
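As a sketch, the defining property $AB = BA = I_n$ can be checked numerically (NumPy assumed):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])            # determinant -2, so A is invertible

B = np.linalg.inv(A)
print(np.allclose(A @ B, np.eye(2)))  # True: A B = I_2
print(np.allclose(B @ A, np.eye(2)))  # True: B A = I_2
```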
In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices. The requirement that the square matrix be smaller than the original matrix is often omitted in the definition.
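For illustration, a first minor can be computed by deleting one row and one column and taking the determinant of what remains (NumPy assumed; the matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

# The minor obtained by removing row 0 and column 1 of A.
sub = np.delete(np.delete(A, 0, axis=0), 1, axis=1)
print(np.linalg.det(sub))  # det([[4, 6], [7, 10]]) = 40 - 42 = -2 (up to rounding)
```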
In linear algebra, a Hessenberg matrix is a special kind of square matrix, one that is "almost" triangular. To be exact, an upper Hessenberg matrix has zero entries below the first subdiagonal, and a lower Hessenberg matrix has zero entries above the first superdiagonal. They are named after Karl Hessenberg.
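A quick way to see the zero pattern of an upper Hessenberg matrix is to keep only the entries on or above the first subdiagonal; a sketch assuming NumPy:

```python
import numpy as np

A = np.arange(1.0, 17.0).reshape(4, 4)
H = np.triu(A, k=-1)   # zero out everything below the first subdiagonal
print(H)               # upper Hessenberg zero pattern
```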
In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal (lower diagonal), and the supradiagonal (upper diagonal). For example, the following matrix is tridiagonal:

$$\begin{bmatrix} 1 & 4 & 0 & 0 \\ 3 & 4 & 1 & 0 \\ 0 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \end{bmatrix}$$
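Such a matrix can be assembled from its three diagonals; a minimal sketch, assuming NumPy:

```python
import numpy as np

main = np.array([1, 2, 3, 4])    # main diagonal (k = 0)
upper = np.array([5, 6, 7])      # superdiagonal (k = 1)
lower = np.array([8, 9, 10])     # subdiagonal (k = -1)

# Summing three shifted diagonal matrices gives a tridiagonal matrix.
T = np.diag(main) + np.diag(upper, k=1) + np.diag(lower, k=-1)
print(T)
```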
In mathematics, the determinant of an m×m skew-symmetric matrix can always be written as the square of a polynomial in the matrix entries, a polynomial with integer coefficients that only depends on m. When m is odd, the polynomial is zero. When m is even, it is a nonzero polynomial of degree m/2, and is unique up to multiplication by ±1. The convention on skew-symmetric tridiagonal matrices, given below in the examples, then determines one specific polynomial, called the Pfaffian polynomial. The value of this polynomial, when applied to the entries of a skew-symmetric matrix, is called the Pfaffian of that matrix. The term Pfaffian was introduced by Cayley, who named it indirectly after Johann Friedrich Pfaff.
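A tiny numerical check of the relation $\det(A) = \operatorname{pf}(A)^2$ for the simplest even case (NumPy assumed; the value of a is arbitrary):

```python
import numpy as np

a = 3.0
A = np.array([[0.0,  a],
              [-a, 0.0]])    # 2x2 skew-symmetric matrix, Pfaffian pf(A) = a
print(np.linalg.det(A))      # a**2 = 9.0, i.e. det(A) = pf(A)**2
```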
In mathematics, the Smith normal form is a normal form that can be defined for any matrix with entries in a principal ideal domain (PID). The Smith normal form of a matrix is diagonal, and can be obtained from the original matrix by multiplying on the left and right by invertible square matrices. In particular, the integers are a PID, so one can always calculate the Smith normal form of an integer matrix. The Smith normal form is very useful for working with finitely generated modules over a PID, and in particular for deducing the structure of a quotient of a free module. It is named after the Irish mathematician Henry John Stephen Smith.
In linear algebra, a generalized eigenvector of an $n \times n$ matrix $A$ is a vector which satisfies certain criteria which are more relaxed than those for an (ordinary) eigenvector.
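A standard small example is a Jordan block, which has too few ordinary eigenvectors; the sketch below (NumPy assumed) shows a vector annihilated by $(A - \lambda I)^2$ but not by $(A - \lambda I)$:

```python
import numpy as np

A = np.array([[5.0, 1.0],
              [0.0, 5.0]])   # Jordan block with eigenvalue 5
v = np.array([0.0, 1.0])

N = A - 5.0 * np.eye(2)
print(N @ v)        # [1. 0.] -- nonzero, so v is not an ordinary eigenvector
print(N @ N @ v)    # [0. 0.] -- (A - 5I)^2 v = 0, so v is a generalized eigenvector
```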
In linear algebra, a nilpotent matrix is a square matrix N such that $N^k = 0$ for some positive integer $k$.
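An upper shift matrix is a standard example; a sketch assuming NumPy:

```python
import numpy as np

N = np.diag([1, 1, 1], k=1)            # 4x4 matrix with ones on the superdiagonal
print(np.linalg.matrix_power(N, 4))    # the zero matrix, so N^4 = 0 and N is nilpotent
```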
In linear algebra, an eigenvector or characteristic vector is a vector that has its direction unchanged by a given linear transformation. More precisely, an eigenvector $\mathbf{v}$ of a linear transformation $T$ is scaled by a constant factor $\lambda$ when the linear transformation is applied to it: $T(\mathbf{v}) = \lambda \mathbf{v}$. It is often important to know these vectors in linear algebra. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor $\lambda$.
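A direct check of the defining relation $T(\mathbf{v}) = \lambda \mathbf{v}$ for a simple diagonal transformation (NumPy assumed):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
v = np.array([1.0, 0.0])
print(A @ v)   # [2. 0.] = 2 * v, so v is an eigenvector of A with eigenvalue 2
```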
In linear algebra, an idempotent matrix is a matrix which, when multiplied by itself, yields itself. That is, the matrix $A$ is idempotent if and only if $A^2 = A$. For this product to be defined, $A$ must necessarily be a square matrix. Viewed this way, idempotent matrices are idempotent elements of matrix rings.
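A small illustration with an orthogonal projection, which is idempotent (NumPy assumed):

```python
import numpy as np

P = np.array([[1, 0],
              [0, 0]])                 # projection onto the x-axis
print(np.array_equal(P @ P, P))        # True: P^2 = P
```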
In numerical linear algebra, the Jacobi method is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The method is named after Carl Gustav Jacob Jacobi.
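A minimal sketch of the iteration, assuming NumPy; the helper name `jacobi` and the fixed iteration count are illustrative choices, not part of the original description:

```python
import numpy as np

def jacobi(A, b, iterations=50):
    """Solve Ax = b by Jacobi iteration; A should be strictly diagonally dominant."""
    D = np.diag(A)              # diagonal elements of A
    R = A - np.diagflat(D)      # off-diagonal part of A
    x = np.zeros_like(b, dtype=float)
    for _ in range(iterations):
        # Solve each equation for its diagonal unknown, using the previous
        # iterate for all the other unknowns.
        x = (b - R @ x) / D
    return x

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])      # strictly diagonally dominant
b = np.array([1.0, 2.0])
print(jacobi(A, b))             # approximately [0.1, 0.6], matching np.linalg.solve(A, b)
```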
In mathematics, particularly matrix theory and combinatorics, a Pascal matrix is a matrix containing the binomial coefficients as its elements. It is thus an encoding of Pascal's triangle in matrix form. There are three natural ways to achieve this: as a lower-triangular matrix, an upper-triangular matrix, or a symmetric matrix. For example, the 5 × 5 matrices are:

$$L_5 = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 \\ 1 & 2 & 1 & 0 & 0 \\ 1 & 3 & 3 & 1 & 0 \\ 1 & 4 & 6 & 4 & 1 \end{bmatrix}, \quad
U_5 = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 & 6 \\ 0 & 0 & 0 & 1 & 4 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad
S_5 = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 & 5 \\ 1 & 3 & 6 & 10 & 15 \\ 1 & 4 & 10 & 20 & 35 \\ 1 & 5 & 15 & 35 & 70 \end{bmatrix}$$
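The three forms can be generated from the binomial coefficients directly; a sketch assuming NumPy and Python's `math.comb`:

```python
import numpy as np
from math import comb

n = 5
# Lower-triangular Pascal matrix: L[i, j] = C(i, j), which is 0 for j > i.
L = np.array([[comb(i, j) for j in range(n)] for i in range(n)])
U = L.T        # upper-triangular Pascal matrix
S = L @ U      # symmetric Pascal matrix: S[i, j] = C(i + j, i)
print(S)
```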
In mathematics, a shift matrix is a binary matrix with ones only on the superdiagonal or subdiagonal, and zeroes elsewhere. A shift matrix U with ones on the superdiagonal is an upper shift matrix. The alternative subdiagonal matrix L is unsurprisingly known as a lower shift matrix. The (i, j)-th components of U and L are

$$U_{ij} = \delta_{i+1,j}, \qquad L_{ij} = \delta_{i,j+1},$$

where $\delta_{ij}$ is the Kronecker delta.
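A sketch of the shifting action, assuming NumPy (`np.eye` with an offset builds the shift matrices):

```python
import numpy as np

n = 4
U = np.eye(n, k=1)    # upper shift matrix: ones on the superdiagonal
L = np.eye(n, k=-1)   # lower shift matrix: ones on the subdiagonal

x = np.array([1.0, 2.0, 3.0, 4.0])
print(U @ x)  # [2. 3. 4. 0.] -- components shifted up by one position
print(L @ x)  # [0. 1. 2. 3.] -- components shifted down by one position
```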
In mathematics, particularly matrix theory, the n×n Lehmer matrix is the constant symmetric matrix $A$ defined by

$$A_{ij} = \frac{\min(i, j)}{\max(i, j)}.$$
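A short construction from the definition (NumPy assumed; n = 4 chosen arbitrarily):

```python
import numpy as np

n = 4
i, j = np.indices((n, n)) + 1            # 1-based row and column indices
A = np.minimum(i, j) / np.maximum(i, j)  # A[i, j] = min(i, j) / max(i, j)
print(A)
```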
In linear algebra, the modal matrix is used in the diagonalization process involving eigenvalues and eigenvectors: for a diagonalizable matrix A, the modal matrix M is the matrix whose columns are the eigenvectors of A, so that $M^{-1}AM$ is a diagonal matrix of the eigenvalues.
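A sketch of diagonalization with the modal matrix, assuming NumPy (`np.linalg.eig` returns the eigenvectors as columns):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])                   # diagonalizable: distinct eigenvalues 2 and 3

eigenvalues, M = np.linalg.eig(A)            # columns of M are eigenvectors: the modal matrix
D = np.linalg.inv(M) @ A @ M                 # similarity transform by the modal matrix
print(np.allclose(D, np.diag(eigenvalues)))  # True: D is diagonal with the eigenvalues
```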
A quaternionic matrix is a matrix whose elements are quaternions.