In mathematics, specifically linear algebra, a real symmetric matrix A is copositive if $x^{\mathsf{T}} A x \geq 0$ for every nonnegative vector $x \geq 0$ (where the inequality $x \geq 0$ should be understood coordinate-wise). Some authors do not require A to be symmetric. [1] The collection of all copositive matrices is a proper cone; [2] it includes as a subset the collection of real positive-definite matrices.
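As an illustration of the definition, the following sketch tests the defining inequality $x^{\mathsf{T}} A x \geq 0$ on randomly sampled nonnegative vectors; it assumes NumPy, and the helper name `looks_copositive` is ours, not from the literature. Passing such a sampling test is only a necessary condition and does not certify copositivity.

```python
import numpy as np

def looks_copositive(A, trials=100_000, seed=0):
    """Sampling-based necessary check: x^T A x >= 0 for random nonnegative x."""
    rng = np.random.default_rng(seed)
    A = np.asarray(A, dtype=float)
    X = rng.random((trials, A.shape[0]))            # rows are nonnegative test vectors
    quad_forms = np.einsum('ij,jk,ik->i', X, A, X)  # x^T A x for every sampled x
    return bool(np.all(quad_forms >= -1e-12))       # no violation found (not a proof)

print(looks_copositive(np.eye(3)))                            # True: the identity is copositive
print(looks_copositive(np.array([[-1.0, 0.0], [0.0, 1.0]])))  # False: a negative diagonal entry rules it out
```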
Copositive matrices find applications in economics, operations research, and statistics.
It is easy to see that the sum of two copositive matrices is a copositive matrix. More generally, any conical combination of copositive matrices is copositive.
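Indeed, for copositive matrices A and B, scalars $\lambda, \mu \geq 0$, and any vector $x \geq 0$,
$$x^{\mathsf{T}}(\lambda A + \mu B)\,x = \lambda\, x^{\mathsf{T}} A x + \mu\, x^{\mathsf{T}} B x \;\geq\; 0,$$
so $\lambda A + \mu B$ is again copositive.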
Let A be a copositive matrix. Then its diagonal entries are necessarily nonnegative, since $a_{ii} = e_i^{\mathsf{T}} A\, e_i \geq 0$ for each standard basis vector $e_i \geq 0$.
Every copositive matrix of order less than 5 can be expressed as the sum of a positive semidefinite matrix and a nonnegative matrix. [4] A counterexample for order 5 is given by a copositive matrix known as the Horn matrix: [5]
$$H = \begin{pmatrix} 1 & -1 & 1 & 1 & -1 \\ -1 & 1 & -1 & 1 & 1 \\ 1 & -1 & 1 & -1 & 1 \\ 1 & 1 & -1 & 1 & -1 \\ -1 & 1 & 1 & -1 & 1 \end{pmatrix}.$$
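As a rough numerical check (a sketch assuming NumPy; the sampling test is only a necessary condition, as above), one can verify that the Horn matrix is neither positive semidefinite nor entrywise nonnegative, while no violation of $x^{\mathsf{T}} H x \geq 0$ appears for sampled nonnegative vectors:

```python
import numpy as np

H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]], dtype=float)

print(np.linalg.eigvalsh(H).min() < 0)   # True: H has a negative eigenvalue, so it is not PSD
print((H >= 0).all())                    # False: H is not a nonnegative matrix

rng = np.random.default_rng(0)
X = rng.random((100_000, 5))             # nonnegative test vectors
print((np.einsum('ij,jk,ik->i', X, H, X) >= -1e-12).all())   # True: no violation found
```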
The class of copositive matrices can be characterized using principal submatrices. One such characterization is due to Wilfred Kaplan: [6] a symmetric matrix A is copositive if and only if no principal submatrix of A has an eigenvector $v > 0$ with associated eigenvalue $\lambda < 0$.
Several other characterizations are presented in a survey by Ikramov. [3]
The problem of deciding whether a matrix is copositive is co-NP-complete. [7]
In mathematics, a symmetric matrix $M$ with real entries is positive-definite if the real number $x^{\mathsf{T}} M x$ is positive for every nonzero real column vector $x$, where $x^{\mathsf{T}}$ is the row vector transpose of $x$. More generally, a Hermitian matrix $M$ is positive-definite if the real number $z^{*} M z$ is positive for every nonzero complex column vector $z$, where $z^{*}$ denotes the conjugate transpose of $z$.
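For real symmetric matrices, positive definiteness can be tested via the Cholesky factorization, which exists exactly when the matrix is positive definite. A minimal sketch, assuming NumPy; the helper name and example matrices are ours:

```python
import numpy as np

def is_positive_definite(M):
    """Real symmetric M is positive definite iff its Cholesky factorization exists."""
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]])))   # True  (eigenvalues 1 and 3)
print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]])))   # False (eigenvalue -1)
```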
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, a matrix $A$ is symmetric if $A = A^{\mathsf{T}}$.
In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order $n$. Any two square matrices of the same order can be added and multiplied.
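A small NumPy sketch (the matrices are arbitrary examples) illustrating squareness, symmetry, and the fact that square matrices of the same order can be added and multiplied:

```python
import numpy as np

A = np.array([[1, 2], [2, 3]])   # 2-by-2, hence square, of order 2
B = np.array([[0, 1], [4, 5]])   # another square matrix of order 2

print(A.shape[0] == A.shape[1])  # True: A has as many rows as columns
print(np.array_equal(A, A.T))    # True: A equals its transpose, so it is symmetric
print(A + B)                     # sum of two square matrices of the same order
print(A @ B)                     # product of two square matrices of the same order
```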
In graph theory and computer science, an adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph.
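A minimal sketch, assuming NumPy and an arbitrary example edge list, of how an adjacency matrix is built from an undirected graph:

```python
import numpy as np

# Undirected graph on vertices 0..3 with an arbitrary example edge list.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

n = 4
adj = np.zeros((n, n), dtype=int)
for u, v in edges:
    adj[u, v] = 1
    adj[v, u] = 1    # undirected graph: the adjacency matrix is symmetric

print(adj)           # entry (i, j) is 1 exactly when vertices i and j are adjacent
```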
In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems.
In mathematics, the signature (v, p, r) of a metric tensor g (or equivalently, a real quadratic form thought of as a real symmetric bilinear form on a finite-dimensional vector space) is the number (counted with multiplicity) of positive, negative and zero eigenvalues of the real symmetric matrix $g_{ab}$ of the metric tensor with respect to a basis. In relativistic physics, v conventionally represents the number of time or virtual dimensions, and p the number of space or physical dimensions. Alternatively, it can be defined as the dimensions of a maximal positive and null subspace. By Sylvester's law of inertia these numbers do not depend on the choice of basis and thus can be used to classify the metric. The signature is often denoted by a pair of integers (v, p) implying r = 0, or as an explicit list of signs of eigenvalues such as (+, −, −, −) or (−, +, +, +) for the signatures (1, 3, 0) and (3, 1, 0), respectively.
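As an illustration (a sketch assuming NumPy; the helper name and tolerance handling are ours), the signature can be computed by counting eigenvalue signs, here for the Minkowski metric diag(1, −1, −1, −1):

```python
import numpy as np

def signature(g, tol=1e-12):
    """Return (v, p, r): counts of positive, negative and zero eigenvalues of g."""
    eig = np.linalg.eigvalsh(g)
    return (int((eig > tol).sum()), int((eig < -tol).sum()), int((np.abs(eig) <= tol).sum()))

minkowski = np.diag([1.0, -1.0, -1.0, -1.0])   # metric with sign pattern (+, -, -, -)
print(signature(minkowski))                    # (1, 3, 0)
```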
In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal (lower diagonal), and the superdiagonal (upper diagonal). For example, the following matrix is tridiagonal:
$$\begin{pmatrix} 1 & 4 & 0 & 0 \\ 3 & 4 & 1 & 0 \\ 0 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \end{pmatrix}.$$
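The same example can be assembled from its three diagonals, for instance with NumPy (a sketch; `np.diag` places a vector on a chosen diagonal):

```python
import numpy as np

# Assemble the tridiagonal example above from its three diagonals.
main  = [1, 4, 3, 3]
upper = [4, 1, 4]
lower = [3, 2, 1]

T = np.diag(main) + np.diag(upper, k=1) + np.diag(lower, k=-1)
print(T)
```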
Sylvester's law of inertia is a theorem in matrix algebra about certain properties of the coefficient matrix of a real quadratic form that remain invariant under a change of basis. Namely, if $A$ is a symmetric matrix, then for any invertible matrix $S$, the number of positive, negative and zero eigenvalues of $B = S A S^{\mathsf{T}}$ is constant. This result is particularly useful when $B$ is diagonal, as the inertia of a diagonal matrix can easily be obtained by looking at the sign of its diagonal elements.
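A numerical illustration of the invariance (a sketch assuming NumPy; S is an arbitrary invertible matrix, and the eigenvalue-sign counting uses a small tolerance):

```python
import numpy as np

def inertia(M, tol=1e-9):
    """Counts of positive, negative and zero eigenvalues of a symmetric matrix."""
    eig = np.linalg.eigvalsh(M)
    return (int((eig > tol).sum()), int((eig < -tol).sum()), int((np.abs(eig) <= tol).sum()))

A = np.diag([2.0, -1.0, 0.0])             # diagonal: inertia read off the diagonal signs
rng = np.random.default_rng(1)
S = rng.random((3, 3)) + 3 * np.eye(3)    # strictly diagonally dominant, hence invertible

print(inertia(A))                         # (1, 1, 1)
print(inertia(S @ A @ S.T))               # (1, 1, 1): unchanged under the congruence S A S^T
```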
In mathematics, the polar decomposition of a square real or complex matrix $A$ is a factorization of the form $A = UP$, where $U$ is a unitary matrix and $P$ is a positive semi-definite Hermitian matrix, both square and of the same size.
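A minimal sketch, assuming SciPy's `scipy.linalg.polar` is available; the input matrix is an arbitrary real example:

```python
import numpy as np
from scipy.linalg import polar

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

U, P = polar(A)                    # A = U @ P with U unitary and P positive semi-definite
print(np.allclose(A, U @ P))                      # True
print(np.allclose(U @ U.T, np.eye(2)))            # True: U is orthogonal (real unitary)
print(np.all(np.linalg.eigvalsh(P) >= -1e-12))    # True: P is positive semi-definite
```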
In the mathematical field of graph theory, the Laplacian matrix, also called the graph Laplacian, admittance matrix, Kirchhoff matrix or discrete Laplacian, is a matrix representation of a graph. Named after Pierre-Simon Laplace, the graph Laplacian matrix can be viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian obtained by the finite difference method.
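A small sketch, assuming NumPy, that forms the Laplacian L = D − A of a path graph from its degree matrix D and adjacency matrix A:

```python
import numpy as np

# Path graph 0 - 1 - 2 as a small example.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # graph Laplacian
print(L)
# Each row of L sums to zero, and L is positive semi-definite.
print(np.all(np.linalg.eigvalsh(L.astype(float)) >= -1e-12))   # True
```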
In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron and Georg Frobenius, asserts that a real square matrix with positive entries has a unique eigenvalue of largest magnitude, that this eigenvalue is real, and that the corresponding eigenvector can be chosen to have strictly positive components; the theorem also asserts a similar statement for certain classes of nonnegative matrices. It has important applications to probability theory, to the theory of dynamical systems, to economics, to demography, to social networks, to Internet search engines (PageRank), and even to the ranking of American football teams. The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors was Edmund Landau.
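A sketch of the classical power iteration (assuming NumPy; the matrix is an arbitrary entrywise-positive example), which converges to the Perron eigenpair for a positive matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])         # entrywise positive matrix

v = np.ones(2)
for _ in range(100):               # power iteration
    v = A @ v
    v /= np.linalg.norm(v)

perron_value = v @ A @ v           # Rayleigh quotient approximates the Perron root
print(perron_value)                # ~3.618, the eigenvalue of largest magnitude
print(np.all(v > 0))               # True: the Perron eigenvector is entrywise positive
```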
In mathematics, the square root of a matrix extends the notion of square root from numbers to matrices. A matrix B is said to be a square root of A if the matrix product BB is equal to A.
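A minimal sketch assuming SciPy's `scipy.linalg.sqrtm`; the input is an arbitrary example, and the printed check confirms B B = A:

```python
import numpy as np
from scipy.linalg import sqrtm

A = np.array([[4.0, 0.0],
              [0.0, 9.0]])

B = sqrtm(A)                   # principal square root of A
print(B)                       # diag(2, 3)
print(np.allclose(B @ B, A))   # True: B B equals A
```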
In mathematics, a nonnegative matrix, written $X \geq 0$, is a matrix in which all the elements are greater than or equal to zero.
Semidefinite programming (SDP) is a subfield of mathematical programming concerned with the optimization of a linear objective function over the intersection of the cone of positive semidefinite matrices with an affine space, i.e., a spectrahedron.
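A minimal modeling sketch, assuming the cvxpy package with an SDP-capable solver (such as the bundled SCS) is installed; the cost matrix and the trace constraint are arbitrary examples. Minimizing tr(CX) over the spectrahedron {X ⪰ 0, tr X = 1} returns the smallest eigenvalue of C:

```python
import cvxpy as cp
import numpy as np

C = np.array([[1.0, 0.5],
              [0.5, 2.0]])                 # arbitrary example cost matrix

X = cp.Variable((2, 2), symmetric=True)
constraints = [X >> 0,                     # X lies in the positive semidefinite cone
               cp.trace(X) == 1]           # an affine constraint
problem = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
problem.solve()

print(problem.value)   # approximately (3 - sqrt(2)) / 2, the smallest eigenvalue of C
print(X.value)
```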
In mathematics, a square matrix is said to be diagonally dominant if, for every row of the matrix, the magnitude of the diagonal entry in a row is greater than or equal to the sum of the magnitudes of all the other (off-diagonal) entries in that row. More precisely, the matrix $A$ is diagonally dominant if $|a_{ii}| \geq \sum_{j \neq i} |a_{ij}|$ for every $i$, where $a_{ij}$ denotes the entry in the $i$-th row and $j$-th column.
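A direct check of this row-wise condition (a sketch assuming NumPy; the helper name and test matrices are arbitrary examples):

```python
import numpy as np

def is_diagonally_dominant(A):
    """Row diagonal dominance: |a_ii| >= sum of |a_ij| over j != i, for every row i."""
    A = np.abs(np.asarray(A, dtype=float))
    off_diagonal_sums = A.sum(axis=1) - np.diag(A)
    return bool(np.all(np.diag(A) >= off_diagonal_sums))

print(is_diagonally_dominant([[3, -2, 1], [1, -3, 2], [-1, 2, 4]]))   # True
print(is_diagonally_dominant([[1, 2], [3, 1]]))                       # False
```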
In mathematics, a Hamiltonian matrix is a 2n-by-2n matrix A such that JA is symmetric, where J is the skew-symmetric matrix $J = \begin{pmatrix} 0_n & I_n \\ -I_n & 0_n \end{pmatrix}$ and $I_n$ is the n-by-n identity matrix.
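A sketch, assuming NumPy, that builds J for a given n and tests whether J A is symmetric; the 2-by-2 examples are arbitrary:

```python
import numpy as np

def is_hamiltonian(A):
    """A 2n-by-2n matrix A is Hamiltonian when J @ A is symmetric."""
    n = A.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n),       np.zeros((n, n))]])
    return bool(np.allclose(J @ A, (J @ A).T))

# For n = 1, any matrix of the form [[a, b], [c, -a]] is Hamiltonian.
print(is_hamiltonian(np.array([[1.0, 2.0], [3.0, -1.0]])))   # True
print(is_hamiltonian(np.array([[1.0, 2.0], [3.0, 4.0]])))    # False
```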
In mathematics, especially linear algebra, an M-matrix is a matrix whose off-diagonal entries are less than or equal to zero and whose eigenvalues have nonnegative real parts. The set of non-singular M-matrices is a subset of the class of P-matrices, and also of the class of inverse-positive matrices. The name M-matrix was seemingly originally chosen by Alexander Ostrowski in reference to Hermann Minkowski, who proved that if a Z-matrix has all of its row sums positive, then the determinant of that matrix is positive.
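A rough numerical test of the two defining conditions (a sketch assuming NumPy; the helper name, tolerance, and example matrices are ours):

```python
import numpy as np

def is_m_matrix(A, tol=1e-12):
    """Off-diagonal entries <= 0 and all eigenvalues with nonnegative real parts."""
    A = np.asarray(A, dtype=float)
    off_diag = A - np.diag(np.diag(A))
    if np.any(off_diag > tol):
        return False
    return bool(np.all(np.linalg.eigvals(A).real >= -tol))

print(is_m_matrix([[2, -1], [-1, 2]]))   # True:  eigenvalues 1 and 3
print(is_m_matrix([[1, -3], [-3, 1]]))   # False: eigenvalue -2
```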
In mathematics, Sylvester’s criterion is a necessary and sufficient criterion to determine whether a Hermitian matrix is positive-definite.
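A direct sketch of the criterion, assuming NumPy: compute the leading principal minors and check that they are all positive (the helper name and test matrices are ours):

```python
import numpy as np

def sylvester_positive_definite(A):
    """A Hermitian matrix is positive definite iff all leading principal minors are positive."""
    A = np.asarray(A)
    return all(np.linalg.det(A[:k, :k]).real > 0 for k in range(1, A.shape[0] + 1))

print(sylvester_positive_definite(np.array([[ 2.0, -1.0,  0.0],
                                            [-1.0,  2.0, -1.0],
                                            [ 0.0, -1.0,  2.0]])))   # True:  minors 2, 3, 4
print(sylvester_positive_definite(np.array([[1.0, 2.0],
                                            [2.0, 1.0]])))           # False: second minor is -3
```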
In linear algebra, a branch of mathematics, a (multiplicative) compound matrix is a matrix whose entries are all minors, of a given size, of another matrix. Compound matrices are closely related to exterior algebras, and their computation appears in a wide array of problems, such as in the analysis of nonlinear time-varying dynamical systems and generalizations of positive systems, cooperative systems and contracting systems.
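A sketch, assuming NumPy, that builds the k-th compound matrix from all k-by-k minors (row and column index sets taken in lexicographic order) and checks the multiplicative property $C_k(AB) = C_k(A)\,C_k(B)$; the example matrices are arbitrary:

```python
import numpy as np
from itertools import combinations

def compound(A, k):
    """k-th multiplicative compound: all k-by-k minors of A, index sets in lexicographic order."""
    A = np.asarray(A, dtype=float)
    rows = list(combinations(range(A.shape[0]), k))
    cols = list(combinations(range(A.shape[1]), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in cols] for r in rows])

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 4.0, 1.0],
              [0.0, 1.0, 2.0]])
B = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 3.0, 1.0]])

print(compound(A, 2))   # the 3-by-3 matrix of all 2-by-2 minors of A
print(np.allclose(compound(A @ B, 2), compound(A, 2) @ compound(B, 2)))   # True
```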
In mathematics, particularly in linear algebra and applications, matrix analysis is the study of matrices and their algebraic properties. Some particular topics out of many include operations defined on matrices, functions of matrices, and the eigenvalues of matrices.