Combinatorial matrix theory is a branch of linear algebra and combinatorics that studies matrices in terms of the patterns of nonzeros and of positive and negative values in their coefficients.[1][2][3]
Concepts and topics studied within combinatorial matrix theory include those surveyed below.
Researchers in combinatorial matrix theory include Richard A. Brualdi and Pauline van den Driessche.
In mathematics, Gaussian elimination, also known as row reduction, is an algorithm for solving systems of linear equations. It consists of a sequence of operations performed on the corresponding matrix of coefficients. This method can also be used to compute the rank of a matrix, the determinant of a square matrix, and the inverse of an invertible matrix. The method is named after Carl Friedrich Gauss (1777–1855). To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations: swapping two rows, multiplying a row by a nonzero number, and adding a multiple of one row to another row.
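As an illustration (a minimal sketch of our own, not taken from the source text; the function name row_echelon is ours), forward elimination with partial pivoting can be written as follows:

```python
def row_echelon(a):
    """Reduce matrix a (a list of lists of floats) in place to row echelon form."""
    rows, cols = len(a), len(a[0])
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Partial pivoting: choose the row with the largest entry in this column.
        best = max(range(pivot_row, rows), key=lambda r: abs(a[r][col]))
        if a[best][col] == 0:
            continue  # no pivot available in this column
        a[pivot_row], a[best] = a[best], a[pivot_row]  # elementary operation: swap two rows
        for r in range(pivot_row + 1, rows):
            factor = a[r][col] / a[pivot_row][col]
            for c in range(col, cols):
                # elementary operation: add a multiple of the pivot row to another row
                a[r][c] -= factor * a[pivot_row][c]
        pivot_row += 1
    return a

# Augmented matrix of a 3x3 linear system; after reduction, back substitution
# recovers the solution (x, y, z) = (2, 3, -1).
m = [[2.0, 1.0, -1.0, 8.0],
     [-3.0, -1.0, 2.0, -11.0],
     [-2.0, 1.0, 2.0, -3.0]]
row_echelon(m)
```

Applied to the augmented matrix of a linear system as above, back substitution on the reduced matrix then yields the solution.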
In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is $\left[\begin{smallmatrix}3 & 0 \\ 0 & 2\end{smallmatrix}\right]$, while an example of a 3×3 diagonal matrix is $\left[\begin{smallmatrix}6 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 4\end{smallmatrix}\right]$. An identity matrix of any size, or any multiple of it, is a diagonal matrix.
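For instance (our own illustration, using NumPy's numpy.diag), a diagonal matrix can be built from, and reduced back to, its main diagonal:

```python
import numpy as np

# Build a 3x3 diagonal matrix from its main-diagonal entries.
d = np.diag([6, 5, 4])
# array([[6, 0, 0],
#        [0, 5, 0],
#        [0, 0, 4]])

# Applied to a 2-D array, the same function extracts the diagonal.
assert (np.diag(d) == [6, 5, 4]).all()
```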
In linear algebra, the permanent of a square matrix is a function of the matrix similar to the determinant. The permanent, as well as the determinant, is a polynomial in the entries of the matrix. Both are special cases of a more general function of a matrix called the immanant.
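Concretely (a standard formula, added here for reference), for an $n \times n$ matrix $A = (a_{ij})$ the permanent omits exactly the signs that appear in the Leibniz formula for the determinant:

```latex
\operatorname{perm}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)},
\qquad
\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}.
```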
In graph theory and computer science, an adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph.
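For example (a minimal sketch of our own), the adjacency matrix of an undirected graph on vertices 0, …, n−1 can be built from an edge list:

```python
def adjacency_matrix(n, edges):
    """Return the n x n adjacency matrix of an undirected graph."""
    a = [[0] * n for _ in range(n)]
    for u, v in edges:
        a[u][v] = 1
        a[v][u] = 1  # symmetric, since the graph is undirected
    return a

# A path graph on 3 vertices: 0 - 1 - 2
print(adjacency_matrix(3, [(0, 1), (1, 2)]))
# [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```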
In mathematics, a generalized permutation matrix is a matrix with the same nonzero pattern as a permutation matrix, i.e. there is exactly one nonzero entry in each row and each column. Unlike a permutation matrix, where the nonzero entry must be 1, in a generalized permutation matrix the nonzero entry can be any nonzero value. An example of a generalized permutation matrix is $\left[\begin{smallmatrix}0 & 0 & 3 \\ -2 & 0 & 0 \\ 0 & 1 & 0\end{smallmatrix}\right]$.
In numerical analysis and scientific computing, a sparse matrix or sparse array is a matrix in which most of the elements are zero. There is no strict definition regarding the proportion of zero-value elements for a matrix to qualify as sparse but a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considered dense. The number of zero-valued elements divided by the total number of elements is sometimes referred to as the sparsity of the matrix.
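As a small sketch of our own (the names are illustrative), the sparsity ratio and a coordinate-list (COO) representation that stores only the nonzero entries:

```python
def to_coo(dense):
    """Store only the nonzero entries as (row, col, value) triples."""
    return [(i, j, v)
            for i, row in enumerate(dense)
            for j, v in enumerate(row) if v != 0]

def sparsity(dense):
    """Fraction of zero-valued elements."""
    total = len(dense) * len(dense[0])
    return 1 - len(to_coo(dense)) / total

m = [[0, 0, 3],
     [0, 0, 0],
     [7, 0, 0]]
print(to_coo(m))     # [(0, 2, 3), (2, 0, 7)]
print(sparsity(m))   # 0.777... (7 of 9 entries are zero)
```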
In mathematics, a Hadamard matrix, named after the French mathematician Jacques Hadamard, is a square matrix whose entries are either +1 or −1 and whose rows are mutually orthogonal. In geometric terms, this means that each pair of rows in a Hadamard matrix represents two perpendicular vectors, while in combinatorial terms, it means that each pair of rows has matching entries in exactly half of their columns and mismatched entries in the remaining columns. It is a consequence of this definition that the corresponding properties hold for columns as well as rows.
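One standard way to produce Hadamard matrices of order $2^k$ is Sylvester's doubling construction, sketched below (our own illustration):

```python
def sylvester(k):
    """Hadamard matrix of order 2**k via Sylvester's construction."""
    h = [[1]]
    for _ in range(k):
        # H_{2n} = [[H, H], [H, -H]]
        h = [row + row for row in h] + \
            [row + [-x for x in row] for row in h]
    return h

for row in sylvester(2):
    print(row)
# [1, 1, 1, 1]
# [1, -1, 1, -1]
# [1, 1, -1, -1]
# [1, -1, -1, 1]
```

Any two distinct rows of the output agree in exactly half of their columns, matching the combinatorial description above.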
Combinatorial design theory is the part of combinatorial mathematics that deals with the existence, construction and properties of systems of finite sets whose arrangements satisfy generalized concepts of balance and/or symmetry. These concepts are not made precise so that a wide range of objects can be thought of as being under the same umbrella. At times this might involve the numerical sizes of set intersections as in block designs, while at other times it could involve the spatial arrangement of entries in an array as in sudoku grids.
In mathematics, especially in probability and combinatorics, a doubly stochastic matrix (also called bistochastic matrix) is a square matrix $X = (x_{ij})$ of nonnegative real numbers, each of whose rows and columns sums to 1, i.e., $\sum_i x_{ij} = \sum_j x_{ij} = 1$.
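A quick check of the definition (a sketch of our own):

```python
def is_doubly_stochastic(x, tol=1e-9):
    """Nonnegative square matrix whose every row and column sums to 1."""
    n = len(x)
    if any(len(row) != n for row in x):
        return False
    if any(v < 0 for row in x for v in row):
        return False
    rows_ok = all(abs(sum(row) - 1) < tol for row in x)
    cols_ok = all(abs(sum(x[i][j] for i in range(n)) - 1) < tol
                  for j in range(n))
    return rows_ok and cols_ok

# Every permutation matrix is doubly stochastic, as is this uniform matrix:
print(is_doubly_stochastic([[0.5, 0.5], [0.5, 0.5]]))  # True
```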
A logical matrix, binary matrix, relation matrix, Boolean matrix, or (0, 1)-matrix is a matrix with entries from the Boolean domain B = {0, 1}. Such a matrix can be used to represent a binary relation between a pair of finite sets. It is an important tool in combinatorial mathematics and theoretical computer science.
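For illustration (our own sketch), composing two binary relations corresponds to the Boolean matrix product, with logical OR in place of addition and AND in place of multiplication:

```python
def boolean_product(a, b):
    """Boolean matrix product: result[i][j] = OR over k of (a[i][k] AND b[k][j])."""
    n, m, p = len(a), len(b), len(b[0])
    return [[int(any(a[i][k] and b[k][j] for k in range(m)))
             for j in range(p)] for i in range(n)]

r = [[1, 0], [0, 1]]   # identity relation on a 2-element set
s = [[0, 1], [1, 0]]   # "swap" relation
print(boolean_product(r, s))  # [[0, 1], [1, 0]]
```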
In mathematics, a square matrix is said to be diagonally dominant if, for every row of the matrix, the magnitude of the diagonal entry in a row is larger than or equal to the sum of the magnitudes of all the other (non-diagonal) entries in that row. More precisely, the matrix $A = (a_{ij})$ is diagonally dominant if $|a_{ii}| \geq \sum_{j \neq i} |a_{ij}|$ for every row index $i$.
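The condition translates directly into code (our own sketch):

```python
def is_diagonally_dominant(a):
    """Check |a[i][i]| >= sum of |a[i][j]| over j != i, for every row i."""
    return all(
        abs(row[i]) >= sum(abs(v) for j, v in enumerate(row) if j != i)
        for i, row in enumerate(a)
    )

print(is_diagonally_dominant([[3, -2, 1],
                              [1,  3, 2],
                              [-1, 2, 4]]))  # True
```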
In mathematics, a conference matrix (also called a C-matrix) is a square matrix $C$ with 0 on the diagonal and +1 and −1 off the diagonal, such that $C^{\mathsf{T}}C$ is a multiple of the identity matrix $I$. Thus, if the matrix has order $n$, $C^{\mathsf{T}}C = (n-1)I$. Some authors use a more general definition, which requires there to be a single 0 in each row and column but not necessarily on the diagonal.
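A small worked instance of the definition (added here for illustration), at order $n = 2$:

```latex
C = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
\qquad
C^{\mathsf{T}}C = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = (2-1)\,I.
```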
In mathematics, a matrix is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object.
Herbert John Ryser was a professor of mathematics, widely regarded as one of the major figures in combinatorics in the 20th century. He is the namesake of the Bruck–Ryser–Chowla theorem, Ryser's formula for the computation of the permanent of a matrix, and Ryser's conjecture.
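Ryser's formula evaluates the permanent by inclusion–exclusion over column subsets. The direct implementation below is our own sketch; practical versions enumerate subsets in Gray-code order to save a factor of $n$:

```python
from itertools import combinations

def permanent_ryser(a):
    """Permanent of a square matrix via Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i sum_{j in S} a[i][j].
    Runs in O(2^n * n^2) time, versus O(n * n!) for naive expansion."""
    n = len(a)
    total = 0
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** size * prod
    return (-1) ** n * total

# The permanent of the all-ones n x n matrix is n!.
print(permanent_ryser([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))  # 6
```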
In graph theory, the bipartite double cover of an undirected graph G is a bipartite, covering graph of G, with twice as many vertices as G. It can be constructed as the tensor product of graphs, G × K2. It is also called the Kronecker double cover, canonical double cover or simply the bipartite double of G.
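Concretely (a sketch of our own), each vertex $v$ is duplicated as $(v, 0)$ and $(v, 1)$, and each edge $uv$ of $G$ becomes the two edges $(u,0)(v,1)$ and $(u,1)(v,0)$:

```python
def bipartite_double_cover(edges):
    """Tensor product G x K2: each edge uv of G yields (u,0)-(v,1) and (u,1)-(v,0)."""
    return [((u, 0), (v, 1)) for u, v in edges] + \
           [((u, 1), (v, 0)) for u, v in edges]

# The bipartite double cover of a triangle is a 6-cycle.
print(bipartite_double_cover([(0, 1), (1, 2), (2, 0)]))
```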
Richard A. Brualdi is a professor emeritus of combinatorial mathematics at the University of Wisconsin–Madison.
In mathematical optimization, the criss-cross algorithm is any of a family of algorithms for linear programming. Variants of the criss-cross algorithm also solve more general problems with linear inequality constraints and nonlinear objective functions; there are criss-cross algorithms for linear-fractional programming problems, quadratic-programming problems, and linear complementarity problems.
The Gale–Ryser theorem is a result in graph theory and combinatorial matrix theory, two branches of combinatorics. It provides one of two known approaches to solving the bipartite realization problem, i.e. it gives a necessary and sufficient condition for two finite sequences of natural numbers to be the degree sequence of a labeled simple bipartite graph; a sequence obeying these conditions is called "bigraphic". It is an analog of the Erdős–Gallai theorem for simple graphs. The theorem was published independently in 1957 by H. J. Ryser and David Gale.
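For reference (the standard statement of the condition, added here): with $a_1 \ge a_2 \ge \cdots \ge a_n$, the pair of sequences $(a_i)$ and $(b_j)$ is bigraphic if and only if $\sum_i a_i = \sum_j b_j$ and

```latex
\sum_{i=1}^{k} a_i \;\le\; \sum_{j} \min(b_j, k)
\qquad \text{for every } k \in \{1, \dots, n\}.
```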
In discrete mathematics, the Bregman–Minc inequality, or Bregman's theorem, allows one to estimate the permanent of a binary matrix via its row or column sums. The inequality was conjectured in 1963 by Henryk Minc and first proved in 1973 by Lev M. Bregman. Further entropy-based proofs have been given by Alexander Schrijver and Jaikumar Radhakrishnan. The Bregman–Minc inequality is used, for example, in graph theory to obtain upper bounds for the number of perfect matchings in a bipartite graph.
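In symbols (the standard statement, added here for reference): if $A$ is an $n \times n$ matrix with entries in $\{0, 1\}$ and row sums $r_1, \dots, r_n$, then

```latex
\operatorname{perm}(A) \;\le\; \prod_{i=1}^{n} (r_i!)^{1/r_i}.
```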
In finite geometry, Lam's problem is the problem of determining whether a finite projective plane of order ten exists. The order-ten case is the first theoretically uncertain one, as all smaller orders can be resolved by purely theoretical means. The problem is named after Clement W. H. Lam, who showed via exhaustive computer searches that no projective plane of order ten exists.