A logical matrix, binary matrix, relation matrix, Boolean matrix, or (0, 1)-matrix is a matrix with entries from the Boolean domain B = {0, 1}. Such a matrix can be used to represent a binary relation between a pair of finite sets. It is an important tool in combinatorial mathematics and theoretical computer science.
If R is a binary relation between the finite indexed sets X and Y (so R ⊆ X × Y), then R can be represented by the logical matrix M whose row and column indices index the elements of X and Y, respectively, such that the entries of M are defined by

$$m_{i,j} = \begin{cases} 1 & (x_i, y_j) \in R, \\ 0 & (x_i, y_j) \notin R. \end{cases}$$
In order to designate the row and column numbers of the matrix, the sets X and Y are indexed with positive integers: i ranges from 1 to the cardinality (size) of X, and j ranges from 1 to the cardinality of Y. See the article on indexed sets for more detail.
The binary relation R on the set {1, 2, 3, 4} is defined so that aRb holds if and only if a divides b evenly, with no remainder. For example, 2R4 holds because 2 divides 4 without leaving a remainder, but 3R4 does not hold because when 3 divides 4, there is a remainder of 1. The following set is the set of pairs for which the relation R holds.

$$\{(1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 4), (3, 3), (4, 4)\}$$

The corresponding representation as a logical matrix is

$$\begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$
which includes a diagonal of ones, since each number divides itself.
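As an illustration of the construction (the code and variable names are my own, not from the source), a short Python sketch builds this matrix directly from the divisibility test:

```python
# Build the logical matrix of the relation aRb <=> a divides b
# on the set {1, 2, 3, 4}; entry M[i][j] is 1 exactly when X[i] | X[j].
X = [1, 2, 3, 4]
M = [[1 if b % a == 0 else 0 for b in X] for a in X]

for row in M:
    print(row)
# [1, 1, 1, 1]
# [0, 1, 0, 1]
# [0, 0, 1, 0]
# [0, 0, 0, 1]
```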
The matrix representation of the equality relation on a finite set is the identity matrix I, that is, the matrix whose entries on the diagonal are all 1, while the others are all 0. More generally, if relation R satisfies I ⊆ R, then R is a reflexive relation.
If the Boolean domain is viewed as a semiring, where addition corresponds to logical OR and multiplication to logical AND, the matrix representation of the composition of two relations is equal to the matrix product of the matrix representations of these relations. This product can be computed in expected time $O(n^2)$. [2]
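To make the semiring product concrete, here is a small Python sketch (the function name is mine) of Boolean matrix multiplication, in which the sum is OR and the product is AND:

```python
# Boolean matrix product over the semiring ({0,1}, OR, AND); this
# represents composition of the relations the matrices encode.
def bool_matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "inner dimensions must agree"
    return [[int(any(A[i][k] and B[k][j] for k in range(m)))
             for j in range(p)]
            for i in range(n)]
```

For the divisibility matrix above, bool_matmul(M, M) returns M itself: divisibility is reflexive and transitive, so composing the relation with itself gives the relation back.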
Frequently, operations on binary matrices are defined in terms of modular arithmetic mod 2; that is, the elements are treated as elements of the Galois field GF(2) = ℤ/2ℤ. They arise in a variety of representations and have a number of more restricted special forms. They are applied e.g. in XOR-satisfiability.
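As a sketch of the mod-2 viewpoint (the solver below is my own illustration, not from the source), Gaussian elimination over GF(2) decides an XOR-satisfiability instance Ax = b, with row addition implemented as XOR:

```python
# Solve A x = b over GF(2) by Gaussian elimination; rows are bit lists
# and row addition is component-wise XOR. Returns one solution or None.
def solve_gf2(A, b):
    n_rows, n_cols = len(A), len(A[0])
    M = [row[:] + [bit] for row, bit in zip(A, b)]  # augmented matrix
    pivot_row_of = [-1] * n_cols
    r = 0
    for c in range(n_cols):
        p = next((i for i in range(r, n_rows) if M[i][c]), None)
        if p is None:
            continue                                 # no pivot: free column
        M[r], M[p] = M[p], M[r]
        for i in range(n_rows):
            if i != r and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[r])]
        pivot_row_of[c] = r
        r += 1
    if any(not any(row[:-1]) and row[-1] for row in M):
        return None                                  # row reads 0 = 1
    # free variables set to 0; each pivot variable reads off its row
    return [M[pivot_row_of[c]][-1] if pivot_row_of[c] != -1 else 0
            for c in range(n_cols)]

# x1 XOR x2 = 1, x2 XOR x3 = 1, x1 XOR x3 = 0
print(solve_gf2([[1, 1, 0], [0, 1, 1], [1, 0, 1]], [1, 1, 0]))  # [0, 1, 0]
```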
The number of distinct m-by-n binary matrices is equal to $2^{mn}$, and is thus finite.
Let n and m be given and let U denote the set of all logical m × n matrices. Then U has a partial order given by

$$R \le S \quad \text{when} \quad R_{ij} = 1 \implies S_{ij} = 1 \ \text{for all } i, j.$$

In fact, U forms a Boolean algebra with the operations AND and OR between two matrices applied component-wise. The complement of a logical matrix is obtained by swapping all zeros and ones.

Every logical matrix $A = (A_{ij})$ has a transpose $A^{\mathsf T} = (A_{ji})$. Suppose A is a logical matrix with no columns or rows identically zero. Then the matrix product $AA^{\mathsf T}$, using Boolean arithmetic, contains the m × m identity matrix, and the product $A^{\mathsf T}A$ contains the n × n identity.
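A brief Python sketch (names mine) of these component-wise operations; the De Morgan check at the end illustrates the Boolean-algebra structure:

```python
# Component-wise Boolean-algebra operations on logical matrices.
def meet(A, B):        # component-wise AND
    return [[a & b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def join(A, B):        # component-wise OR
    return [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def complement(A):     # swap all zeros and ones
    return [[1 - a for a in row] for row in A]

def leq(A, B):         # the partial order: A <= B iff A_ij <= B_ij everywhere
    return all(a <= b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

A = [[1, 0], [0, 1]]
B = [[1, 1], [0, 1]]
assert leq(A, B)
assert complement(join(A, B)) == meet(complement(A), complement(B))  # De Morgan
```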
As a mathematical structure, the Boolean algebra U forms a lattice ordered by inclusion; additionally it is a multiplicative lattice due to matrix multiplication.
Every logical matrix in U corresponds to a binary relation. These operations on U, together with the ordering, correspond to a calculus of relations, in which matrix multiplication represents composition of relations. [3]
Structure | Total | Associative | Identity | Cancellation | Commutative |
---|---|---|---|---|---|
Partial magma | Unneeded | Unneeded | Unneeded | Unneeded | Unneeded |
Semigroupoid | Unneeded | Required | Unneeded | Unneeded | Unneeded |
Small category | Unneeded | Required | Required | Unneeded | Unneeded |
Groupoid | Unneeded | Required | Required | Required | Unneeded |
Commutative Groupoid | Unneeded | Required | Required | Required | Required |
Magma | Required | Unneeded | Unneeded | Unneeded | Unneeded |
Commutative magma | Required | Unneeded | Unneeded | Unneeded | Required |
Quasigroup | Required | Unneeded | Unneeded | Required | Unneeded |
Commutative quasigroup | Required | Unneeded | Unneeded | Required | Required |
Unital magma | Required | Unneeded | Required | Unneeded | Unneeded |
Commutative unital magma | Required | Unneeded | Required | Unneeded | Required |
Loop | Required | Unneeded | Required | Required | Unneeded |
Commutative loop | Required | Unneeded | Required | Required | Required |
Semigroup | Required | Required | Unneeded | Unneeded | Unneeded |
Commutative semigroup | Required | Required | Unneeded | Unneeded | Required |
Associative quasigroup | Required | Required | Unneeded | Required | Unneeded |
Commutative-and-associative quasigroup | Required | Required | Unneeded | Required | Required |
Monoid | Required | Required | Required | Unneeded | Unneeded |
Commutative monoid | Required | Required | Required | Unneeded | Required |
Group | Required | Required | Required | Required | Unneeded |
Abelian group | Required | Required | Required | Required | Required |
If m or n equals one, then the m × n logical matrix $(m_{ij})$ is a logical vector or bit string. If m = 1, the vector is a row vector, and if n = 1, it is a column vector. In either case the index equaling 1 is dropped from the notation for the vector.
Suppose $P = (p_i)$ and $Q = (q_j)$ are two logical vectors. The outer product of P and Q results in an m × n rectangular relation

$$m_{ij} = p_i \land q_j.$$
A reordering of the rows and columns of such a matrix can assemble all the ones into a rectangular part of the matrix. [4]
Let h be the vector of all ones. Then if v is an arbitrary logical vector, the relation $R = vh^{\mathsf T}$ has constant rows determined by v. In the calculus of relations such an R is called a vector. [4] A particular instance is the universal relation $hh^{\mathsf T}$.
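A small Python sketch (the representation is mine) of the outer product and of the row-constant relation $R = vh^{\mathsf T}$ described above:

```python
# Outer product of logical vectors: R[i][j] = p[i] AND q[j].
def outer(p, q):
    return [[pi & qj for qj in q] for pi in p]

h = [1, 1, 1, 1]        # the all-ones vector
v = [1, 0, 1]
R = outer(v, h)         # rows of R are constant, determined by v
print(R)                # [[1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1]]
```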
For a given relation R, a maximal rectangular relation contained in R is called a concept in R. Relations may be studied by decomposing them into concepts and then considering the induced concept lattice.
Consider the table of group-like structures, where "unneeded" can be denoted 0 and "required" denoted 1, forming a logical matrix M. To calculate the elements of $MM^{\mathsf T}$, it is necessary to use the logical inner product of pairs of logical vectors in rows of this matrix. If this inner product is 0, then the rows are orthogonal. In fact, small category is orthogonal to quasigroup, and groupoid is orthogonal to magma. Consequently there are zeros in $MM^{\mathsf T}$, and it fails to be a universal relation.
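The orthogonality claims can be checked mechanically; in this Python sketch the row encodings (Required = 1, Unneeded = 0, columns in table order) are mine:

```python
# Rows of the group-like-structure matrix M, as bit vectors over the
# columns (Total, Associative, Identity, Cancellation, Commutative).
rows = {
    "small category": [0, 1, 1, 0, 0],
    "quasigroup":     [1, 0, 0, 1, 0],
    "groupoid":       [0, 1, 1, 1, 0],
    "magma":          [1, 0, 0, 0, 0],
}

def inner(u, v):        # logical inner product: OR of component-wise ANDs
    return int(any(a & b for a, b in zip(u, v)))

print(inner(rows["small category"], rows["quasigroup"]))  # 0: orthogonal
print(inner(rows["groupoid"], rows["magma"]))             # 0: orthogonal
```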
Adding up all the ones in a logical matrix may be accomplished in two ways: first summing the rows or first summing the columns. When the row sums are added, the total is the same as when the column sums are added. In incidence geometry, the matrix is interpreted as an incidence matrix with the rows corresponding to "points" and the columns to "blocks" (generalizing lines made of points). A row sum is called the point degree of the corresponding point, and a column sum the block degree. The sum of point degrees equals the sum of block degrees. [5]
An early problem in the area was "to find necessary and sufficient conditions for the existence of an incidence structure with given point degrees and block degrees; or in matrix language, for the existence of a (0, 1)-matrix of type v × b with given row and column sums". [5] This problem is solved by the Gale–Ryser theorem.
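A compact Python sketch of the Gale–Ryser test (the function name is mine): row sums r, sorted in nonincreasing order, and column sums c are realizable by a (0, 1)-matrix exactly when the totals agree and each partial-sum condition holds:

```python
# Gale-Ryser condition: does a (0,1)-matrix with row sums r and
# column sums c exist? Requires sum(r) == sum(c) and, for every k,
# r_1 + ... + r_k <= sum_j min(c_j, k), with r sorted descending.
def gale_ryser(r, c):
    r = sorted(r, reverse=True)
    if sum(r) != sum(c):
        return False
    return all(sum(r[:k]) <= sum(min(cj, k) for cj in c)
               for k in range(1, len(r) + 1))

print(gale_ryser([2, 1, 1], [2, 1, 1]))  # True
print(gale_ryser([3, 1], [2, 2]))        # False: a row cannot sum to 3 with 2 columns
```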
In mathematics, the determinant is a scalar-valued function of the entries of a square matrix. The determinant of a matrix A is commonly denoted det(A), det A, or |A|. Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism.
In linear algebra, the identity matrix of size n is the n × n square matrix with ones on the main diagonal and zeros elsewhere. It has unique properties, for example when the identity matrix represents a geometric transformation, the object remains unchanged by the transformation. In other contexts, it is analogous to multiplying by the number 1.
In linear algebra, the outer product of two coordinate vectors is the matrix whose entries are all products of an element in the first vector with an element in the second vector. If the two coordinate vectors have dimensions n and m, then their outer product is an n × m matrix. More generally, given two tensors, their outer product is a tensor. The outer product of tensors is also referred to as their tensor product, and can be used to define the tensor algebra.
In mathematics, specifically in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB.
In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied.
In linear algebra, the transpose of a matrix is an operator which flips a matrix over its diagonal; that is, it switches the row and column indices of the matrix A by producing another matrix, often denoted by AT.
In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is $\begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix}$, while an example of a 3×3 diagonal matrix is $\begin{pmatrix} 6 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 4 \end{pmatrix}$. An identity matrix of any size, or any multiple of it, is a diagonal matrix called a scalar matrix, for example $\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$. In geometry, a diagonal matrix may be used as a scaling matrix, since matrix multiplication with it results in changing scale (size) and possibly also shape; only a scalar matrix results in uniform change in scale.
In linear algebra, the permanent of a square matrix is a function of the matrix similar to the determinant. The permanent, as well as the determinant, is a polynomial in the entries of the matrix. Both are special cases of a more general function of a matrix called the immanant.
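For a concrete (if exponential-time) illustration of the definition, a naive Python sketch sums the unsigned products over all permutations:

```python
# Naive permanent: like the Leibniz determinant formula but unsigned.
from itertools import permutations
from math import prod

def permanent(A):
    n = len(A)
    return sum(prod(A[i][sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

# For a (0,1)-matrix this counts perfect matchings of the bipartite
# graph with biadjacency matrix A.
print(permanent([[1, 1], [1, 1]]))  # 2
```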
In linear algebra, an invertible matrix is a square matrix which has an inverse. In other words, if some other matrix is multiplied by the invertible matrix, the result can be multiplied by an inverse to undo the operation. Invertible matrices are the same size as their inverse.
In mathematics, an incidence matrix is a logical matrix that shows the relationship between two classes of objects, usually called an incidence relation. If the first class is X and the second is Y, the matrix has one row for each element of X and one column for each element of Y. The entry in row x and column y is 1 if x and y are related and 0 if they are not. There are variations of this definition.
In abstract algebra, a matrix ring is a set of matrices with entries in a ring R that form a ring under matrix addition and matrix multiplication. The set of all n × n matrices with entries in R is a matrix ring denoted Mn(R) (alternative notations: Matn(R) and Rn×n). Some sets of infinite matrices form infinite matrix rings. A subring of a matrix ring is again a matrix ring. Over a rng, one can form matrix rngs.
In mathematics, a unimodular matrix M is a square integer matrix having determinant +1 or −1. Equivalently, it is an integer matrix that is invertible over the integers: there is an integer matrix N that is its inverse. Thus every equation Mx = b, where M and b both have integer components and M is unimodular, has an integer solution. The n × n unimodular matrices form a group called the n × n general linear group over $\mathbb{Z}$, which is denoted $\mathrm{GL}_n(\mathbb{Z})$.
In mathematics, especially in probability and combinatorics, a doubly stochastic matrix (also called bistochastic matrix) is a square matrix $X = (x_{ij})$ of nonnegative real numbers, each of whose rows and columns sums to 1, i.e., $\sum_i x_{ij} = \sum_j x_{ij} = 1$.
In mathematics, especially in linear algebra and matrix theory, the vectorization of a matrix is a linear transformation which converts the matrix into a vector. Specifically, the vectorization of an m × n matrix A, denoted vec(A), is the mn × 1 column vector obtained by stacking the columns of the matrix A on top of one another:

$$\operatorname{vec}(A) = [a_{11}, \ldots, a_{m1}, a_{12}, \ldots, a_{m2}, \ldots, a_{1n}, \ldots, a_{mn}]^{\mathsf T}.$$

Here, $a_{ij}$ represents the element in the i-th row and j-th column of A, and the superscript $\mathsf T$ denotes the transpose. Vectorization expresses, through coordinates, the isomorphism between these (i.e., of matrices and vectors) as vector spaces.
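A one-function Python sketch of vec(A), using plain nested lists and returning a flat list in column-major order:

```python
# vec(A): stack the columns of the m x n matrix A into one vector.
def vec(A):
    m, n = len(A), len(A[0])
    return [A[i][j] for j in range(n) for i in range(m)]

print(vec([[1, 2], [3, 4]]))  # [1, 3, 2, 4]
```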
In the mathematics of binary relations, the composition of relations is the forming of a new binary relation R ; S from two given binary relations R and S. In the calculus of relations, the composition of relations is called relative multiplication, and its result is called a relative product. Function composition is the special case of composition of relations where all relations involved are functions.
In quantum computing, quantum finite automata (QFA) or quantum state machines are a quantum analog of probabilistic automata or a Markov decision process. They provide a mathematical abstraction of real-world quantum computers. Several types of automata may be defined, including measure-once and measure-many automata. Quantum finite automata can also be understood as the quantization of subshifts of finite type, or as a quantization of Markov chains. QFAs are, in turn, special cases of geometric finite automata or topological finite automata.
In mathematics, a matrix is a rectangular array or table of numbers, symbols, or expressions, with elements or entries arranged in rows and columns, which is used to represent a mathematical object or property of such an object.
In linear algebra, the computation of the permanent of a matrix is a problem that is thought to be more difficult than the computation of the determinant of a matrix despite the apparent similarity of the definitions.
The Gale–Ryser theorem is a result in graph theory and combinatorial matrix theory, two branches of combinatorics. It provides one of two known approaches to solving the bipartite realization problem, i.e. it gives a necessary and sufficient condition for two finite sequences of natural numbers to be the degree sequence of a labeled simple bipartite graph; a sequence obeying these conditions is called "bigraphic". It is an analog of the Erdős–Gallai theorem for simple graphs. The theorem was published independently in 1957 by H. J. Ryser and David Gale.
In discrete mathematics, the Bregman–Minc inequality, or Bregman's theorem, allows one to estimate the permanent of a binary matrix via its row or column sums. The inequality was conjectured in 1963 by Henryk Minc and first proved in 1973 by Lev M. Bregman. Further entropy-based proofs have been given by Alexander Schrijver and Jaikumar Radhakrishnan. The Bregman–Minc inequality is used, for example, in graph theory to obtain upper bounds for the number of perfect matchings in a bipartite graph.