Balanced matrix

In mathematics, a balanced matrix is a 0-1 matrix (a matrix where every entry is either zero or one) that does not contain any square submatrix of odd order having all row sums and all column sums equal to 2.

Balanced matrices are studied in linear programming. The importance of balanced matrices comes from the fact that a linear programming problem whose matrix of coefficients is balanced and whose right-hand side or objective vector is an all-one vector has an integral optimal solution. [1][2] In particular, if one searches for an integral solution to a linear program of this kind, it is not necessary to solve an integer linear program explicitly; it suffices to find an optimal vertex solution of the linear program itself.
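As an illustration (a sketch not taken from the cited sources; it assumes SciPy's linprog is available), one can solve a small set-covering linear program whose coefficient matrix is balanced and observe that the optimal vertex solution returned by the solver is already integral:

```python
import numpy as np
from scipy.optimize import linprog

# Coefficient matrix: the vertex-edge incidence matrix of a cycle of order 4.
# It is totally unimodular, hence balanced (see below).
A = np.array([[1, 0, 0, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
c = np.array([3.0, 2.0, 2.0, 1.0])   # arbitrary nonnegative costs

# Set-covering LP: minimise c^T x  subject to  A x >= 1,  x >= 0.
# linprog expects "<=" constraints, so we pass -A x <= -1.
res = linprog(c, A_ub=-A, b_ub=-np.ones(4), bounds=(0, None), method="highs")

print(res.x)  # an optimal vertex solution; for a balanced A it is integral
```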

As an example, the following matrix is a balanced matrix:
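(The matrix shown here is an illustrative stand-in for the original figure: it is the vertex–edge incidence matrix of a cycle of order 4, which is totally unimodular and hence balanced, as explained below.)

$$\begin{pmatrix} 1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix}$$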

Characterization by forbidden submatrices

Equivalently, a 0-1 matrix is balanced if and only if it does not contain a submatrix that is the incidence matrix of any odd cycle (a cycle graph of odd order). [2]

Therefore, the only three by three 0-1 matrix that is not balanced is (up to permutation of the rows and columns) the following incidence matrix of a cycle graph of order 3:
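Writing the vertices as rows and the edges as columns (one labelling among the permutation-equivalent possibilities), this matrix is:

$$\begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}$$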

The following matrix is the incidence matrix of a cycle graph of order 5:
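With the same vertices-as-rows, edges-as-columns convention (again up to permutation of rows and columns):

$$\begin{pmatrix} 1 & 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 \end{pmatrix}$$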

The above characterization implies that any matrix containing either of the matrices above (or the incidence matrix of any other odd cycle) as a submatrix is not balanced.
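For small matrices, the definition can also be checked directly by enumerating all square submatrices of odd order. The following sketch (not taken from the cited sources) does exactly that:

```python
from itertools import combinations
import numpy as np

def is_balanced(A: np.ndarray) -> bool:
    """Brute-force check of the definition: a 0-1 matrix is balanced iff no
    square submatrix of odd order has all row sums and column sums equal to 2.
    Exponential in the matrix size; intended only for small examples."""
    m, n = A.shape
    for k in range(3, min(m, n) + 1, 2):          # odd orders 3, 5, 7, ...
        for rows in combinations(range(m), k):
            sub_rows = A[list(rows), :]
            for cols in combinations(range(n), k):
                S = sub_rows[:, list(cols)]
                if (S.sum(axis=0) == 2).all() and (S.sum(axis=1) == 2).all():
                    return False                  # found a forbidden submatrix
    return True

# The incidence matrix of a cycle of order 3 is not balanced:
C3 = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]])
print(is_balanced(C3))   # False
```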

Connection to other matrix classes

Every balanced matrix is a perfect matrix.

More restrictive than the notion of balanced matrices is the notion of totally balanced matrices. A 0-1 matrix is called totally balanced if it does not contain a square submatrix having no repeated columns and all row sums and all column sums equal to 2. Equivalently, a matrix is totally balanced if and only if it does not contain a submatrix that is the incidence matrix of any cycle (no matter if of odd or even order). This characterization immediately implies that any totally balanced matrix is balanced. [3]

Moreover, any 0-1 matrix that is totally unimodular is also balanced. The following matrix is a balanced matrix as it does not contain any submatrix that is the incidence matrix of an odd cycle:
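One such matrix (a standard example, consistent with the determinant quoted below) is:

$$\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}$$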

Since this matrix is not totally unimodular (its determinant is -2), 0-1 totally unimodular matrices are a proper subset of balanced matrices. [2]

Balanced matrices arise, for example, as the coefficient matrix in special cases of the set partitioning problem. [4]
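In its standard form (stated here for context, not quoted verbatim from [4]), the set partitioning problem asks for

$$\min\{\, c^{\top} x \;:\; Ax = \mathbf{1},\; x \in \{0,1\}^{n} \,\}$$

where A is a 0-1 matrix whose rows correspond to items to be covered exactly once and whose columns correspond to admissible subsets. When A is balanced, the linear programming relaxation of this problem already has an integral optimal vertex, in line with the integrality property described above.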

An alternative method of identifying some balanced matrices is through the subsequence count, where the subsequence count SC(s) of any row s of an m × n matrix A is

$$\mathrm{SC}(s) = \bigl|\{\, t \;:\; a_{sj} = 1,\ a_{ij} = 0 \text{ for } s < i < t,\ a_{tj} = 1,\ j = 1, \ldots, n \,\}\bigr|$$

If a 0-1 matrix A has SC(s) ≤ 1 for all rows s = 1, ..., m, then A has a unique subsequence, is totally unimodular [4] and therefore also balanced. Note that this condition is sufficient but not necessary for A to be balanced. In other words, the 0-1 matrices with SC(s) ≤ 1 for all rows s = 1, ..., m are a proper subset of the set of balanced matrices.
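A direct way to read this definition is column by column: for each column in which row s has a 1, look for the next row below s that also has a 1 in that column. The following sketch (one interpretation of the definition above, counting one contribution per such column; not taken from [4]) computes SC(s) for every row:

```python
import numpy as np

def subsequence_counts(A: np.ndarray) -> list[int]:
    """For each row s, count the columns j with a_sj = 1 that contain another 1
    strictly below row s; the nearest such row t is the one required by the
    definition, since all intermediate entries in that column are 0."""
    m, n = A.shape
    counts = []
    for s in range(m):
        sc = 0
        for j in range(n):
            if A[s, j] == 1 and A[s + 1:, j].any():
                sc += 1
        counts.append(sc)
    return counts

A = np.array([[1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])
print(subsequence_counts(A))   # [1, 1, 0] -> SC(s) <= 1 for every row
```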

As a more general notion, a matrix where every entry is either 0, 1 or -1 is called balanced if in every submatrix with two nonzero entries per row and column, the sum of the entries is a multiple of 4. [5]

References

  1. Berge, C. (1972). "Balanced matrices". Mathematical Programming. 2: 19–31. doi:10.1007/BF01584535. S2CID 41468611.
  2. Alexander Schrijver (1998). Theory of Linear and Integer Programming. John Wiley & Sons. pp. 303–308. ISBN 978-0-471-98232-6.
  3. Hoffman, A.J.; Kolen, A.W.J.; Sakarovitch, M. (1982). "Totally-Balanced and Greedy Matrices". SIAM Journal on Algebraic and Discrete Methods. 6 (4): 720–731. doi:10.1137/0606070.
  4. Ryan, D. M.; Falkner, J. C. (1988). "On the integer properties of scheduling set partitioning models". European Journal of Operational Research. 35 (3): 442–456. doi:10.1016/0377-2217(88)90233-0.
  5. Conforti, Michele; Cornuéjols, Gérard; Vušković, Kristina (2006). "Balanced matrices" (PDF). Discrete Mathematics. 306 (19–20): 2411. doi:10.1016/j.disc.2005.12.033. A retrospective and tutorial.