A Bohemian matrix family [1] is a set of matrices whose entries are members of a fixed, finite, discrete set, referred to as the "population". The term "Bohemian" was originally applied to matrices with integer entries of bounded height, and derives from the mnemonic BOunded HEight Matrix of Integers (BOHEMI). [2] Most published research on these matrix families studies integer populations, although the definition does not require integer entries. There is no single family of Bohemian matrices; rather, a matrix is Bohemian with respect to the set from which its entries are drawn. Bohemian matrices may possess additional structure: for example, they may be Toeplitz matrices or upper Hessenberg matrices.
Bohemian matrices are used in software testing, particularly for linear algebra software. Their entries are typically representable exactly on computers, [3] and matrices with extreme behavior can be located by exhaustive search (for small dimensions), random sampling, or optimization. Steven E. Thornton used these ideas to build a tool that solved over two trillion eigenvalue problems, revealing convergence failures in some popular software systems. [4]
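As a minimal sketch (not the cited tool itself) of what exhaustive search over a small Bohemian family looks like, the following enumerates every 2×2 matrix with entries drawn from the population {-1, 0, 1} and collects the distinct determinant values that occur:

```python
import itertools

# Enumerate all 3^4 = 81 matrices [[a, b], [c, d]] with entries in the
# population {-1, 0, 1} and record every determinant value that occurs.
population = [-1, 0, 1]
dets = set()
for a, b, c, d in itertools.product(population, repeat=4):
    dets.add(a * d - b * c)

print(sorted(dets))  # [-2, -1, 0, 1, 2]
```

The same pattern — enumerate, compute a matrix property, aggregate — scales to eigenvalues, characteristic polynomials, or ranks, until the family grows too large to enumerate.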
The anymatrix toolbox is an extensible MATLAB matrix tool that provides a set of sorted Bohemian matrices and utilities for property-based queries of the set. [5]
In a presentation at the 2018 Bohemian Matrices and Applications Workshop, Nick Higham (co-author of the anymatrix toolbox) discussed how he used genetic algorithms on Bohemian matrices with population P={-1, 0, 1} to refine lower bounds on the maximal growth factor for rook pivoting. [6]
Bohemian matrices can be studied through random sampling, a process that intersects with the field of random matrices. However, the study of random matrices has predominantly focused on real symmetric or Hermitian matrices, or matrices with entries drawn from a continuous distribution, such as Gaussian ensembles. Notable exceptions to this focus include the work of Terence Tao and Van Vu. [7] [8]
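A hedged sketch of random sampling from a Bohemian family: draw many random 5×5 matrices with entries from {-1, 0, 1} and compute all their eigenvalues in one batched call. Scatter-plotting such eigenvalues in the complex plane reveals the intricate patterns studied in this field.

```python
import numpy as np

# Sample 1000 random 5x5 matrices over the population {-1, 0, 1}
# and compute one spectrum per sample (eigvals supports stacked
# arrays of shape (..., n, n)).
rng = np.random.default_rng(0)
population = np.array([-1, 0, 1])
samples = rng.choice(population, size=(1000, 5, 5))
eigs = np.linalg.eigvals(samples)  # shape (1000, 5), complex in general

print(eigs.shape)  # (1000, 5)
```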
The term Bernoulli matrices is sometimes used for matrices with entries constrained to ±1, [9] making them a class of Bohemian matrices. A Hadamard matrix is a Bernoulli matrix satisfying an additional property, namely that its determinant is maximal among Bernoulli matrices of its dimension. Hadamard matrices (and Bernoulli matrices) have been studied for far longer than the term "Bohemian matrix" has existed. Questions posed about Hadamard matrices, such as those concerning maximal determinants, can also be asked of other Bohemian families. One generalization, the Butson-type Hadamard matrices, whose entries are qth roots of unity for q > 2, can also be considered prototypical Bohemian matrices.
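As an illustrative (toy-sized) check of the maximal-determinant property, the following exhaustively verifies that the largest |det| over all 2×2 Bernoulli matrices is 2, attained exactly by the 2×2 Hadamard matrices such as [[1, 1], [1, -1]]:

```python
import itertools

# Scan all 2^4 = 16 matrices [[a, b], [c, d]] with entries +/-1 and
# find the maximal absolute determinant.
best = max(abs(a * d - b * c)
           for a, b, c, d in itertools.product([-1, 1], repeat=4))

print(best)  # 2
```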
Matrices with discrete entries, particularly incidence matrices, play a crucial role in understanding graph theory. The results from graph theory research can elucidate phenomena observed in Bohemian matrix experiments. Conversely, experiments conducted using Bohemian matrices can provide valuable insights into graph-related problems. [10]
Several open problems listed in the On-Line Encyclopedia of Integer Sequences (OEIS) concerning Bohemian matrices are combinatorial in nature. For instance, A306782 tabulates the number of distinct minimal polynomials of Bernoulli matrices (Bohemian matrices with entries ±1) up to dimension 5; the numbers for higher dimensions remain unknown. The number of Bernoulli matrices of dimension 6 is 2^36 = 68,719,476,736; while this set could be exhaustively searched (the task is delightfully parallel), the faster-than-exponential growth in the number of matrices quickly puts larger dimensions beyond the reach of computation. There are symmetries that might be taken advantage of, as is done [10] for zero-one matrices, but exploiting them requires sophisticated combinatorics.
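The OEIS entry above counts minimal polynomials; a computation of the same flavor, small enough to verify by hand, counts distinct characteristic polynomials of 2×2 Bernoulli matrices:

```python
import itertools

# For [[a, b], [c, d]] the characteristic polynomial is
# x^2 - (a + d)x + (ad - bc), so it is determined by the pair
# (trace, determinant); counting distinct pairs counts polynomials.
polys = {(a + d, a * d - b * c)
         for a, b, c, d in itertools.product([-1, 1], repeat=4)}

print(len(polys))  # 6
```

For dimension 2 there are only 16 matrices, so the count (6) is easy to confirm; the combinatorial explosion described above is why the same question is open for large dimensions.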
Many number theorists have studied polynomials with restricted coefficients. For instance, Littlewood polynomials have coefficients ±1 in the monomial basis. Researchers such as Kurt Mahler, [11] Andrew Odlyzko, Bjorn Poonen [12] and Peter Borwein have contributed to this field. By using companion matrices, these polynomial problems with restricted coefficients can be framed as Bohemian matrix problems. However, the characteristic polynomial of a Bohemian matrix may have coefficients that are exponentially large in the matrix dimension, so the reverse transformation is not always applicable. [13] [14]
Connections to Magic Squares are explored in Kathleen Ollerenshaw's book with D. Brée. [15] Furthermore, Bohemian matrices are explicitly connected to quadratic forms in certain papers. [16]
To find the roots of a polynomial, one can construct a corresponding companion matrix and solve for its eigenvalues. These eigenvalues correspond to the roots of the original polynomial. This method is commonly used in NumPy's polynomial package and is generally numerically stable, [17] though it may occasionally struggle with polynomials that have large coefficients or are ill-conditioned.
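The companion-matrix method can be sketched concretely. The following (an illustration, not NumPy's internal code) builds the companion matrix of p(x) = x² - 3x + 2 and recovers its roots, 1 and 2, as eigenvalues; numpy.roots applies the same idea:

```python
import numpy as np

# Companion matrix of the monic polynomial p(x) = x^2 + c1*x + c0,
# here p(x) = x^2 - 3x + 2 with roots 1 and 2.
c0, c1 = 2.0, -3.0
C = np.array([[0.0, -c0],
              [1.0, -c1]])
roots = np.sort(np.linalg.eigvals(C).real)

print(roots)  # [1. 2.]
```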
Improving this situation involves finding a minimal height companion matrix for the polynomial within a Bohemian matrix family. [18] However, no efficient general-purpose techniques are currently known for this approach.
The term "Bohemian matrices" and the idea of categorizing problems in this manner first appeared in a 2016 ISSAC publication. [19] The name originated from the mnemonic BOunded HEight Matrix of Integers (BOHEMI), although the classification has since been expanded to include other discrete populations, [20] such as the Gaussian integers. The utility and scope of this categorization are becoming increasingly recognized, with the first substantial journal publication [21] following smaller earlier ones. As of March 2022, several further publications explicitly use the term "Bohemian matrices", in addition to those already cited in this article. [22] [23] [24]
The inaugural workshop on Bohemian matrices, titled "Bohemian Matrices and Applications", was held in 2018 at the University of Manchester. The concept is akin to the kind of specialization suggested by George Pólya, exemplified by the Littlewood polynomials.
This concept shares similarities with sign pattern matrices, where two matrices with real entries are deemed equivalent if corresponding entries have the same sign. [25] A Bohemian matrix with population P = {-1, 0, 1} is an example of a sign pattern matrix, though it may also be studied for properties specific to its Bohemian structure, such as its eigenvalues or characteristic polynomial.
In mathematics, the determinant is a scalar value that is a certain function of the entries of a square matrix. The determinant of a matrix A is commonly denoted det(A), det A, or |A|. Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism.
In mathematics, Gaussian elimination, also known as row reduction, is an algorithm for solving systems of linear equations. It consists of a sequence of row-wise operations performed on the corresponding matrix of coefficients. This method can also be used to compute the rank of a matrix, the determinant of a square matrix, and the inverse of an invertible matrix. The method is named after Carl Friedrich Gauss (1777–1855). To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations: swapping two rows, multiplying a row by a nonzero scalar, and adding a multiple of one row to another row.
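A minimal, unpivoted sketch of the elimination-and-back-substitution process follows (the example system is assumed for illustration; production code should use numpy.linalg.solve, which applies a pivoted LU factorization for stability):

```python
import numpy as np

# Solve A x = b by forward elimination followed by back substitution.
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

M = np.hstack([A, b[:, None]])          # augmented matrix [A | b]
n = len(b)
for k in range(n):                       # zero out entries below pivot k
    for i in range(k + 1, n):
        M[i] -= (M[i, k] / M[k, k]) * M[k]

x = np.zeros(n)                          # back substitution
for i in reversed(range(n)):
    x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]

print(x)  # [ 2.  3. -1.]
```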
In mathematics, the special linear group SL(n, R) of degree n over a commutative ring R is the set of n × n matrices with determinant 1, with the group operations of ordinary matrix multiplication and matrix inversion. This is the normal subgroup of the general linear group given by the kernel of the determinant map.
In linear algebra, the Cayley–Hamilton theorem states that every square matrix over a commutative ring satisfies its own characteristic equation.
In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is diag(3, 2), while an example of a 3×3 diagonal matrix is diag(6, 5, 4). An identity matrix of any size, or any multiple of it, is a diagonal matrix called a scalar matrix; for example, 5I. In geometry, a diagonal matrix may be used as a scaling matrix, since matrix multiplication with it results in changing scale (size) and possibly also shape; only a scalar matrix results in a uniform change of scale.
In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of a finite-dimensional vector space is the characteristic polynomial of the matrix of that endomorphism over any base. The characteristic equation, also known as the determinantal equation, is the equation obtained by equating the characteristic polynomial to zero.
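As a small illustration of the trace and determinant appearing among the coefficients: for A = [[1, 2], [3, 4]] the characteristic polynomial is x² - 5x - 2, and numpy.poly recovers its coefficients from the eigenvalues of A:

```python
import numpy as np

# Characteristic polynomial coefficients of A (highest degree first):
# [1, -trace(A), det(A)] = [1, -5, -2] for this 2x2 example.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
coeffs = np.poly(A)

print(coeffs)  # approximately [ 1. -5. -2.]
```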
In linear algebra, a Jordan normal form, also known as a Jordan canonical form, is an upper triangular matrix of a particular form called a Jordan matrix representing a linear operator on a finite-dimensional vector space with respect to some basis. Such a matrix has each non-zero off-diagonal entry equal to 1, immediately above the main diagonal, and with identical diagonal entries to the left and below them.
In mathematics, a Hadamard matrix, named after the French mathematician Jacques Hadamard, is a square matrix whose entries are either +1 or −1 and whose rows are mutually orthogonal. In geometric terms, this means that each pair of rows in a Hadamard matrix represents two perpendicular vectors, while in combinatorial terms, it means that each pair of rows has matching entries in exactly half of their columns and mismatched entries in the remaining columns. It is a consequence of this definition that the corresponding properties hold for columns as well as rows.
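The defining orthogonality property can be checked directly. This sketch builds a 4×4 Hadamard matrix by the Sylvester doubling construction and verifies that H Hᵀ = 4I, i.e. that the rows are mutually orthogonal:

```python
import numpy as np

# Sylvester construction: H_{2n} = kron(H_2, H_n) with H_2 = [[1,1],[1,-1]].
H2 = np.array([[1, 1],
               [1, -1]])
H4 = np.kron(H2, H2)  # 4x4 Hadamard matrix with entries +/-1

print(np.array_equal(H4 @ H4.T, 4 * np.eye(4, dtype=int)))  # True
```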
In numerical analysis, one of the most important problems is designing efficient and stable algorithms for finding the eigenvalues of a matrix. These eigenvalue algorithms may also find eigenvectors.
In linear algebra, a Hessenberg matrix is a special kind of square matrix, one that is "almost" triangular. To be exact, an upper Hessenberg matrix has zero entries below the first subdiagonal, and a lower Hessenberg matrix has zero entries above the first superdiagonal. They are named after Karl Hessenberg.
In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal (lower diagonal), and the superdiagonal (upper diagonal). For example, the following matrix is tridiagonal:

[1 4 0 0]
[3 4 1 0]
[0 2 3 4]
[0 0 1 3]
In linear algebra, it is often important to know which vectors have their directions unchanged by a given linear transformation. An eigenvector or characteristic vector is such a vector. More precisely, an eigenvector v of a linear transformation T is scaled by a constant factor λ when the linear transformation is applied to it: T(v) = λv. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor λ.
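A concrete instance of the definition: the vector v = [1, 1] keeps its direction under A = [[2, 1], [1, 2]], since A v = 3 v, making v an eigenvector with eigenvalue 3.

```python
import numpy as np

# Applying A to v only scales v by the eigenvalue 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])

print(A @ v)  # [3. 3.], i.e. 3 * v
```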
In mathematics, an integer matrix is a matrix whose entries are all integers. Examples include binary matrices, the zero matrix, the matrix of ones, the identity matrix, and the adjacency matrices used in graph theory, amongst many others. Integer matrices find frequent application in combinatorics.
In mathematics, a matrix is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or property of such an object.
A Jacobi operator, also known as Jacobi matrix, is a symmetric linear operator acting on sequences which is given by an infinite tridiagonal matrix. It is commonly used to specify systems of orthonormal polynomials over a finite, positive Borel measure. This operator is named after Carl Gustav Jacob Jacobi.
Joel Lee Brenner was an American mathematician who specialized in matrix theory, linear algebra, and group theory. He is known as the translator of several popular Russian texts. He was a teaching professor at some dozen colleges and universities and was a Senior Mathematician at Stanford Research Institute from 1956 to 1968. He published over one hundred scholarly papers, 35 with coauthors, and wrote book reviews.
In algebra, linear equations and systems of linear equations over a field are widely studied. "Over a field" means that the coefficients of the equations and the solutions that one is looking for belong to a given field, commonly the real or the complex numbers. This article is devoted to the same problems where "field" is replaced by "commutative ring", or, typically "Noetherian integral domain".
In mathematics, particularly in linear algebra and applications, matrix analysis is the study of matrices and their algebraic properties. Some particular topics out of many include: operations defined on matrices, functions of matrices, and the eigenvalues of matrices.
In mathematics, Fischer's inequality gives an upper bound for the determinant of a positive-semidefinite matrix whose entries are complex numbers in terms of the determinants of its principal diagonal blocks. Suppose A, C are respectively p×p, q×q positive-semidefinite complex matrices and B is a p×q complex matrix. Let M be the (p+q)×(p+q) block matrix with first block row (A, B) and second block row (B*, C); then Fischer's inequality states that det(M) ≤ det(A) det(C).
The Wilson matrix is the following 4×4 matrix with integer entries, a classic example of an ill-conditioned matrix:

[ 5  7  6  5]
[ 7 10  8  7]
[ 6  8 10  9]
[ 5  7  9 10]