Bohemian matrices

A Bohemian matrix family [1] is a set of matrices whose entries are drawn from a fixed, finite, discrete set, called the population. The term was first used to refer to matrices whose entries are integers of bounded height, hence the name, derived from the mnemonic BOunded HEight Matrix of Integers (BOHEMI). [2] Since then, other populations have been studied. There is no single family of Bohemian matrices. Instead, a matrix is said to be Bohemian with respect to the set from which its entries are drawn. Bohemian matrices may possess additional structure; for example, they may be Toeplitz matrices or upper Hessenberg matrices.


Figure: A density plot of all eigenvalues of Bohemian upper Hessenberg matrices with zero diagonal, of all dimensions up to 15×15, with population the cube roots of −1.

Applications

Software testing

One of the applications of Bohemian matrices is in testing linear algebra software. Bohemian matrices can typically be represented exactly on computers, [3] and matrices with extreme behavior can be identified either by exhaustive search (for small dimensions), random sampling, or optimization techniques.
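The exhaustive-search approach can be sketched in a few lines of Python (NumPy assumed; the population and dimension are chosen purely for illustration). Here the "extreme behavior" sought is the largest absolute determinant over all 3×3 matrices with entries in {-1, 0, 1}:

```python
import itertools
import numpy as np

# Exhaustive search over a small Bohemian family: every 3x3 matrix with
# entries from the population P = {-1, 0, 1} (3**9 = 19683 matrices).
P = (-1, 0, 1)
best = 0
for entries in itertools.product(P, repeat=9):
    A = np.array(entries, dtype=float).reshape(3, 3)
    # determinants of these matrices are integers, so rounding is exact
    best = max(best, abs(round(np.linalg.det(A))))

print(best)  # maximal |det| over the whole family: 4
```

The same loop structure supports any per-matrix property (eigenvalue extremes, growth factors, and so on); only the inner computation changes.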

Steven E. Thornton used this concept to develop a tool that solved over two trillion eigenvalue problems. In doing so, he uncovered instances of convergence failure in some popular software systems. [4]

The extensible MATLAB matrix collection anymatrix includes a group of Bohemian matrices.

Improved bounds

In a talk given at the 2018 Bohemian Matrices and Applications Workshop, Nick Higham explained how he utilized genetic algorithms on Bohemian matrices with population P={-1, 0, 1} to sharpen lower bounds on the maximal growth factor for rook pivoting. [5]

Connections to other fields

Random matrices

Bohemian matrices can be studied through random sampling. Such studies might be considered part of the field of random matrices. However, the study of random matrices to date has mostly focused on real symmetric or Hermitian matrices, or on matrices whose entries are drawn from a continuous distribution (such as Gaussian ensembles). Notable exceptions include the work of Terence Tao and Van Vu. [6] [7]

Bernoulli and Hadamard matrices

Matrices with entries only from ±1 are sometimes called Bernoulli matrices [8] and are therefore examples of Bohemian matrices. A Hadamard matrix is a Bernoulli matrix that satisfies a further property, namely that its determinant is maximal. Therefore, Hadamard matrices are also Bohemian. Hadamard matrices (and indeed Bernoulli matrices) have been studied for much longer than the name "Bohemian matrix" has existed, but the kinds of questions one can ask about Hadamard matrices (the original question was which matrices have maximal determinant) can also be asked about other Bohemian matrices. One of the first generalizations of Hadamard matrices was to what are sometimes called Butson-type Hadamard matrices, whose entries are qth roots of unity for q > 2. These can also be considered prototypical Bohemian matrices.
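A minimal illustration in Python/NumPy (the helper name `sylvester` is ours): Sylvester's doubling construction produces Bernoulli matrices whose rows are mutually orthogonal and whose determinant attains Hadamard's bound n^(n/2):

```python
import numpy as np

# Sylvester's doubling construction: if H is Hadamard, so is
# [[H, H], [H, -H]].  Starting from [1] gives orders 1, 2, 4, 8, ...
def sylvester(k):
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester(2)          # a 4x4 matrix with entries +-1
n = H.shape[0]
# Rows are mutually orthogonal: H H^T = n I
assert np.array_equal(H @ H.T, n * np.eye(n, dtype=int))
# |det H| attains Hadamard's bound n^(n/2) = 4^2 = 16
print(abs(round(np.linalg.det(H))))
```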

Graph theory

Matrices with discrete entries, particularly incidence matrices, play a crucial role in understanding graph theory. The outcomes of graph theory aid in explaining certain phenomena encountered in Bohemian matrix experiments. Conversely, experiments conducted in the manner advocated by Bohemian matrix workers can provide useful information for graphs. [9]

Combinatorics

Numerous open problems listed in the On-Line Encyclopedia of Integer Sequences under Bohemian matrices are combinatorial in nature. For instance, A306782 lists a table of the number of distinct minimal polynomials for Bernoulli matrices (that is, Bohemian matrices with population ±1) up to and including dimension 5. The numbers for higher dimensions are not known. The number of Bernoulli matrices of dimension 6 is 2^36 = 68,719,476,736; while this set could likely be searched exhaustively (it is delightfully parallel), the greater-than-exponential growth of the number of matrices soon defeats brute force. There are symmetries that might be taken advantage of, as is done [9] for zero-one matrices, but these require sophisticated combinatorics knowledge.
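The brute-force flavor of these counts can be sketched in Python (NumPy assumed; for simplicity we count distinct characteristic polynomials rather than the minimal polynomials of A306782, and only for a dimension small enough to enumerate):

```python
import itertools
import numpy as np

# Count distinct characteristic polynomials over all 2**(n*n) Bernoulli
# matrices (population {-1, +1}) of dimension n, by exhaustive search.
# The super-exponential growth in n is what defeats brute force.
def distinct_charpolys(n):
    seen = set()
    for entries in itertools.product((-1, 1), repeat=n * n):
        A = np.array(entries, dtype=float).reshape(n, n)
        # np.poly returns the characteristic polynomial's coefficients,
        # which are integers here, so rounding recovers them exactly
        seen.add(tuple(int(round(c)) for c in np.poly(A)))
    return len(seen)

print(distinct_charpolys(2))  # 16 matrices, but only 6 distinct polynomials
```

Already at n = 2 the count (6) is far below the number of matrices (16), which is why the distinct-polynomial sequences are interesting in their own right.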

Number theory

Number theorists have worked on polynomials with restricted coefficients over the years. For instance, Littlewood polynomials have coefficients ±1 when expressed in the monomial basis. Kurt Mahler was concerned with the problem, [10] as were Andrew Odlyzko, Bjorn Poonen [11] and Peter Borwein. [12]

By using companion matrices, each of these restricted-coefficient polynomial problems can be considered to be a Bohemian matrix problem. However, the characteristic polynomial of a Bohemian matrix might have coefficients that are exponentially large in the dimension, so the converse is not true and these areas of study are not equivalent. [13]
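As a sketch of this correspondence in Python/NumPy (the `companion` helper is ours, using the standard last-column convention): the companion matrix of a Littlewood polynomial is itself Bohemian with population {-1, 0, 1}, and its eigenvalues are the polynomial's roots:

```python
import numpy as np

# Companion matrix of the monic polynomial x^n + c[n-1]x^(n-1) + ... + c[0]:
# ones on the subdiagonal, -c in the last column.  For a Littlewood
# polynomial (coefficients +-1) every entry lies in {-1, 0, 1}.
def companion(c):
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)
    C[:, -1] = -np.asarray(c)
    return C

# Littlewood polynomial x^3 - x^2 + x - 1 = (x - 1)(x^2 + 1)
C = companion([-1, 1, -1])
roots = np.linalg.eigvals(C)          # eigenvalues are the roots: 1, i, -i
# every eigenvalue is a root of the original polynomial
assert np.allclose(np.polyval([1, -1, 1, -1], roots), 0)
```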

Connections to magic squares are explored in Kathleen Ollerenshaw's book with D. Brée. [14] A paper by Higham and Lettington explicitly connects Bohemian matrices to quadratic forms. [15]

Solution of polynomial equations

A commonly used technique to find the roots of a polynomial is to construct its companion matrix and use numerical eigenvalue solvers to find the eigenvalues, which are then the roots of the original polynomial. While seemingly counter-intuitive and wasteful, this technique can be practical. It is the default method of NumPy's Polynomial package. The method is frequently numerically stable, [16] but occasionally does not work well if the polynomial coefficients are large or the polynomial is otherwise ill-conditioned.
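NumPy's polynomial package exposes this method directly: `polyroots` computes the eigenvalues of the companion matrix built by `polycompanion`. A small illustration (the cubic is an arbitrary example of ours):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# numpy.polynomial finds roots as eigenvalues of a companion matrix.
c = [-6.0, 11.0, -6.0, 1.0]      # x^3 - 6x^2 + 11x - 6, lowest degree first
roots = np.sort(P.polyroots(c))  # approximately [1.0, 2.0, 3.0]

# The same values come from forming the companion matrix explicitly:
eigs = np.sort(np.linalg.eigvals(P.polycompanion(c)))
assert np.allclose(eigs, roots)
print(np.round(roots.real, 6).tolist())
```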

It is possible to improve the situation by finding a minimal height companion matrix for the polynomial by looking for such in a Bohemian matrix family. [17] However, no efficient general-purpose techniques are known.

History

The name "Bohemian matrices", together with the idea that it might be useful to categorize problems in this way, appeared in print by ISSAC 2016. [18] The name arises from the mnemonic BOunded HEight Matrix of Integers (BOHEMI), although the classification has since been extended to other discrete populations, [19] including non-integer populations such as the Gaussian integers. The breadth and utility of this categorization are becoming increasingly apparent: the first significant journal publication was [20], although smaller publications appeared before that. As of March 2022, publications explicitly using the name, aside from those cited elsewhere in this article, include [21] [22] [23].

The first workshop on the subject was held in 2018 at the University of Manchester and was entitled Bohemian Matrices and Applications. The idea can be considered to be a specialization in the sense of George Pólya, in that Bohemian matrix families are restricted to having certain entries and are thus special matrices. The idea can also be considered a generalization, as instead of merely having entries either 0 or 1 as in an incidence matrix of a graph or -1 and 1 as in the companion matrix of a Littlewood polynomial, the Bohemian family can have entries that are otherwise bounded, discrete, or (usually) both.

The idea is somewhat similar to that of sign pattern matrices, in which two matrices with real entries are considered equivalent if the corresponding entries have the same sign, and that theory looks for common properties. [24] A Bohemian matrix with population P={-1, 0, 1} is an example of a sign pattern matrix, and so it has all the properties of such matrices, but it may also have special properties owing to its Bohemian nature.


References

  1. Higham, Nicholas J (December 2018). "Rhapsodizing about Bohemian Matrices". Society for Industrial and Applied Mathematics. Archived from the original on 28 February 2022. Retrieved 1 March 2022.
  2. Corless, Robert; Labahn, George; Piponi, Dan; Rafiee Sevyeri, Leili (2022-07-05). "Bohemian Matrix Geometry". Proceedings of the 2022 International Symposium on Symbolic and Algebraic Computation. ISSAC '22. New York, NY, USA: Association for Computing Machinery: 361–370. arXiv: 2202.07769 . doi:10.1145/3476446.3536177. ISBN   978-1-4503-8688-3.
  3. Higham, Nicholas J. "Bohemian Matrices in Numerical Linear Algebra" (PDF). Nick Higham Conferences. Archived from the original (PDF) on 13 November 2020. Retrieved 2 March 2022.
  4. Thornton, Steven E (April 2019). Algorithms for Bohemian Matrices (PhD thesis). Western University. Archived from the original on 7 March 2022. Retrieved 1 March 2022.
  5. Higham, Nicholas J. "Bohemian Matrices in Numerical Linear Algebra" (PDF). Nick Higham Conferences. Archived from the original (PDF) on 13 November 2020. Retrieved 2 March 2022.
  6. Tao, Terence; Vu, Van (January 2006). "On random ±1 matrices: Singularity and determinant". Random Structures and Algorithms. 28 (1): 1–23. arXiv: math/0411095 . doi:10.1002/rsa.20109. S2CID   5361802 . Retrieved 2 March 2022.
  7. Vu, Van (2008). "Random Discrete Matrices". Horizons of Combinatorics. Bolyai Society Mathematical Studies. Vol. 17. pp. 257–289. arXiv: math/0611321 . doi:10.1007/978-3-540-77200-2_13. ISBN   978-3-540-77199-9. S2CID   118703720.
  8. Tao, Terence; Vu, Van (January 2006). "On random ±1 matrices: Singularity and determinant". Random Structures and Algorithms. 28 (1): 1–23. arXiv: math/0411095 . doi:10.1002/rsa.20109. S2CID   5361802 . Retrieved 2 March 2022.
  9. Živković, Miodrag (2006). "Classification of small (0, 1) matrices". Linear Algebra and Its Applications. 414 (1): 310–346. arXiv: math/0511636 . doi: 10.1016/j.laa.2005.10.010 .
  10. Mahler, Kurt (1963). "On two extremum properties of polynomials". Illinois Journal of Mathematics. 7 (4): 681–701. doi: 10.1215/ijm/1255645104 . S2CID   118793107.
  11. Odlyzko, Andrew (September 1992). "Zeros of polynomials with 0,1 coefficients". Algorithms Seminar: 169. CiteSeerX   10.1.1.47.9327 .
  12. Borwein, Peter B.; Jörgenson, Loki (December 2001). "Visible Structures in Number Theory". American Mathematical Monthly. 108 (10): 897–910. doi:10.1080/00029890.2001.11919824. JSTOR   2695413. S2CID   454318 . Retrieved 3 March 2022.
  13. Calkin, Neil J; Chan, Eunice Y.S.; Corless, Robert M. (2 June 2021). "Some facts and conjectures about Mandelbrot polynomials". Maple Transactions. 1 (1): 13. doi: 10.5206/mt.v1i1.14037 . S2CID   242158547.
  14. Ollerenshaw, Kathleen; Brée, D (1998). Most-perfect pandiagonal magic squares. IMA.
  15. Higham, Nicholas J; Lettington, Matthew (2022). "Optimizing and Factorizing the Wilson Matrix". The American Mathematical Monthly. 129 (5): 454–465. doi:10.1080/00029890.2022.2038006. S2CID   233322415. Archived from the original on 3 March 2022. Retrieved 3 March 2022.
  16. Edelman, Alan; Murakami, H (April 1995). "Polynomial roots from companion matrix eigenvalues" (PDF). Mathematics of Computation. 64 (210): 763–776. Bibcode:1995MaCom..64..763E. doi:10.1090/S0025-5718-1995-1262279-2. Archived (PDF) from the original on 24 January 2022. Retrieved 2 March 2022.
  17. Chan, Eunice Y.S.; Corless, Robert M. (6 February 2017). "A new kind of companion matrix". Electronic Journal of Linear Algebra. 32: 335–342. doi: 10.13001/1081-3810.3400 .
  18. Corless, Robert M.; Thornton, Steven E. (2017). "The Bohemian Eigenvalue Project". ACM Communications in Computer Algebra. 50 (4): 158–160. doi:10.1145/3055282.3055289. S2CID   34583673. Archived from the original on 1 March 2022. Retrieved 28 February 2022.
  19. Corless, Robert (2 June 2021). "What can we learn from Bohemian Matrices". Maple Transactions. 1 (1). doi: 10.5206/mt.v1i1.14039 . S2CID   241595165.
  20. Chan, Eunice Y.S.; Corless, Robert M.; Gonzalez-Vega, Laureano; Sendra, J. Rafael; Sendra, Juana; Thornton, Steven E. (September 2020). "Upper Hessenberg and Toeplitz Bohemians". Linear Algebra and Its Applications. 601: 72–100. arXiv: 1907.10677 . doi:10.1016/j.laa.2020.03.037. S2CID   198899515. Archived from the original on 13 August 2023. Retrieved 3 March 2022.
  21. Fasi, Massimiliano; Negri Porzio, Gian Maria (2020). "Determinants of normalized Bohemian upper Hessenberg matrices". The Electronic Journal of Linear Algebra. 36 (36): 352–366. doi: 10.13001/ela.2020.5053 . S2CID   191136476.
  22. Chan, Eunice Y.S.; Corless, Robert M.; Gonzalez-Vega, Laureano; Sendra, J. Rafael; Sendra, Juana (May 2022). "Inner Bohemian Inverses". Applied Mathematics and Computation. 421 (15): 126945. doi: 10.1016/j.amc.2022.126945 . S2CID   246318540.
  23. Bogoya, Manuel; Serra-Capizzano, Stefano; Trotti, Ken (2022). "Upper Hessenberg and Toeplitz Bohemian matrix sequences: a note on their asymptotical eigenvalues and singular values" (PDF). Electronic Transactions on Numerical Analysis. 55: 76–91. doi:10.1553/etna_vol55s76. S2CID   243772914. Archived (PDF) from the original on 3 March 2022. Retrieved 3 March 2022.
  24. Hall, Frank J; Li, Zhongshan (2013). Hogben, Leslie (ed.). Sign Pattern Matrices (2nd ed.). Handbook of Linear Algebra: CRC Press. pp. 42-1–42-32.