Williamson conjecture

In combinatorial mathematics, specifically in combinatorial design theory and combinatorial matrix theory, the Williamson conjecture is that Williamson matrices of order $n$ exist for all positive integers $n$. Four symmetric and circulant matrices $A$, $B$, $C$, $D$ are known as Williamson matrices if their entries are $\pm 1$ and they satisfy the relationship

$$A^2 + B^2 + C^2 + D^2 = 4nI_n,$$

where $I_n$ is the identity matrix of order $n$. John Williamson showed that if $A$, $B$, $C$, $D$ are Williamson matrices then

$$H = \begin{pmatrix} A & B & C & D \\ -B & A & -D & C \\ -C & D & A & -B \\ -D & -C & B & A \end{pmatrix}$$

is a Hadamard matrix of order $4n$.[1] It was once considered likely that Williamson matrices exist for all orders $n$ and that the structure of Williamson matrices could provide a route to proving the Hadamard conjecture that Hadamard matrices exist for all orders $4n$.[2] However, in 1993 the Williamson conjecture was shown to be false via an exhaustive computer search by Dragomir Ž. Ðoković, who showed that Williamson matrices do not exist in order $n = 35$.[3] In 2008, additional counterexamples of orders 47, 53, and 59 were discovered.[4]
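To make the construction concrete, the sketch below (a minimal illustration assuming NumPy; the variable names and the choice of the order-3 quadruple are illustrative and not taken from Williamson's paper) checks the defining relationship for the quadruple $A = J_3$ (the all-ones matrix) and $B = C = D = 2I_3 - J_3$, and then verifies that the resulting block matrix is a Hadamard matrix of order 12.

```python
import numpy as np

# An order-3 Williamson quadruple (illustrative example):
# A is the all-ones matrix; B, C, D have +1 on the diagonal and -1 elsewhere.
# All four matrices are symmetric, circulant, and have entries +-1.
n = 3
A = np.ones((n, n), dtype=int)
B = C = D = 2 * np.eye(n, dtype=int) - np.ones((n, n), dtype=int)

# Williamson condition: A^2 + B^2 + C^2 + D^2 = 4n * I_n
assert np.array_equal(A @ A + B @ B + C @ C + D @ D, 4 * n * np.eye(n, dtype=int))

# Williamson's block construction of a matrix of order 4n = 12
H = np.block([
    [ A,  B,  C,  D],
    [-B,  A, -D,  C],
    [-C,  D,  A, -B],
    [-D, -C,  B,  A],
])

# Hadamard check: the rows of H are mutually orthogonal, i.e. H H^T = 4n * I_{4n}
assert np.array_equal(H @ H.T, 4 * n * np.eye(4 * n, dtype=int))
print("Constructed a Hadamard matrix of order", 4 * n)
```

The same verification applies to any candidate quadruple of symmetric circulant $\pm 1$ matrices of a given order.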

References

  1. Williamson, John (1944). "Hadamard's determinant theorem and the sum of four squares". Duke Mathematical Journal. 11 (1): 65–81. doi:10.1215/S0012-7094-44-01108-7. MR 0009590.
  2. Golomb, Solomon W.; Baumert, Leonard D. (1963). "The Search for Hadamard Matrices". American Mathematical Monthly. 70 (1): 12–17. doi:10.2307/2312777. JSTOR 2312777. MR 0146195.
  3. Ðoković, Dragomir Ž. (1993). "Williamson matrices of order 4n for n = 33, 35, 39". Discrete Mathematics. 115 (1): 267–271. doi:10.1016/0012-365X(93)90495-F. MR 1217635.
  4. Holzmann, W. H.; Kharaghani, H.; Tayfeh-Rezaie, B. (2008). "Williamson matrices up to order 59". Designs, Codes and Cryptography. 46 (3): 343–352. doi:10.1007/s10623-007-9163-5. MR 2372843.