In mathematics, the Paley construction is a method for constructing Hadamard matrices using finite fields. The construction was described in 1933 by the English mathematician Raymond Paley.
The Paley construction uses quadratic residues in a finite field GF(q) where q is a power of an odd prime number. There are two versions of the construction depending on whether q is congruent to 1 or 3 modulo 4.
Let q be a power of an odd prime. In the finite field GF(q) the quadratic character χ(a) indicates whether the element a is zero, a non-zero square, or a non-square:

$$\chi(a) = \begin{cases} 0 & \text{if } a = 0, \\ 1 & \text{if } a \text{ is a non-zero square,} \\ -1 & \text{if } a \text{ is a non-square.} \end{cases}$$
For example, in GF(7) the non-zero squares are 1 = 1² = 6², 4 = 2² = 5², and 2 = 3² = 4². Hence χ(0) = 0, χ(1) = χ(2) = χ(4) = 1, and χ(3) = χ(5) = χ(6) = −1.
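As a quick illustration, here is a minimal Python sketch (for prime q only, so that GF(q) is just the integers mod q) that computes the quadratic character by squaring the non-zero residues; the helper name quadratic_character is ours, not from any library:

```python
def quadratic_character(p):
    """Map each element of GF(p), p an odd prime, to 0, 1 or -1."""
    squares = {(x * x) % p for x in range(1, p)}   # the non-zero squares mod p
    return {a: 0 if a == 0 else (1 if a in squares else -1) for a in range(p)}

chi = quadratic_character(7)
print([a for a in range(1, 7) if chi[a] == 1])    # [1, 2, 4]
print([a for a in range(1, 7) if chi[a] == -1])   # [3, 5, 6]
```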
The Jacobsthal matrix Q for GF(q) is the q × q matrix with rows and columns indexed by elements of GF(q) such that the entry in row a and column b is χ(a − b). For example, in GF(7), if the rows and columns of the Jacobsthal matrix are indexed by the field elements 0, 1, 2, 3, 4, 5, 6, then

$$Q = \begin{pmatrix} 0 & -1 & -1 & 1 & -1 & 1 & 1 \\ 1 & 0 & -1 & -1 & 1 & -1 & 1 \\ 1 & 1 & 0 & -1 & -1 & 1 & -1 \\ -1 & 1 & 1 & 0 & -1 & -1 & 1 \\ 1 & -1 & 1 & 1 & 0 & -1 & -1 \\ -1 & 1 & -1 & 1 & 1 & 0 & -1 \\ -1 & -1 & 1 & -1 & 1 & 1 & 0 \end{pmatrix}.$$
The Jacobsthal matrix has the properties QQᵀ = qI − J and QJ = JQ = 0, where I is the q × q identity matrix and J is the q × q all-1 matrix. If q is congruent to 1 mod 4 then −1 is a square in GF(q), which implies that Q is a symmetric matrix. If q is congruent to 3 mod 4 then −1 is not a square, and Q is a skew-symmetric matrix. When q is a prime number and rows and columns are indexed by field elements in the usual 0, 1, 2, … order, Q is a circulant matrix. That is, each row is obtained from the row above by cyclic permutation.
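A short NumPy sketch (again restricted to prime q, with our own helper name jacobsthal) that builds Q for GF(7) and checks these identities:

```python
import numpy as np

def jacobsthal(p):
    """Jacobsthal matrix of GF(p), p an odd prime: entry (a, b) is chi(a - b)."""
    squares = {(x * x) % p for x in range(1, p)}
    chi = lambda t: 0 if t % p == 0 else (1 if t % p in squares else -1)
    return np.array([[chi(a - b) for b in range(p)] for a in range(p)])

q = 7
Q = jacobsthal(q)
I, J = np.eye(q, dtype=int), np.ones((q, q), dtype=int)
assert (Q @ Q.T == q * I - J).all()               # Q Q^T = qI - J
assert (Q @ J == 0).all() and (J @ Q == 0).all()  # row and column sums are 0
assert (Q.T == -Q).all()                          # skew-symmetric, since 7 ≡ 3 (mod 4)
```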
If q is congruent to 3 mod 4 then

$$H = I + \begin{pmatrix} 0 & j^{\mathsf{T}} \\ -j & Q \end{pmatrix}$$

is a Hadamard matrix of size q + 1. Here j is the all-1 column vector of length q and I is the (q + 1) × (q + 1) identity matrix. The matrix H is a skew Hadamard matrix, which means it satisfies H + Hᵀ = 2I.
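A sketch of Construction I in Python (prime q only, and using one of the equivalent sign conventions for the bordered block matrix above), together with the defining checks:

```python
import numpy as np

def paley_1(p):
    """Skew Hadamard matrix of order p + 1 for a prime p ≡ 3 (mod 4).

    Builds H = I + [[0, j^T], [-j, Q]] with Q the Jacobsthal matrix of GF(p).
    Sketch for prime p only; the construction also works for odd prime powers.
    """
    assert p % 4 == 3
    squares = {(x * x) % p for x in range(1, p)}
    chi = lambda t: 0 if t % p == 0 else (1 if t % p in squares else -1)
    Q = np.array([[chi(a - b) for b in range(p)] for a in range(p)])
    H = np.eye(p + 1, dtype=int)
    H[0, 1:] += 1          # j^T along the top border
    H[1:, 0] -= 1          # -j along the left border
    H[1:, 1:] += Q         # Q in the lower-right block
    return H

H = paley_1(7)             # an 8 × 8 skew Hadamard matrix, as in the GF(7) example below
n = len(H)
assert (H @ H.T == n * np.eye(n, dtype=int)).all()   # rows are mutually orthogonal
assert (H + H.T == 2 * np.eye(n, dtype=int)).all()   # skew Hadamard property
```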
If q is congruent to 1 mod 4 then the matrix obtained by replacing all 0 entries in

$$\begin{pmatrix} 0 & j^{\mathsf{T}} \\ j & Q \end{pmatrix}$$

with the matrix

$$\begin{pmatrix} 1 & -1 \\ -1 & -1 \end{pmatrix}$$

and all entries ±1 with the matrix

$$\pm\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$

is a Hadamard matrix of size 2(q + 1). It is a symmetric Hadamard matrix.
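The entrywise substitution is easy to express with Kronecker products; here is a Python sketch for prime q (taking q = 5, which gives a 12 × 12 symmetric Hadamard matrix):

```python
import numpy as np

def paley_2(p):
    """Symmetric Hadamard matrix of order 2(p + 1) for a prime p ≡ 1 (mod 4).

    Sketch for prime p only: border the Jacobsthal matrix Q to get
    C = [[0, j^T], [j, Q]], then replace every 0 entry by [[1, -1], [-1, -1]]
    and every ±1 entry by ±[[1, 1], [1, -1]].
    """
    assert p % 4 == 1
    squares = {(x * x) % p for x in range(1, p)}
    chi = lambda t: 0 if t % p == 0 else (1 if t % p in squares else -1)
    Q = np.array([[chi(a - b) for b in range(p)] for a in range(p)])
    C = np.zeros((p + 1, p + 1), dtype=int)
    C[0, 1:] = C[1:, 0] = 1
    C[1:, 1:] = Q
    A = np.array([[1, 1], [1, -1]])      # substituted for ±1 entries (with their sign)
    B = np.array([[1, -1], [-1, -1]])    # substituted for the 0 entries (the diagonal of C)
    return np.kron(C, A) + np.kron(np.eye(p + 1, dtype=int), B)

H = paley_2(5)                           # 12 × 12 symmetric Hadamard matrix
n = len(H)
assert (H @ H.T == n * np.eye(n, dtype=int)).all()
assert (H == H.T).all()
```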
Applying Paley Construction I to the Jacobsthal matrix for GF(7), one produces the 8 × 8 Hadamard matrix

$$\begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ -1 & 1 & -1 & -1 & 1 & -1 & 1 & 1 \\ -1 & 1 & 1 & -1 & -1 & 1 & -1 & 1 \\ -1 & 1 & 1 & 1 & -1 & -1 & 1 & -1 \\ -1 & -1 & 1 & 1 & 1 & -1 & -1 & 1 \\ -1 & 1 & -1 & 1 & 1 & 1 & -1 & -1 \\ -1 & -1 & 1 & -1 & 1 & 1 & 1 & -1 \\ -1 & -1 & -1 & 1 & -1 & 1 & 1 & 1 \end{pmatrix}.$$
For an example of the Paley II construction when q is a prime power rather than a prime number, consider GF(9). This is an extension field of GF(3) obtained by adjoining a root of an irreducible quadratic. Different irreducible quadratics produce isomorphic fields. Choosing x² + x − 1 and letting a be a root of this polynomial, the nine elements of GF(9) may be written 0, 1, −1, a, a+1, a−1, −a, −a+1, −a−1. The non-zero squares are 1 = (±1)², −a+1 = (±a)², a−1 = (±(a+1))², and −1 = (±(a−1))². The Jacobsthal matrix, with rows and columns indexed by the field elements in this order, is

$$Q = \begin{pmatrix} 0 & 1 & 1 & -1 & -1 & 1 & -1 & 1 & -1 \\ 1 & 0 & 1 & 1 & -1 & -1 & -1 & -1 & 1 \\ 1 & 1 & 0 & -1 & 1 & -1 & 1 & -1 & -1 \\ -1 & 1 & -1 & 0 & 1 & 1 & -1 & -1 & 1 \\ -1 & -1 & 1 & 1 & 0 & 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 & 1 & 0 & -1 & 1 & -1 \\ -1 & -1 & 1 & -1 & 1 & -1 & 0 & 1 & 1 \\ 1 & -1 & -1 & -1 & -1 & 1 & 1 & 0 & 1 \\ -1 & 1 & -1 & 1 & -1 & -1 & 1 & 1 & 0 \end{pmatrix}.$$
It is a symmetric matrix consisting of nine 3 × 3 circulant blocks. Paley Construction II produces the symmetric 20 × 20 Hadamard matrix, written below with 1 for +1 and − for −1:
1- 111111 111111 111111
-- 1-1-1- 1-1-1- 1-1-1-
11 1-1111 ----11 --11--
1- --1-1- -1-11- -11--1
11 111-11 11---- ----11
1- 1---1- 1--1-1 -1-11-
11 11111- --11-- 11----
1- 1-1--- -11--1 1--1-1
11 --11-- 1-1111 ----11
1- -11--1 --1-1- -1-11-
11 ----11 111-11 11----
1- -1-11- 1---1- 1--1-1
11 11---- 11111- --11--
1- 1--1-1 1-1--- -11--1
11 ----11 --11-- 1-1111
1- -1-11- -11--1 --1-1-
11 11---- ----11 111-11
1- 1--1-1 -1-11- 1---1-
11 --11-- 11---- 11111-
1- -11--1 1--1-1 1-1---
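For readers who want to reproduce this, here is a self-contained Python sketch of the GF(9) arithmetic (elements represented as pairs c0 + c1·a with a² = 1 − a and coefficients mod 3; all helper names are ours) that rebuilds the Jacobsthal matrix and the 20 × 20 Hadamard matrix:

```python
import numpy as np

# Elements of GF(9) = GF(3)[a] with a^2 = 1 - a, listed in the order used above:
# 0, 1, -1, a, a+1, a-1, -a, -a+1, -a-1   (recall -1 ≡ 2 mod 3).
order = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1), (0, 2), (1, 2), (2, 2)]

def sub(u, v):
    return ((u[0] - v[0]) % 3, (u[1] - v[1]) % 3)

def mul(u, v):
    # (u0 + u1 a)(v0 + v1 a) reduced using a^2 = 1 - a
    return ((u[0] * v[0] + u[1] * v[1]) % 3,
            (u[0] * v[1] + u[1] * v[0] - u[1] * v[1]) % 3)

squares = {mul(x, x) for x in order if x != (0, 0)}
chi = lambda x: 0 if x == (0, 0) else (1 if x in squares else -1)

Q = np.array([[chi(sub(x, y)) for y in order] for x in order])
assert (Q == Q.T).all()                                                # symmetric, 9 ≡ 1 (mod 4)
assert (Q @ Q.T == 9 * np.eye(9, dtype=int) - np.ones((9, 9), dtype=int)).all()

# Construction II: border Q to a 10 × 10 conference matrix C, then replace each
# entry by a 2 × 2 block to obtain the 20 × 20 symmetric Hadamard matrix.
C = np.zeros((10, 10), dtype=int)
C[0, 1:] = C[1:, 0] = 1
C[1:, 1:] = Q
A = np.array([[1, 1], [1, -1]])     # substituted for a ±1 entry (with its sign)
B = np.array([[1, -1], [-1, -1]])   # substituted for a 0 entry
H = np.kron(C, A) + np.kron(np.eye(10, dtype=int), B)
assert (H @ H.T == 20 * np.eye(20, dtype=int)).all() and (H == H.T).all()
```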
The size of a Hadamard matrix must be 1, 2, or a multiple of 4. The Kronecker product of two Hadamard matrices of sizes m and n is a Hadamard matrix of size mn. By forming Kronecker products of matrices from the Paley construction and the 2 × 2 matrix

$$\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},$$
Hadamard matrices of every permissible size up to 100 except for 92 are produced. In his 1933 paper, Paley says “It seems probable that, whenever m is divisible by 4, it is possible to construct an orthogonal matrix of order m composed of ±1, but the general theorem has every appearance of difficulty.” This appears to be the first published statement of the Hadamard conjecture. A matrix of size 92 was eventually constructed by Baumert, Golomb, and Hall, using a construction due to Williamson combined with a computer search. Currently, Hadamard matrices have been shown to exist for all orders m that are multiples of 4 and less than 668.
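A brief sketch of this combination, reusing the paley_1 and paley_2 functions from the earlier sketches (so it is not standalone):

```python
import numpy as np

def is_hadamard(H):
    """Check that H has ±1 entries and mutually orthogonal rows."""
    n = len(H)
    return np.isin(H, (1, -1)).all() and (H @ H.T == n * np.eye(n, dtype=int)).all()

H2 = np.array([[1, 1], [1, -1]])

H16 = np.kron(H2, paley_1(7))    # order 2 · 8 = 16
H24 = np.kron(H2, paley_2(5))    # order 2 · 12 = 24
assert all(is_hadamard(H) for H in (H16, H24))
```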