Hadamard's inequality

In mathematics, Hadamard's inequality (also known as Hadamard's theorem on determinants[1]) is a result first published by Jacques Hadamard in 1893.[2] It is a bound on the determinant of a matrix whose entries are complex numbers in terms of the lengths of its column vectors. In geometrical terms, when restricted to real numbers, it bounds the volume in Euclidean space of n dimensions marked out by n vectors vi for 1 ≤ i ≤ n in terms of the lengths of these vectors ||vi||.

Specifically, Hadamard's inequality states that if N is the matrix having columns[3] vi, then

$$\left|\det(N)\right| \le \prod_{i=1}^{n} \|v_i\|.$$

If the n vectors are non-zero, equality in Hadamard's inequality is achieved if and only if the vectors are orthogonal.
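
As a quick numerical sanity check (a minimal NumPy sketch; the random matrix below is an arbitrary example, not taken from the source), one can compare |det(N)| with the product of the column norms:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# An arbitrary complex matrix; its columns play the role of the vectors v_i.
N = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

lhs = abs(np.linalg.det(N))               # |det(N)|
rhs = np.prod(np.linalg.norm(N, axis=0))  # product of the column lengths ||v_i||
print(lhs <= rhs)                         # True: Hadamard's inequality
```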

Alternate forms and corollaries

A corollary is that if the entries of an n by n matrix N are bounded by B, so |Nij| ≤ B for all i and j, then each column has length at most B√n, and therefore

$$\left|\det(N)\right| \le B^{n} n^{n/2}.$$

In particular, if the entries of N are +1 and −1 only, then[4]

$$\left|\det(N)\right| \le n^{n/2}.$$

In combinatorics, matrices N for which equality holds, i.e. those with orthogonal columns, are called Hadamard matrices.

More generally, suppose that N is a complex matrix of order n whose entries are bounded by |Nij| ≤ 1 for each i, j between 1 and n. Then Hadamard's inequality states that

$$\left|\det(N)\right| \le n^{n/2}.$$

Equality in this bound is attained for a real matrix N if and only if N is a Hadamard matrix.
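
For illustration (again only a sketch), the standard 4 × 4 Hadamard matrix obtained from the Sylvester construction attains the bound n^{n/2}:

```python
import numpy as np

# 4x4 Hadamard matrix via the Sylvester construction: H_4 = H_2 (Kronecker) H_2.
H2 = np.array([[1, 1], [1, -1]])
H4 = np.kron(H2, H2)

n = H4.shape[0]
print(abs(np.linalg.det(H4)))  # 16 (up to floating-point rounding)
print(n ** (n / 2))            # 16.0, so the bound n^(n/2) is attained
```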

A positive-semidefinite matrix P can be written as N*N, where N* denotes the conjugate transpose of N (see Decomposition of a semidefinite matrix). Writing vi for the columns of N, so that each diagonal entry satisfies Pii = (N*N)ii = ||vi||², it follows that

$$\det(P) = \left|\det(N)\right|^{2} \le \prod_{i=1}^{n} \|v_i\|^{2} = \prod_{i=1}^{n} P_{ii}.$$

So the determinant of a positive-semidefinite matrix is less than or equal to the product of its diagonal entries. Sometimes this is also known as Hadamard's inequality.[2][5]
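
The diagonal form can be checked in the same way (a sketch; here P is built as N*N from an arbitrary complex matrix, so it is positive semidefinite by construction):

```python
import numpy as np

rng = np.random.default_rng(1)
N = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
P = N.conj().T @ N                    # Hermitian and positive semidefinite

det_P = np.linalg.det(P).real         # det(P) is real and non-negative
diag_prod = np.prod(np.diag(P)).real  # product of the diagonal entries P_ii
print(det_P <= diag_prod)             # True: det(P) <= product of diagonal entries
```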

Proof

The result is trivial if the matrix N is singular, so assume the columns of N are linearly independent. By dividing each column by its length, it can be seen that the result is equivalent to the special case where each column has length 1; in other words, if ei are unit vectors and M is the matrix having the ei as columns, then

$$\left|\det(M)\right| \le 1 \qquad (1)$$

and equality is achieved if and only if the vectors are an orthogonal set. The general result now follows, since N = M·diag(||v1||, …, ||vn||):

$$\left|\det(N)\right| = \left(\prod_{i=1}^{n} \|v_i\|\right)\left|\det(M)\right| \le \prod_{i=1}^{n} \|v_i\|.$$

To prove (1), consider P = M*M and let the eigenvalues of P be λ1, λ2, …, λn; they are real and non-negative because P is positive semidefinite. Since the length of each column of M is 1, each entry on the diagonal of P is 1, so the trace of P is n. Applying the inequality of arithmetic and geometric means,

$$\det(P) = \prod_{i=1}^{n} \lambda_i \le \left(\frac{1}{n}\sum_{i=1}^{n} \lambda_i\right)^{n} = \left(\frac{\operatorname{tr}(P)}{n}\right)^{n} = 1,$$

so

$$\left|\det(M)\right| = \sqrt{\det(P)} \le 1.$$

If equality holds, then the λi must all be equal, and since their sum is n, they must all equal 1. The matrix P is Hermitian and therefore diagonalizable, so if all its eigenvalues equal 1 it is the identity matrix; in other words, the columns of M are an orthonormal set and the columns of N are an orthogonal set.[6] Many other proofs can be found in the literature.
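
The eigenvalue argument can also be illustrated numerically (a sketch in which the columns of an arbitrary matrix are normalized to unit length, as in the reduction above):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
N = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = N / np.linalg.norm(N, axis=0)   # rescale so every column has unit length

P = M.conj().T @ M                  # Hermitian with unit diagonal
lam = np.linalg.eigvalsh(P)         # real, non-negative eigenvalues

print(np.isclose(lam.sum(), n))         # trace(P) = n
print(np.prod(lam) <= lam.mean() ** n)  # AM-GM gives det(P) <= 1
print(abs(np.linalg.det(M)) <= 1.0)     # hence |det(M)| <= 1
```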

Notes

  1. "Hadamard theorem - Encyclopedia of Mathematics". encyclopediaofmath.org. Retrieved 2020-06-15.
  2. Maz'ya & Shaposhnikova
  3. The result is sometimes stated in terms of row vectors. That this is equivalent is seen by applying the transpose.
  4. Garling
  5. Różański, Michał; Wituła, Roman; Hetmaniok, Edyta (2017). "More subtle versions of the Hadamard inequality". Linear Algebra and Its Applications. 532: 500–511. doi:10.1016/j.laa.2017.07.003.
  6. Proof follows, with minor modifications, the second proof given in Maz'ya & Shaposhnikova.
