Circular ensemble

In the theory of random matrices, the circular ensembles are measures on spaces of unitary matrices introduced by Freeman Dyson as modifications of the Gaussian matrix ensembles. [1] The three main examples are the circular orthogonal ensemble (COE) on symmetric unitary matrices, the circular unitary ensemble (CUE) on unitary matrices, and the circular symplectic ensemble (CSE) on self-dual unitary quaternionic matrices.

Probability distributions

The distribution of the circular unitary ensemble CUE(n) is the Haar measure on the unitary group U(n). If U is a random element of CUE(n), then U^T U is a random element of COE(n); if U is a random element of CUE(2n), then U^R U is a random element of CSE(n), where U^R denotes the quaternion dual of U,

$$ U^{\mathrm{R}} = Z \, U^{\mathrm{T}} Z^{\mathrm{T}}, \qquad Z = \bigoplus_{j=1}^{n} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. $$
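These constructions translate directly into code. The sketch below is a minimal NumPy illustration, not a reference implementation: it uses the QR-based Haar sampler described under Calculations further down, and the helper names (haar_unitary, coe_sample, cse_sample) are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng()

def haar_unitary(n):
    """CUE(n) sample: QR of a complex Ginibre matrix with a phase
    correction (see the Calculations section below)."""
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    d = np.diagonal(r)
    return q * (d / np.abs(d))          # rescale columns so the law is exactly Haar

def coe_sample(n):
    """COE(n): the symmetric unitary matrix U^T U built from a CUE(n) draw."""
    u = haar_unitary(n)
    return u.T @ u

def cse_sample(n):
    """CSE(n): the self-dual unitary matrix U^R U built from a CUE(2n) draw."""
    u = haar_unitary(2 * n)
    z = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))   # block-diagonal Z
    return (z @ u.T @ z.T) @ u          # quaternion dual U^R = Z U^T Z^T, then U^R U
```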

Each element of a circular ensemble is a unitary matrix, so it has eigenvalues on the unit circle: $\lambda_k = e^{i\theta_k}$ with $\theta_k \in [0, 2\pi)$ for k = 1, 2, ..., n, where the $\theta_k$ are also known as eigenangles or eigenphases. In the CSE each of these n eigenvalues appears twice. The distributions have densities with respect to the eigenangles, given by

$$ p(\theta_1, \ldots, \theta_n) = \frac{1}{Z_{n,\beta}} \prod_{1 \le j < k \le n} \left| e^{i\theta_j} - e^{i\theta_k} \right|^{\beta} $$

on $[0, 2\pi)^n$ (symmetrized version), where β = 1 for COE, β = 2 for CUE, and β = 4 for CSE. The normalisation constant $Z_{n,\beta}$ is given by

$$ Z_{n,\beta} = (2\pi)^n \, \frac{\Gamma(\beta n/2 + 1)}{\left( \Gamma(\beta/2 + 1) \right)^{n}}, $$

as can be verified via Selberg's integral formula, or Weyl's integral formula for compact Lie groups.
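For small n the closed form for $Z_{n,\beta}$ can also be checked directly against a numerical integral of the unnormalised density. The following is a minimal sketch (the function names are illustrative, and SciPy's nquad is assumed to be available):

```python
import numpy as np
from math import gamma, pi
from scipy.integrate import nquad

def z_closed_form(n, beta):
    """Z_{n,beta} = (2*pi)^n * Gamma(beta*n/2 + 1) / Gamma(beta/2 + 1)^n."""
    return (2 * pi) ** n * gamma(beta * n / 2 + 1) / gamma(beta / 2 + 1) ** n

def z_numerical(n, beta):
    """Integrate prod_{j<k} |exp(i th_j) - exp(i th_k)|^beta over [0, 2*pi)^n."""
    def integrand(*theta):
        z = np.exp(1j * np.array(theta))
        p = 1.0
        for j in range(n):
            for k in range(j + 1, n):
                p *= abs(z[j] - z[k]) ** beta
        return p
    value, _ = nquad(integrand, [(0, 2 * pi)] * n)
    return value

for beta in (1, 2, 4):           # COE, CUE, CSE
    print(beta, z_closed_form(2, beta), z_numerical(2, beta))   # columns should agree
```

For n = 2 and β = 2, for instance, both routes give $8\pi^2$.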

Generalizations

Generalizations of the circular ensemble restrict the matrix elements of U to real numbers [so that U is in the orthogonal group O(n)] or to real quaternion numbers [so that U is in the symplectic group Sp(2n)]. The Haar measure on the orthogonal group produces the circular real ensemble (CRE), and the Haar measure on the symplectic group produces the circular quaternion ensemble (CQE).

The eigenvalues of orthogonal matrices come in complex conjugate pairs $e^{i\theta_k}$ and $e^{-i\theta_k}$, possibly complemented by eigenvalues fixed at +1 or −1. For n = 2m even and det U = 1, there are no fixed eigenvalues and the phases $\theta_k$ have probability distribution [2]

$$ P(\theta_1, \ldots, \theta_m) = C \prod_{1 \le j < k \le m} \left( \cos\theta_j - \cos\theta_k \right)^{2}, $$

with C an unspecified normalization constant. For n = 2m + 1 odd there is one fixed eigenvalue σ = det U equal to ±1. The phases have distribution

$$ P(\theta_1, \ldots, \theta_m) = C \prod_{k=1}^{m} \left( 1 - \sigma \cos\theta_k \right) \prod_{1 \le j < k \le m} \left( \cos\theta_j - \cos\theta_k \right)^{2}. $$

For n = 2m + 2 even and det U = −1 there is a pair of eigenvalues fixed at +1 and −1, while the phases have distribution

$$ P(\theta_1, \ldots, \theta_m) = C \prod_{k=1}^{m} \left( 1 - \cos^{2}\theta_k \right) \prod_{1 \le j < k \le m} \left( \cos\theta_j - \cos\theta_k \right)^{2}. $$

This is also the distribution of the eigenvalues of a matrix in Sp(2m).

These probability density functions are referred to as Jacobi distributions in the theory of random matrices, because correlation functions can be expressed in terms of Jacobi polynomials.
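A CRE sample, and the free eigenphases entering these distributions, can be generated numerically along the same lines as for the unitary case. The sketch below is illustrative only: it assumes the standard QR-with-sign-correction construction of a Haar-distributed orthogonal matrix, and the helper names are not from any particular library.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_orthogonal(n):
    """CRE(n) sample: QR of a real Ginibre matrix, with the signs of the
    diagonal of R absorbed into Q so that the law is Haar on O(n)."""
    g = rng.standard_normal((n, n))
    q, r = np.linalg.qr(g)
    return q * np.sign(np.diagonal(r))

def free_eigenphases(u, tol=1e-9):
    """Eigenangles theta_k in (0, pi): keep one member of each complex-conjugate
    pair and drop any eigenvalues fixed at +1 or -1."""
    angles = np.angle(np.linalg.eigvals(u))
    return np.sort(angles[(angles > tol) & (angles < np.pi - tol)])

u = haar_orthogonal(6)
if np.linalg.det(u) < 0:
    u[:, 0] = -u[:, 0]              # flip one column to land in the det U = +1 case
print(free_eigenphases(u))          # m = 3 free phases for n = 2m = 6
```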

Calculations

Averages of products of matrix elements in the circular ensembles can be calculated using Weingarten functions. For large matrix dimension these calculations become impractical, and numerical methods are advantageous. There exist efficient algorithms to generate random matrices in the circular ensembles, for example by performing a QR decomposition on a Ginibre matrix. [3]
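As a minimal illustration of both points, the sketch below draws CUE(n) samples by QR decomposition of a complex Ginibre matrix, including the phase correction that makes the resulting distribution exactly Haar, and uses them to check the simplest Weingarten-type average, E[|U_jk|²] = 1/n. The helper name and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(n):
    """CUE(n) sample: QR-decompose a complex Ginibre matrix and rescale each
    column of Q by the phase of the corresponding diagonal entry of R."""
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

n, samples = 8, 20000
avg = np.mean([abs(haar_unitary(n)[0, 0]) ** 2 for _ in range(samples)])
print(avg, 1 / n)                   # Monte Carlo estimate vs. exact value 1/n
```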


References

  1. F. J. Dyson (1962). "The threefold way. Algebraic structure of symmetry groups and ensembles in quantum mechanics". Journal of Mathematical Physics. 3 (6): 1199. Bibcode:1962JMP.....3.1199D. doi:10.1063/1.1703863.
  2. V. L. Girko (1985). "Distribution of eigenvalues and eigenvectors of orthogonal random matrices". Ukrainian Mathematical Journal. 37 (5): 457. doi:10.1007/bf01061167. S2CID 120597749.
  3. F. Mezzadri (2007). "How to generate random matrices from the classical compact groups". Notices of the AMS. 54: 592. arXiv:math-ph/0609050. Bibcode:2006math.ph...9050M.
