Generalizations of Pauli matrices

In mathematics and physics, in particular quantum information, the term generalized Pauli matrices refers to families of matrices which generalize the (linear algebraic) properties of the Pauli matrices. Here, a few classes of such matrices are summarized.

Multi-qubit Pauli matrices (Hermitian)

This method of generalizing the Pauli matrices refers to a generalization from a single 2-level system (qubit) to multiple such systems. In particular, the generalized Pauli matrices for a group of $n$ qubits are just the set of matrices generated by all possible products of Pauli matrices on any of the qubits. [1]

The vector space of a single qubit is $V_1 = \mathbb{C}^2$ and the vector space of $n$ qubits is $V_n = \left(\mathbb{C}^2\right)^{\otimes n} \cong \mathbb{C}^{2^n}$. We use the tensor product notation

$\sigma_a^{(j)} = \underbrace{I \otimes \cdots \otimes I}_{j-1} \otimes\, \sigma_a \otimes \underbrace{I \otimes \cdots \otimes I}_{n-j}, \qquad a \in \{1, 2, 3\},$

to refer to the operator on $V_n$ that acts as the Pauli matrix $\sigma_a$ on the $j$-th qubit and as the identity on all other qubits. We can also use $a = 0$ for the identity, i.e., for any $j$ we set $\sigma_0^{(j)} = I$. Then the multi-qubit Pauli matrices are all matrices of the form

$\sigma_{\vec a} := \sigma_{a_1}^{(1)} \sigma_{a_2}^{(2)} \cdots \sigma_{a_n}^{(n)},$

i.e., for $\vec a = (a_1, \ldots, a_n)$ a vector of integers between 0 and 3. Thus there are $4^n$ such generalized Pauli matrices if we include the identity and $4^n - 1$ if we do not.
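
For illustration, a minimal NumPy sketch (not part of the original article; the function name pauli_string and the 0–3 index convention are choices made here) builds $\sigma_{\vec a}$ as a Kronecker product:

```python
import numpy as np
from functools import reduce

# Single-qubit Pauli matrices, indexed 0..3 with sigma_0 = I.
PAULI = [
    np.eye(2, dtype=complex),                      # sigma_0 = I
    np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1 = X
    np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2 = Y
    np.array([[1, 0], [0, -1]], dtype=complex),    # sigma_3 = Z
]

def pauli_string(a):
    """Multi-qubit Pauli matrix for an index vector a = (a_1, ..., a_n), each a_j in {0, 1, 2, 3}."""
    return reduce(np.kron, (PAULI[j] for j in a))

# Example: sigma_1 on qubit 1 times sigma_3 on qubit 3 (identity elsewhere) is an 8 x 8 matrix.
M = pauli_string([1, 0, 3])
assert M.shape == (8, 8)
```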

Higher spin matrices (Hermitian)

The traditional Pauli matrices are the matrix representation of the Lie algebra generators $J_x$, $J_y$, and $J_z$ in the 2-dimensional irreducible representation of SU(2), corresponding to a spin-1/2 particle. These generate the Lie group SU(2).

For a general particle of spin $s = 0, \tfrac{1}{2}, 1, \tfrac{3}{2}, \ldots$, one instead utilizes the $(2s+1)$-dimensional irreducible representation.
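
As a hedged illustration (not from the article), the standard ladder-operator construction yields the $(2s+1)$-dimensional spin matrices; the helper name spin_matrices is an assumption made for this sketch:

```python
import numpy as np

def spin_matrices(s):
    """Return (Jx, Jy, Jz) in the (2s+1)-dimensional irreducible representation of su(2),
    in the basis |s, m>, m = s, s-1, ..., -s, with hbar = 1."""
    d = int(round(2 * s)) + 1
    m = s - np.arange(d)                      # magnetic quantum numbers s, s-1, ..., -s
    Jz = np.diag(m).astype(complex)
    # J+ |s, m> = sqrt(s(s+1) - m(m+1)) |s, m+1>
    off = np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1))
    Jp = np.diag(off, 1).astype(complex)      # raising operator
    Jm = Jp.conj().T                          # lowering operator
    Jx = (Jp + Jm) / 2
    Jy = (Jp - Jm) / (2j)
    return Jx, Jy, Jz

# For s = 1/2 these reduce to the Pauli matrices divided by 2.
Jx, Jy, Jz = spin_matrices(0.5)
assert np.allclose(2 * Jz, np.diag([1, -1]))
```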

Generalized Gell-Mann matrices (Hermitian)

This method of generalizing the Pauli matrices refers to a generalization from 2-level systems (Pauli matrices acting on qubits) to 3-level systems (Gell-Mann matrices acting on qutrits) and generic $d$-level systems (generalized Gell-Mann matrices acting on qudits).

Construction

Let $E_{jk}$ be the matrix with 1 in the $jk$-th entry and 0 elsewhere. Consider the space of $d \times d$ complex matrices, $\mathbb{C}^{d \times d}$, for a fixed $d$.

Define the following matrices,

$f_{k,j}^{\,d} = E_{kj} + E_{jk}, \quad \text{for } k < j,$

$f_{k,j}^{\,d} = -i\,(E_{jk} - E_{kj}), \quad \text{for } k > j,$

and the diagonal matrices

$h_1^{\,d} = I_d,$

$h_k^{\,d} = h_k^{\,d-1} \oplus 0, \quad \text{for } 1 < k < d,$

$h_d^{\,d} = \sqrt{\tfrac{2}{d(d-1)}}\,\bigl( h_1^{\,d-1} \oplus (1-d) \bigr) = \sqrt{\tfrac{2}{d(d-1)}}\,\bigl( I_{d-1} \oplus (1-d) \bigr).$

The collection of matrices defined above, without the identity matrix, is called the generalized Gell-Mann matrices in dimension $d$. [2] [3] The symbol ⊕ (used in the Cartan subalgebra above, i.e., in the diagonal matrices $h_k^d$) denotes the matrix direct sum.

The generalized Gell-Mann matrices are Hermitian and traceless by construction, just like the Pauli matrices. One can also check that they are orthogonal in the Hilbert–Schmidt inner product on $\mathbb{C}^{d \times d}$. By dimension count, one sees that, together with the identity matrix, they span the vector space of $d \times d$ complex matrices, $\mathfrak{gl}(d, \mathbb{C})$. They then provide a Lie-algebra-generator basis acting on the fundamental representation of $\mathfrak{su}(d)$.

In dimensions $d = 2$ and $d = 3$, the above construction recovers the Pauli and Gell-Mann matrices, respectively.
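
A minimal NumPy sketch of this construction (illustrative only; the function name gell_mann and the loop ordering are choices made here, and the diagonal matrices are built in closed form rather than by the direct-sum recursion above):

```python
import numpy as np

def gell_mann(d):
    """Return the d*d - 1 generalized Gell-Mann matrices in dimension d
    (symmetric, antisymmetric, and diagonal families), without the identity."""
    def E(j, k):
        m = np.zeros((d, d), dtype=complex)
        m[j, k] = 1.0
        return m

    matrices = []
    for k in range(d):
        for j in range(k + 1, d):
            matrices.append(E(k, j) + E(j, k))            # symmetric f
            matrices.append(-1j * (E(k, j) - E(j, k)))    # antisymmetric f
    for k in range(1, d):
        # Diagonal (Cartan) matrices: sqrt(2/(k(k+1))) * diag(1, ..., 1, -k, 0, ..., 0)
        h = np.zeros((d, d), dtype=complex)
        h[np.arange(k), np.arange(k)] = 1.0
        h[k, k] = -k
        matrices.append(np.sqrt(2.0 / (k * (k + 1))) * h)
    return matrices

# Sanity check: each matrix is Hermitian and traceless.
for A in gell_mann(3):
    assert np.allclose(A, A.conj().T) and abs(np.trace(A)) < 1e-12
```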

Sylvester's generalized Pauli matrices (non-Hermitian)

A particularly notable generalization of the Pauli matrices was constructed by James Joseph Sylvester in 1882. [4] These are known as "Weyl–Heisenberg matrices" as well as "generalized Pauli matrices". [5] [6]

Framing

The Pauli matrices $\sigma_1$ and $\sigma_3$ satisfy the following:

$\sigma_1^2 = \sigma_3^2 = I, \qquad \sigma_1 \sigma_3 = -\sigma_3 \sigma_1 = e^{i\pi}\, \sigma_3 \sigma_1 .$

The so-called Walsh–Hadamard conjugation matrix is

$W = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} .$

Like the Pauli matrices, $W$ is both Hermitian and unitary. $\sigma_1$, $\sigma_3$ and $W$ satisfy the relation

$\sigma_1 = W \sigma_3 W^{-1} .$

The goal now is to extend the above to higher dimensions, $d$.
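
As a quick illustrative check (not part of the source text), the $d = 2$ relations above can be verified numerically:

```python
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)
W = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# W is Hermitian and unitary, and conjugates sigma_3 into sigma_1.
assert np.allclose(W, W.conj().T)
assert np.allclose(W @ W.conj().T, np.eye(2))
assert np.allclose(W @ sigma3 @ np.linalg.inv(W), sigma1)
```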

Construction: The clock and shift matrices

Fix the dimension $d$ as before. Let $\omega = e^{2\pi i / d}$, a root of unity. Since $\omega^d = 1$ and $\omega \neq 1$, the sum of all roots annuls:

$1 + \omega + \cdots + \omega^{d-1} = 0 .$

Integer indices may then be cyclically identified mod d.

Now define, with Sylvester, the shift matrix

$\Sigma_1 = \begin{pmatrix} 0 & 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \end{pmatrix}$

and the clock matrix,

$\Sigma_3 = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & \omega & 0 & \cdots & 0 \\ 0 & 0 & \omega^2 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \omega^{d-1} \end{pmatrix} .$

These matrices generalize $\sigma_1$ and $\sigma_3$, respectively.

Note that the unitarity and tracelessness of the two Pauli matrices are preserved, but not Hermiticity in dimensions higher than two. Since the Pauli matrices describe quaternions, Sylvester dubbed the higher-dimensional analogs "nonions", "sedenions", etc.

These two matrices are also the cornerstone of quantum mechanical dynamics in finite-dimensional vector spaces [7] [8] [9] as formulated by Hermann Weyl, and they find routine applications in numerous areas of mathematical physics. [10] The clock matrix amounts to the exponential of position in a "clock" of $d$ hours, and the shift matrix is just the translation operator in that cyclic vector space, so the exponential of the momentum. They are (finite-dimensional) representations of the corresponding elements of the Weyl–Heisenberg group on a $d$-dimensional Hilbert space.

The following relations echo and generalize those of the Pauli matrices:

$\Sigma_1^d = \Sigma_3^d = I$

and the braiding relation,

$\Sigma_3 \Sigma_1 = \omega\, \Sigma_1 \Sigma_3 = e^{2\pi i / d}\, \Sigma_1 \Sigma_3 ,$

the Weyl formulation of the CCR, which can be rewritten as

$\Sigma_3 \Sigma_1 \Sigma_3^{d-1} \Sigma_1^{d-1} = \omega .$
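
A short NumPy sketch (illustrative; the variable names are choices made here) constructs the clock and shift matrices and checks these relations:

```python
import numpy as np

def clock_and_shift(d):
    """Return Sylvester's shift matrix Sigma_1 and clock matrix Sigma_3 in dimension d."""
    omega = np.exp(2j * np.pi / d)
    shift = np.roll(np.eye(d, dtype=complex), 1, axis=0)  # Sigma_1: |k> -> |k+1 mod d>
    clock = np.diag(omega ** np.arange(d))                 # Sigma_3: |k> -> omega^k |k>
    return shift, clock

d = 5
S1, S3 = clock_and_shift(d)
omega = np.exp(2j * np.pi / d)

# Sigma_1^d = Sigma_3^d = I, and the braiding (Weyl) relation Sigma_3 Sigma_1 = omega Sigma_1 Sigma_3.
assert np.allclose(np.linalg.matrix_power(S1, d), np.eye(d))
assert np.allclose(np.linalg.matrix_power(S3, d), np.eye(d))
assert np.allclose(S3 @ S1, omega * S1 @ S3)
```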

On the other hand, to generalize the Walsh–Hadamard matrix $W$, note that for $d = 2$,

$W = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & \omega^{d-1} \end{pmatrix}, \qquad \omega = e^{2\pi i/2} = -1 .$

Define, again with Sylvester, the following analog matrix, [11] still denoted by $W$ in a slight abuse of notation,

$W = \frac{1}{\sqrt{d}} \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega^{d-1} & \omega^{2(d-1)} & \cdots & \omega^{(d-1)^2} \\ 1 & \omega^{d-2} & \omega^{2(d-2)} & \cdots & \omega^{(d-1)(d-2)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega & \omega^{2} & \cdots & \omega^{d-1} \end{pmatrix} ,$

that is, $W_{jk} = \omega^{-jk}/\sqrt{d}$ with rows and columns indexed by $j, k = 0, 1, \ldots, d-1$.

It is evident that $W$ is no longer Hermitian, but is still unitary. Direct calculation yields

$W \Sigma_3 W^{-1} = \Sigma_1 ,$

which is the desired analog result. Thus, $W$, a Vandermonde matrix, arrays the eigenvectors of $\Sigma_1$, which has the same eigenvalues as $\Sigma_3$.

Indeed, $W$ is precisely the matrix of the discrete Fourier transform, converting position coordinates to momentum coordinates and vice versa.
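
The following sketch (again illustrative, reading off the entrywise form $W_{jk} = \omega^{-jk}/\sqrt{d}$ from the display above) builds $W$ and checks the conjugation property:

```python
import numpy as np

d = 5
omega = np.exp(2j * np.pi / d)
j, k = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
W = omega ** (-j * k) / np.sqrt(d)                   # W_{jk} = omega^{-jk} / sqrt(d), a Vandermonde matrix

S1 = np.roll(np.eye(d, dtype=complex), 1, axis=0)    # shift matrix Sigma_1
S3 = np.diag(omega ** np.arange(d))                  # clock matrix Sigma_3

# W is unitary (but not Hermitian for d > 2) and conjugates the clock matrix into the shift matrix.
assert np.allclose(W @ W.conj().T, np.eye(d))
assert np.allclose(W @ S3 @ W.conj().T, S1)
```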

Definition

The complete family of $d^2$ unitary (but non-Hermitian) independent matrices is defined as follows:

$\sigma_{k,j} := \Sigma_1^{\,k}\, \Sigma_3^{\,j} = \sum_{m=0}^{d-1} |m+k\rangle\, \omega^{mj}\, \langle m| , \qquad k, j = 0, 1, \ldots, d-1 .$

This provides Sylvester's well-known trace-orthogonal basis for $\mathfrak{gl}(d, \mathbb{C})$, known as "nonions" $\mathfrak{gl}(3, \mathbb{C})$, "sedenions" $\mathfrak{gl}(4, \mathbb{C})$, etc. [12] [13]

This basis can be systematically connected to the above Hermitian basis. [14] (For instance, the powers of $\Sigma_3$, the Cartan subalgebra, map to linear combinations of the $h_k^d$ matrices.) It can further be used to identify $\mathfrak{gl}(d, \mathbb{C})$, as $d \to \infty$, with the algebra of Poisson brackets.

Properties

With respect to the Hilbert–Schmidt inner product on operators, $\langle A, B \rangle_{\mathrm{HS}} = \operatorname{tr}(A^\dagger B)$, Sylvester's generalized Pauli operators are orthogonal and normalized to $\sqrt{d}$:

$\operatorname{tr}\bigl( \sigma_{k,j}^\dagger\, \sigma_{k',j'} \bigr) = d\, \delta_{k k'}\, \delta_{j j'} .$

This can be checked directly from the above definition of $\sigma_{k,j}$.
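
A final illustrative check (the dictionary layout and variable names are choices made for this sketch) constructs the whole family and verifies the trace-orthogonality relation:

```python
import numpy as np

d = 4
omega = np.exp(2j * np.pi / d)
S1 = np.roll(np.eye(d, dtype=complex), 1, axis=0)    # shift matrix Sigma_1
S3 = np.diag(omega ** np.arange(d))                  # clock matrix Sigma_3

# sigma_{k,j} = Sigma_1^k Sigma_3^j for k, j = 0, ..., d-1: a family of d^2 unitary matrices.
family = {(k, j): np.linalg.matrix_power(S1, k) @ np.linalg.matrix_power(S3, j)
          for k in range(d) for j in range(d)}

# tr(sigma_{k,j}^dagger sigma_{k',j'}) = d * delta_{kk'} * delta_{jj'}
for (k, j), A in family.items():
    for (kp, jp), B in family.items():
        expected = d if (k, j) == (kp, jp) else 0.0
        assert abs(np.trace(A.conj().T @ B) - expected) < 1e-9
```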

Notes

  1. Brown, Adam R.; Susskind, Leonard (2018-04-25). "Second law of quantum complexity". Physical Review D. 97 (8): 086015. arXiv:1701.01107. Bibcode:2018PhRvD..97h6015B. doi:10.1103/PhysRevD.97.086015. S2CID 119199949.
  2. Kimura, G. (2003). "The Bloch vector for N-level systems". Physics Letters A. 314 (5–6): 339–349. arXiv:quant-ph/0301152. Bibcode:2003PhLA..314..339K. doi:10.1016/S0375-9601(03)00941-1. S2CID 119063531.
  3. Bertlmann, Reinhold A.; Krammer, Philipp (2008-06-13). "Bloch vectors for qudits". Journal of Physics A: Mathematical and Theoretical. 41 (23): 235303. arXiv:0806.1174. Bibcode:2008JPhA...41w5303B. doi:10.1088/1751-8113/41/23/235303. ISSN 1751-8121. S2CID 118603188.
  4. Sylvester, J. J. (1882), Johns Hopkins University Circulars I: 241–242; ibid. II (1883): 46; ibid. III (1884): 7–9. Summarized in The Collected Mathematics Papers of James Joseph Sylvester (Cambridge University Press, 1909), vol. III.
  5. Appleby, D. M. (May 2005). "Symmetric informationally complete–positive operator valued measures and the extended Clifford group". Journal of Mathematical Physics. 46 (5): 052107. arXiv:quant-ph/0412001. Bibcode:2005JMP....46e2107A. doi:10.1063/1.1896384. ISSN 0022-2488.
  6. Howard, Mark; Vala, Jiri (2012-08-15). "Qudit versions of the qubit π/8 gate". Physical Review A. 86 (2): 022316. arXiv:1206.1598. Bibcode:2012PhRvA..86b2316H. doi:10.1103/PhysRevA.86.022316. ISSN 1050-2947. S2CID 56324846.
  7. Weyl, H. (1927). "Quantenmechanik und Gruppentheorie". Zeitschrift für Physik. 46: 1–46. doi:10.1007/BF02055756.
  8. Weyl, H., The Theory of Groups and Quantum Mechanics (Dover, New York, 1931).
  9. Santhanam, T. S.; Tekumalla, A. R. (1976). "Quantum mechanics in finite dimensions". Foundations of Physics. 6 (5): 583. Bibcode:1976FoPh....6..583S. doi:10.1007/BF00715110. S2CID 119936801.
  10. For a serviceable review, see Vourdas, A. (2004). "Quantum systems with finite Hilbert space". Reports on Progress in Physics. 67: 267. doi:10.1088/0034-4885/67/3/R03.
  11. Sylvester, J. J. (1867). "Thoughts on inverse orthogonal matrices, simultaneous sign-successions, and tessellated pavements in two or more colours, with applications to Newton's rule, ornamental tile-work, and the theory of numbers". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 34 (232): 461–475. doi:10.1080/14786446708639914.
  12. Patera, J.; Zassenhaus, H. (1988). "The Pauli matrices in n dimensions and finest gradings of simple Lie algebras of type An−1". Journal of Mathematical Physics. 29 (3): 665. Bibcode:1988JMP....29..665P. doi:10.1063/1.528006.
  13. Since all indices are defined cyclically mod $d$, $\sigma_{k,j} = \sigma_{k \bmod d,\; j \bmod d}$.
  14. Fairlie, D. B.; Fletcher, P.; Zachos, C. K. (1990). "Infinite-dimensional algebras and a trigonometric basis for the classical Lie algebras". Journal of Mathematical Physics. 31 (5): 1088. Bibcode:1990JMP....31.1088F. doi:10.1063/1.528788.
