In mathematics and physics, in particular quantum information, the term generalized Pauli matrices refers to families of matrices which generalize the (linear algebraic) properties of the Pauli matrices. Here, a few classes of such matrices are summarized.
This method of generalizing the Pauli matrices refers to a generalization from a single 2-level system (qubit) to multiple such systems. In particular, the generalized Pauli matrices for a group of $n$ qubits are just the set of matrices generated by all possible products of Pauli matrices on any of the qubits. [1]
The vector space of a single qubit is $V_1 = \mathbb{C}^2$ and the vector space of $n$ qubits is $V_n = \left(\mathbb{C}^2\right)^{\otimes n} \cong \mathbb{C}^{2^n}$. We use the tensor product notation

$$\sigma_i^{(j)} = \underbrace{1 \otimes \cdots \otimes 1}_{j-1} \otimes\, \sigma_i \otimes \underbrace{1 \otimes \cdots \otimes 1}_{n-j}, \qquad i = 1, 2, 3, \quad j = 1, \ldots, n,$$

to refer to the operator on $V_n$ that acts as a Pauli matrix on the $j$th qubit and as the identity on all other qubits. We can also use $\sigma_0$ for the identity, i.e., for any $j$ we use $\sigma_0^{(j)} = 1$. Then the multi-qubit Pauli matrices are all matrices of the form

$$\sigma_{\vec{a}} := \sigma_{a_1}^{(1)} \sigma_{a_2}^{(2)} \cdots \sigma_{a_n}^{(n)} = \sigma_{a_1} \otimes \sigma_{a_2} \otimes \cdots \otimes \sigma_{a_n}, \qquad \vec{a} \in \{0, 1, 2, 3\}^n,$$

i.e., for $\vec{a}$ a vector of integers between 0 and 3. Thus there are $4^n$ such generalized Pauli matrices if we include the identity and $4^n - 1$ if we do not.
In quantum computation, it is conventional to denote the Pauli matrices with single upper case letters

$$I \equiv \sigma_0, \qquad X \equiv \sigma_1, \qquad Y \equiv \sigma_2, \qquad Z \equiv \sigma_3.$$

This allows subscripts on Pauli matrices to indicate the qubit index. For example, in a system with 3 qubits,

$$X_1 \equiv X \otimes 1 \otimes 1, \qquad Z_2 \equiv 1 \otimes Z \otimes 1.$$

Multi-qubit Pauli matrices can be written as products of single-qubit Paulis on disjoint qubits. Alternatively, when it is clear from context, the tensor product symbol can be omitted, i.e., unsubscripted Pauli matrices written consecutively represent a tensor product rather than a matrix product. For example:

$$XYZ \equiv X \otimes Y \otimes Z = X_1 Y_2 Z_3.$$
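As an illustrative sketch (not part of the original article), the multi-qubit Pauli matrices can be built numerically as Kronecker products of the four single-qubit matrices; here NumPy's `np.kron` plays the role of the tensor product:

```python
import numpy as np

# Single-qubit Pauli matrices, indexed 0..3 (sigma_0 is the identity).
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I, X, Y, Z]

def multi_pauli(indices):
    """Tensor product of single-qubit Paulis, e.g. (1, 2, 3) -> X (x) Y (x) Z."""
    result = np.array([[1]], dtype=complex)
    for a in indices:
        result = np.kron(result, PAULIS[a])
    return result

# XYZ on 3 qubits is a 2^3 x 2^3 matrix ...
XYZ = multi_pauli((1, 2, 3))
assert XYZ.shape == (8, 8)
# ... and equals the matrix product of X_1, Y_2, Z_3 on disjoint qubits.
X1 = multi_pauli((1, 0, 0))
Y2 = multi_pauli((0, 2, 0))
Z3 = multi_pauli((0, 0, 3))
assert np.allclose(XYZ, X1 @ Y2 @ Z3)
```

Because the factors act on disjoint qubits, the order of the matrix product $X_1 Y_2 Z_3$ does not matter, matching the tensor-product expression above.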
The traditional Pauli matrices are the matrix representation of the Lie algebra generators $J_x$, $J_y$, and $J_z$ in the 2-dimensional irreducible representation of SU(2), corresponding to a spin-1/2 particle. These generate the Lie group SU(2).

For a general particle of spin $s = 0, 1/2, 1, 3/2, \ldots$, one instead utilizes the $(2s+1)$-dimensional irreducible representation.
This method of generalizing the Pauli matrices refers to a generalization from 2-level systems (Pauli matrices acting on qubits) to 3-level systems (Gell-Mann matrices acting on qutrits) and generic $d$-level systems (generalized Gell-Mann matrices acting on qudits).
Let $E_{jk}$ be the $d \times d$ matrix with a 1 in the $jk$-th entry and 0 elsewhere. Consider the space of $d \times d$ complex matrices, $\mathbb{C}^{d \times d}$, for a fixed $d$.

Define the following matrices: for $1 \le k < j \le d$, the symmetric and antisymmetric off-diagonal matrices

$$f_{k,j}^{\,d} = E_{kj} + E_{jk}, \qquad f_{j,k}^{\,d} = -i\,(E_{kj} - E_{jk}),$$

and, for $1 \le k \le d - 1$, the diagonal matrices

$$h_k^{\,d} = \sqrt{\frac{2}{k(k+1)}} \left( \sum_{j=1}^{k} E_{jj} - k\,E_{k+1,k+1} \right) = \sqrt{\frac{2}{k(k+1)}} \left( \mathbf{1}_k \oplus (-k) \oplus 0_{d-k-1} \right).$$

The collection of matrices defined above without the identity matrix are called the generalized Gell-Mann matrices, in dimension $d$. [2] [3] The symbol ⊕ (utilized in the Cartan subalgebra above) denotes the matrix direct sum.
The generalized Gell-Mann matrices are Hermitian and traceless by construction, just like the Pauli matrices. One can also check that they are orthogonal in the Hilbert–Schmidt inner product on $\mathbb{C}^{d \times d}$. By dimension count, one sees that, together with the identity, they span the vector space of $d \times d$ complex matrices, $\mathfrak{gl}(d, \mathbb{C})$. They then provide a Lie-algebra-generator basis acting on the fundamental representation of $\mathfrak{su}(d)$.
In dimensions $d$ = 2 and 3, the above construction recovers the Pauli and Gell-Mann matrices, respectively.
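The stated properties are easy to verify numerically. The following NumPy sketch (an illustration, using the standard closed-form construction) builds the generalized Gell-Mann matrices in dimension $d$ and checks Hermiticity, tracelessness, and Hilbert–Schmidt orthogonality; for $d = 3$ it yields the eight Gell-Mann matrices:

```python
import numpy as np

def gell_mann(d):
    """Return the d**2 - 1 generalized Gell-Mann matrices in dimension d."""
    def E(j, k):
        m = np.zeros((d, d), dtype=complex)
        m[j, k] = 1
        return m
    mats = []
    for k in range(d):
        for j in range(k + 1, d):
            mats.append(E(k, j) + E(j, k))          # symmetric off-diagonal
            mats.append(-1j * (E(k, j) - E(j, k)))  # antisymmetric off-diagonal
    for k in range(1, d):                           # diagonal (Cartan subalgebra)
        diag = np.diag([1] * k + [-k] + [0] * (d - k - 1)).astype(complex)
        mats.append(np.sqrt(2 / (k * (k + 1))) * diag)
    return mats

mats = gell_mann(3)
assert len(mats) == 8                       # the eight Gell-Mann matrices
for A in mats:
    assert np.allclose(A, A.conj().T)       # Hermitian
    assert abs(np.trace(A)) < 1e-9          # traceless
for i, A in enumerate(mats):
    for j, B in enumerate(mats):
        # Hilbert-Schmidt orthogonality: Tr(A B) = 2 delta_ij
        expected = 2 if i == j else 0
        assert abs(np.trace(A @ B) - expected) < 1e-9
```

With this normalization each matrix has Hilbert–Schmidt norm $\sqrt{2}$, matching the Pauli matrices at $d = 2$.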
A particularly notable generalization of the Pauli matrices was constructed by James Joseph Sylvester in 1882. [4] These are known as "Weyl–Heisenberg matrices" as well as "generalized Pauli matrices". [5] [6]
The Pauli matrices $\sigma_1$ and $\sigma_3$ satisfy the following:

$$\sigma_1^2 = \sigma_3^2 = I, \qquad \sigma_1 \sigma_3 = -\sigma_3 \sigma_1 = e^{i\pi}\, \sigma_3 \sigma_1.$$

The so-called Walsh–Hadamard conjugation matrix is

$$W = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.$$

Like the Pauli matrices, $W$ is both Hermitian and unitary. $\sigma_1$, $\sigma_3$ and $W$ satisfy the relation

$$\sigma_1 = W \sigma_3 W^\dagger.$$
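These two-dimensional relations can be verified directly; a minimal NumPy check (illustrative only):

```python
import numpy as np

# Pauli matrices sigma_1, sigma_3 and the Walsh-Hadamard matrix W.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
W = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

assert np.allclose(s1 @ s1, np.eye(2))          # involutory
assert np.allclose(s1 @ s3, -s3 @ s1)           # anticommutation
assert np.allclose(W, W.conj().T)               # W is Hermitian ...
assert np.allclose(W @ W.conj().T, np.eye(2))   # ... and unitary
assert np.allclose(W @ s3 @ W.conj().T, s1)     # conjugation by W maps sigma_3 to sigma_1
```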
The goal now is to extend the above to higher dimensions, $d > 2$.
Fix the dimension $d$ as before. Let $\omega = e^{2\pi i / d}$, a primitive $d$-th root of unity. Since $\omega^d = 1$ and $\omega \neq 1$, the sum of all the roots annuls:

$$1 + \omega + \cdots + \omega^{d-1} = 0.$$

Integer indices may then be cyclically identified mod $d$.
Now define, with Sylvester, the shift matrix

$$\Sigma_1 = \begin{pmatrix}
0 & 0 & 0 & \cdots & 0 & 1 \\
1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
\vdots & & \ddots & & & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0
\end{pmatrix}$$

and the clock matrix,

$$\Sigma_3 = \begin{pmatrix}
1 & 0 & \cdots & 0 \\
0 & \omega & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & \omega^{d-1}
\end{pmatrix}.$$

These matrices generalize $\sigma_1$ and $\sigma_3$, respectively.
Note that the unitarity and tracelessness of the two Pauli matrices are preserved, but not Hermiticity, in dimensions higher than two. Since the Pauli matrices describe quaternions, Sylvester dubbed the higher-dimensional analogs "nonions", "sedenions", etc.
These two matrices are also the cornerstone of quantum mechanical dynamics in finite-dimensional vector spaces [7] [8] [9] as formulated by Hermann Weyl, and they find routine applications in numerous areas of mathematical physics. [10] The clock matrix amounts to the exponential of the position operator in a "clock" of $d$ hours, and the shift matrix is just the translation operator in that cyclic vector space, so the exponential of the momentum. They are (finite-dimensional) representations of the corresponding elements of the Weyl–Heisenberg group on a $d$-dimensional Hilbert space.
The following relations echo and generalize those of the Pauli matrices:

$$\Sigma_1^d = \Sigma_3^d = I$$

and the braiding relation,

$$\Sigma_3 \Sigma_1 = \omega\, \Sigma_1 \Sigma_3 = e^{2\pi i / d}\, \Sigma_1 \Sigma_3,$$

the Weyl formulation of the CCR, which can be rewritten as

$$\Sigma_3 \Sigma_1 \Sigma_3^{d-1} \Sigma_1^{d-1} = \omega.$$
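A short NumPy sketch (illustrative; `d = 5` is an arbitrary choice) constructs the shift and clock matrices and verifies the relations above, including the loss of Hermiticity for $d > 2$:

```python
import numpy as np

d = 5
omega = np.exp(2j * np.pi / d)

# Shift matrix: cyclic permutation e_j -> e_{j+1 mod d} (generalizes sigma_1).
S1 = np.roll(np.eye(d), 1, axis=0).astype(complex)
# Clock matrix: diagonal of the d-th roots of unity (generalizes sigma_3).
S3 = np.diag(omega ** np.arange(d))

# Both are of order d ...
assert np.allclose(np.linalg.matrix_power(S1, d), np.eye(d))
assert np.allclose(np.linalg.matrix_power(S3, d), np.eye(d))
# ... and satisfy the braiding (Weyl) relation.
assert np.allclose(S3 @ S1, omega * (S1 @ S3))
# Unitary, traceless, but no longer Hermitian for d > 2.
assert np.allclose(S1 @ S1.conj().T, np.eye(d))
assert abs(np.trace(S3)) < 1e-9
assert not np.allclose(S3, S3.conj().T)
```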
On the other hand, to generalize the Walsh–Hadamard matrix $W$, note

$$W = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & \omega^{d-1} \end{pmatrix}, \qquad d = 2.$$

Define, again with Sylvester, the following analog matrix, [11] still denoted by $W$ in a slight abuse of notation,

$$W = \frac{1}{\sqrt{d}} \begin{pmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & \omega & \omega^2 & \cdots & \omega^{d-1} \\
1 & \omega^2 & \omega^4 & \cdots & \omega^{2(d-1)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \omega^{d-1} & \omega^{2(d-1)} & \cdots & \omega^{(d-1)^2}
\end{pmatrix}, \qquad W_{jk} = \frac{1}{\sqrt{d}}\, \omega^{jk}, \quad j, k = 0, 1, \ldots, d-1.$$
It is evident that $W$ is no longer Hermitian, but is still unitary. Direct calculation yields

$$W \Sigma_1 W^\dagger = \Sigma_3,$$

which is the desired analog result. Thus, $W$, a Vandermonde matrix, arrays the eigenvectors of $\Sigma_1$, which has the same eigenvalues as $\Sigma_3$.
Up to an overall complex conjugation, $W$ is precisely the matrix of the discrete Fourier transform, converting position coordinates to momentum coordinates and vice versa.
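The diagonalization of the shift matrix by the Vandermonde matrix can likewise be checked numerically (an illustrative sketch with an arbitrary `d`):

```python
import numpy as np

d = 5
omega = np.exp(2j * np.pi / d)

# Vandermonde matrix in the powers of omega: W_{jk} = omega**(j*k) / sqrt(d).
j, k = np.indices((d, d))
W = omega ** (j * k) / np.sqrt(d)

S1 = np.roll(np.eye(d), 1, axis=0).astype(complex)  # shift matrix
S3 = np.diag(omega ** np.arange(d))                 # clock matrix

assert np.allclose(W @ W.conj().T, np.eye(d))  # unitary ...
assert not np.allclose(W, W.conj().T)          # ... but not Hermitian for d > 2
# Conjugation by W turns the shift matrix into the clock matrix.
assert np.allclose(W @ S1 @ W.conj().T, S3)
```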
The complete family of $d^2$ unitary (but non-Hermitian) independent matrices is defined as follows:

$$\sigma_{k,j} := \Sigma_1^k \Sigma_3^j, \qquad k, j = 0, 1, \ldots, d-1.$$

This provides Sylvester's well-known trace-orthogonal basis for $\mathfrak{gl}(d, \mathbb{C})$, known as "nonions" $\mathfrak{gl}(3, \mathbb{C})$, "sedenions" $\mathfrak{gl}(4, \mathbb{C})$, etc. [12] [13]
This basis can be systematically connected to the above Hermitian basis. [14] It can further be used to identify $\mathfrak{gl}(d, \mathbb{C})$, as $d \to \infty$, with the algebra of Poisson brackets.
With respect to the Hilbert–Schmidt inner product on operators, $\langle A, B \rangle = \operatorname{Tr}(A^\dagger B)$, Sylvester's generalized Pauli operators are orthogonal and normalized to $\sqrt{d}$:

$$\operatorname{Tr}\bigl(\sigma_{k,j}^\dagger\, \sigma_{k',j'}\bigr) = d\, \delta_{kk'}\, \delta_{jj'}.$$

This can be checked directly from the above definition of $\sigma_{k,j}$.
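A brute-force NumPy check of this trace orthogonality over all $d^2$ pairs (illustrative, with `d = 4` as an arbitrary choice):

```python
import numpy as np

d = 4
omega = np.exp(2j * np.pi / d)
S1 = np.roll(np.eye(d), 1, axis=0).astype(complex)  # shift matrix
S3 = np.diag(omega ** np.arange(d))                 # clock matrix

def sylvester(k, j):
    """Sylvester's generalized Pauli operator sigma_{k,j} = S1**k @ S3**j."""
    return np.linalg.matrix_power(S1, k) @ np.linalg.matrix_power(S3, j)

# Tr(sigma_{k,j}^dagger sigma_{k',j'}) = d * delta_{kk'} * delta_{jj'}
for k in range(d):
    for j in range(d):
        for kp in range(d):
            for jp in range(d):
                t = np.trace(sylvester(k, j).conj().T @ sylvester(kp, jp))
                expected = d if (k, j) == (kp, jp) else 0
                assert abs(t - expected) < 1e-9
```

Dividing each operator by $\sqrt{d}$ thus yields an orthonormal basis of $\mathfrak{gl}(d, \mathbb{C})$.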