Sinkhorn's theorem states that every square matrix with positive entries can be written in a certain standard form.
If A is an n×n matrix with strictly positive elements, then there exist diagonal matrices D1 and D2 with strictly positive diagonal elements such that D1AD2 is doubly stochastic. The matrices D1 and D2 are unique modulo multiplying the first matrix by a positive number and dividing the second one by the same number. [1] [2]
A simple iterative method to approach the doubly stochastic matrix is to alternately rescale all rows and all columns of A to sum to 1. Sinkhorn and Knopp presented this algorithm and analyzed its convergence. [3] It is essentially the same as the iterative proportional fitting algorithm, well known in survey statistics.
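A minimal NumPy sketch of this alternating rescaling (the function name, iteration cap, and tolerance are our own choices):

```python
import numpy as np

def sinkhorn_knopp(A, max_iter=1000, tol=1e-9):
    """Alternately rescale rows and columns of a positive matrix A
    until it is (approximately) doubly stochastic.

    Returns the balanced matrix and the scaling vectors r, c, so that
    diag(r) @ A @ diag(c) is the result.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    c = np.ones(n)  # column scalings (diagonal of D2)
    for _ in range(max_iter):
        r = 1.0 / (A @ c)            # make every row sum to 1
        c = 1.0 / (A.T @ r)          # make every column sum to 1
        B = r[:, None] * A * c[None, :]
        # stop when both row and column sums are close to 1
        if (np.abs(B.sum(axis=1) - 1).max() < tol and
                np.abs(B.sum(axis=0) - 1).max() < tol):
            break
    return B, r, c

# Example: balance a random strictly positive 4x4 matrix
rng = np.random.default_rng(0)
B, r, c = sinkhorn_knopp(rng.random((4, 4)) + 0.1)
print(B.sum(axis=0), B.sum(axis=1))  # both approximately [1 1 1 1]
```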
The following analogue for unitary matrices is also true: for every unitary matrix U there exist two diagonal unitary matrices L and R such that LUR has each of its columns and rows summing to 1. [4]
The following extension to maps between matrices is also true (see Theorem 5 [5] and also Theorem 4.7 [6]): given a Kraus operator that represents the quantum operation Φ mapping a density matrix into another,

$$S \mapsto \Phi(S) = \sum_i B_i S B_i^*,$$

that is trace preserving,

$$\sum_i B_i^* B_i = I,$$

and, in addition, whose range is in the interior of the positive definite cone (strict positivity), there exist scalings $x_j$, for $j \in \{0,1\}$, that are positive definite so that the rescaled Kraus operator

$$S \mapsto x_1 \Phi(x_0 S x_0) x_1 = \sum_i (x_1 B_i x_0) S (x_1 B_i x_0)^*$$

is doubly stochastic. In other words, it is such that both

$$x_1 \Phi(x_0 I x_0) x_1 = I,$$

as well as, for the adjoint,

$$x_0 \Phi^*(x_1 I x_1) x_0 = I,$$

where $I$ denotes the identity operator.
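The scalings can be approached by an operator analogue of the row/column rescaling: alternately renormalize the Kraus operators so the channel is unital, then so its adjoint is unital. The following Python sketch assumes the strict-positivity condition above (helper names are ours):

```python
import numpy as np

def inv_sqrt(M):
    """Inverse square root of a Hermitian positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * (1.0 / np.sqrt(w))) @ V.conj().T

def operator_sinkhorn(kraus, n_iter=100):
    """Alternately rescale Kraus operators B_i so that both the channel
    Phi(S) = sum_i B_i S B_i* and its adjoint map I to I.

    Sketch only: convergence relies on the strict-positivity condition
    stated in the text.
    """
    B = [np.asarray(b, dtype=complex) for b in kraus]
    for _ in range(n_iter):
        L = inv_sqrt(sum(b @ b.conj().T for b in B))   # enforce Phi(I) = I
        B = [L @ b for b in B]
        R = inv_sqrt(sum(b.conj().T @ b for b in B))   # enforce Phi*(I) = I
        B = [b @ R for b in B]
    return B

# Example: a generic random channel on 2x2 matrices (typically strictly positive)
rng = np.random.default_rng(0)
kraus = [rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)) for _ in range(3)]
B = operator_sinkhorn(kraus)
print(np.round(sum(b @ b.conj().T for b in B), 6))   # ~ identity
print(np.round(sum(b.conj().T @ b for b in B), 6))   # ~ identity
```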
In the 2010s Sinkhorn's theorem came to be used to find solutions of entropy-regularised optimal transport problems. [7] This has been of interest in machine learning because such "Sinkhorn distances" can be used to evaluate the difference between data distributions and permutations. [8] [9] [10] This improves the training of machine learning algorithms in situations where maximum likelihood training may not be the best method.
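A hedged sketch of the entropy-regularised transport iteration (function name and the regularisation strength eps are illustrative): the Gibbs kernel exp(-C/eps) is Sinkhorn-scaled to prescribed, generally non-uniform, marginals a and b.

```python
import numpy as np

def sinkhorn_transport(a, b, C, eps=0.1, n_iter=500):
    """Entropy-regularised optimal transport between histograms a and b
    with cost matrix C: scale K = exp(-C/eps) until its row sums are a
    and its column sums are b (a non-uniform Sinkhorn scaling)."""
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)             # match column marginals
        u = a / (K @ v)               # match row marginals
    P = u[:, None] * K * v[None, :]   # transport plan
    return P, (P * C).sum()           # plan and its "Sinkhorn distance"

# Example: transport between two 3-bin histograms on a line
x = np.arange(3, dtype=float)
C = (x[:, None] - x[None, :]) ** 2    # squared-distance cost
P, cost = sinkhorn_transport(np.array([0.5, 0.3, 0.2]),
                             np.array([0.2, 0.3, 0.5]), C)
print(P.sum(axis=1), P.sum(axis=0), cost)
```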
In mathematics, particularly linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized. This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concept of diagonalization is relatively straightforward for operators on finite-dimensional vector spaces but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modeled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.
In linear algebra, an invertible complex square matrix U is unitary if its conjugate transpose U* is also its inverse, that is, if

$$U^* U = U U^* = I,$$

where I is the identity matrix.
In mathematics, particularly in linear algebra, a skew-symmetric matrix is a square matrix whose transpose equals its negative. That is, it satisfies the condition

$$A^\mathsf{T} = -A.$$
Quantum decoherence is the loss of quantum coherence, the process in which a system's behaviour changes from that which can be explained by quantum mechanics to that which can be explained by classical mechanics. In quantum mechanics, particles such as electrons are described by a wave function, a mathematical representation of the quantum state of a system; a probabilistic interpretation of the wave function is used to explain various quantum effects. As long as there exists a definite phase relation between different states, the system is said to be coherent. A definite phase relationship is necessary to perform quantum computing on quantum information encoded in quantum states. Coherence is preserved under the laws of quantum physics.
In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal/lower diagonal (the first diagonal below the main one), and the supradiagonal/upper diagonal (the first diagonal above the main one). For example, the following matrix is tridiagonal:

$$\begin{pmatrix} 1 & 4 & 0 & 0 \\ 3 & 4 & 1 & 0 \\ 0 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \end{pmatrix}$$
In quantum mechanics, a quantum operation is a mathematical formalism used to describe a broad class of transformations that a quantum mechanical system can undergo. This was first discussed as a general stochastic transformation for a density matrix by George Sudarshan. The quantum operation formalism describes not only unitary time evolution or symmetry transformations of isolated systems, but also the effects of measurement and transient interactions with an environment. In the context of quantum computation, a quantum operation is called a quantum channel.
In linear algebra, the Gram matrix of a set of vectors $v_1, \dots, v_n$ in an inner product space is the Hermitian matrix of inner products, whose entries are given by $G_{ij} = \langle v_i, v_j \rangle$. If the vectors $v_1, \dots, v_n$ are the columns of a matrix $X$, then the Gram matrix is $X^* X$ in the general case that the vector coordinates are complex numbers, which simplifies to $X^\mathsf{T} X$ for the case that the vector coordinates are real numbers.
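A small NumPy illustration (variable names ours) that the Gram matrix of complex vectors is Hermitian and positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3)) + 1j * rng.normal(size=(5, 3))  # 3 vectors in C^5

G = X.conj().T @ X                       # Gram matrix G_ij = <v_i, v_j>
print(np.allclose(G, G.conj().T))        # Hermitian: True
print(np.all(np.linalg.eigvalsh(G) >= -1e-12))  # positive semi-definite: True
```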
In mathematics, the polar decomposition of a square real or complex matrix $A$ is a factorization of the form $A = UP$, where $U$ is a unitary matrix ($U$ is orthogonal when $A$ is real) and $P$ is a positive semi-definite Hermitian matrix ($P$ is symmetric when $A$ is real), both square and of the same size.
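A quick numerical check with SciPy's scipy.linalg.polar, which returns the two factors:

```python
import numpy as np
from scipy.linalg import polar

A = np.array([[1.0, 2.0], [3.0, 4.0]])
U, P = polar(A)                          # A = U @ P (right polar decomposition)
print(np.allclose(A, U @ P))             # True
print(np.allclose(U.T @ U, np.eye(2)))   # U is orthogonal (real case)
print(np.all(np.linalg.eigvalsh(P) >= 0))  # P is positive semi-definite
```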
In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all elements are random variables. Many important properties of physical systems can be represented mathematically as matrix problems. For example, the thermal conductivity of a lattice can be computed from the dynamical matrix of the particle-particle interactions within the lattice.
In mathematics, especially in probability and combinatorics, a doubly stochastic matrix (also called a bistochastic matrix) is a square matrix $X = (x_{ij})$ of nonnegative real numbers, each of whose rows and columns sums to 1, i.e.,

$$\sum_i x_{ij} = \sum_j x_{ij} = 1.$$
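A short membership test in NumPy (the function name and tolerance are ours):

```python
import numpy as np

def is_doubly_stochastic(X, tol=1e-9):
    """True if X is square, nonnegative, and all row/column sums are 1."""
    X = np.asarray(X, dtype=float)
    return (X.shape[0] == X.shape[1] and np.all(X >= -tol)
            and np.allclose(X.sum(axis=0), 1, atol=tol)
            and np.allclose(X.sum(axis=1), 1, atol=tol))

print(is_doubly_stochastic(np.full((3, 3), 1 / 3)))  # True
```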
In mathematics, the square root of a matrix extends the notion of square root from numbers to matrices. A matrix B is said to be a square root of A if the matrix product BB is equal to A.
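For instance, SciPy's scipy.linalg.sqrtm computes a principal square root numerically:

```python
import numpy as np
from scipy.linalg import sqrtm

A = np.array([[4.0, 0.0], [0.0, 9.0]])
B = sqrtm(A)                  # a (principal) square root of A
print(np.allclose(B @ B, A))  # True: BB equals A
```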
In mathematics, Stinespring's dilation theorem, also called Stinespring's factorization theorem, named after W. Forrest Stinespring, is a result from operator theory that represents any completely positive map on a C*-algebra A as a composition of two completely positive maps, each of which has a special form:

1. A *-representation of A on some auxiliary Hilbert space K, followed by
2. An operator map of the form $T \mapsto V^* T V$.
In mathematics, Choi's theorem on completely positive maps is a result that classifies completely positive maps between finite-dimensional (matrix) C*-algebras. An infinite-dimensional algebraic generalization of Choi's theorem is known as Belavkin's "Radon–Nikodym" theorem for completely positive maps.
In mathematics, a unistochastic matrix is a doubly stochastic matrix whose entries are the squares of the absolute values of the entries of some unitary matrix.
In mathematics, an orthostochastic matrix is a doubly stochastic matrix whose entries are the squares of the absolute values of the entries of some orthogonal matrix.
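A sketch (names ours) building an orthostochastic, hence also unistochastic and doubly stochastic, matrix by squaring the entries of a random orthogonal matrix obtained from a QR factorization:

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # random orthogonal matrix
D = Q ** 2                                    # entrywise squares |q_ij|^2
print(np.allclose(D.sum(axis=0), 1),
      np.allclose(D.sum(axis=1), 1))          # doubly stochastic: True True
```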
In mathematics, a matrix is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object.
In mathematics, particularly linear algebra, the Schur–Horn theorem, named after Issai Schur and Alfred Horn, characterizes the diagonal of a Hermitian matrix with given eigenvalues. It has inspired investigations and substantial generalizations in the setting of symplectic geometry. A few important generalizations are Kostant's convexity theorem, the Atiyah–Guillemin–Sternberg convexity theorem, and the Kirwan convexity theorem.
In quantum information theory and quantum optics, the Schrödinger–HJW theorem is a result about the realization of a mixed state of a quantum system as an ensemble of pure quantum states and the relation between the corresponding purifications of the density operators. The theorem is named after physicists and mathematicians Erwin Schrödinger, Lane P. Hughston, Richard Jozsa and William Wootters. The result was also found independently by Nicolas Gisin, and by Nicolas Hadjisavvas building upon work by Ed Jaynes, while a significant part of it was likewise independently discovered by N. David Mermin. Thanks to its complicated history, it is also known by various other names such as the GHJW theorem, the HJW theorem, and the purification theorem.
Birkhoff's algorithm is an algorithm for decomposing a bistochastic matrix into a convex combination of permutation matrices. It was published by Garrett Birkhoff in 1946. It has many applications. One such application is for the problem of fair random assignment: given a randomized allocation of items, Birkhoff's algorithm can decompose it into a lottery on deterministic allocations.
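A sketch of the algorithm (helper names ours; the matching step uses scipy.optimize.linear_sum_assignment): repeatedly find a permutation supported on the positive entries, peel off the smallest matched entry as its weight, and recurse on the remainder.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_decomposition(X, tol=1e-9):
    """Decompose a doubly stochastic matrix X into a convex combination
    of permutation matrices: X ~= sum_k w_k P_k."""
    X = np.array(X, dtype=float)
    weights, perms = [], []
    while X.max() > tol:
        # cost 0 on the support of X, 1 elsewhere: a min-cost assignment
        # then prefers a permutation that stays inside the support
        cost = (X <= tol).astype(float)
        rows, cols = linear_sum_assignment(cost)
        if cost[rows, cols].sum() > 0:   # no permutation in the support
            break
        w = X[rows, cols].min()          # weight of this permutation
        P = np.zeros_like(X)
        P[rows, cols] = 1.0
        X = X - w * P
        weights.append(w)
        perms.append(P)
    return weights, perms

# Example: a 3x3 bistochastic matrix splits into two permutation matrices
X = np.array([[0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5]])
w, Ps = birkhoff_decomposition(X)
print(w, np.allclose(sum(wi * Pi for wi, Pi in zip(w, Ps)), X))
```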