In quantum mechanics, and especially quantum information and the study of open quantum systems, the trace distance $T$ is a metric on the space of density matrices and gives a measure of the distinguishability between two states. It is the quantum generalization of the Kolmogorov distance for classical probability distributions.
The trace distance is defined as half of the trace norm of the difference of the matrices:
$$T(\rho,\sigma) := \frac{1}{2}\|\rho-\sigma\|_1 = \frac{1}{2}\operatorname{Tr}\left[\sqrt{(\rho-\sigma)^\dagger(\rho-\sigma)}\right],$$
where $\|A\|_1 \equiv \operatorname{Tr}[\sqrt{A^\dagger A}]$ is the trace norm of $A$, and $\sqrt{X}$ is the unique positive semidefinite $Y$ such that $Y^2 = X$ (which is always defined for positive semidefinite $X$). This can be thought of as the matrix obtained from $X$ by taking the algebraic square roots of its eigenvalues. For the trace distance, we more specifically have an expression of the form $\operatorname{Tr}\sqrt{A^\dagger A}$ where $A = \rho-\sigma$ is Hermitian. This quantity equals the sum of the singular values of $A$, which, $A$ being Hermitian, equals the sum of the absolute values of its eigenvalues. More explicitly,
$$T(\rho,\sigma) = \frac{1}{2}\sum_{i=1}^{r}|\lambda_i|,$$
where $\lambda_i$ is the $i$-th eigenvalue of $\rho-\sigma$, and $r$ is its rank.
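The eigenvalue formula above translates directly into a few lines of code. The following is a minimal sketch using NumPy (the function name `trace_distance` is chosen here for illustration); it computes the trace distance of two pure qubit states $|0\rangle\langle 0|$ and $|+\rangle\langle +|$:

```python
import numpy as np

def trace_distance(rho, sigma):
    """Half the sum of the absolute values of the eigenvalues of rho - sigma."""
    # rho - sigma is Hermitian, so eigvalsh applies and the eigenvalues are real
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigs))

# Example: |0><0| versus |+><+|
rho = np.array([[1, 0], [0, 0]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
sigma = np.outer(plus, plus.conj())

T = trace_distance(rho, sigma)  # = 1/sqrt(2) for this pair of pure states
```

For pure states $|\psi\rangle,|\phi\rangle$ the trace distance reduces to $\sqrt{1-|\langle\psi|\phi\rangle|^2}$, which for this pair is $1/\sqrt{2}$.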
The factor of two ensures that the trace distance between normalized density matrices takes values in the range $[0,1]$.
The trace distance can be seen as a direct quantum generalization of the total variation distance between probability distributions. Given a pair of probability distributions $p, q$, their total variation distance is
$$\delta(p,q) = \frac{1}{2}\sum_i |p_i - q_i|.$$
Attempting to directly apply this definition to quantum states raises the problem that quantum states can result in different probability distributions depending on how they are measured. A natural choice is then to consider the total variation distance between the classical probability distributions obtained measuring the two states, maximized over the possible choices of measurement, which results precisely in the trace distance between the quantum states. More explicitly, this is the quantity
$$T(\rho,\sigma) = \max_{\{\Pi_i\}_i} \frac{1}{2}\sum_i \left|\operatorname{Tr}[\Pi_i(\rho-\sigma)]\right|,$$
with the maximization performed with respect to all possible POVMs $\{\Pi_i\}_i$.
To see why this is the case, we start by observing that there is a unique decomposition $\rho-\sigma = P - Q$ with $P, Q \ge 0$ positive semidefinite matrices with orthogonal support. With these operators we can write concisely $|\rho-\sigma| = P + Q$. Furthermore $\operatorname{Tr}(P - Q) = \operatorname{Tr}(\rho-\sigma) = 0$, and thus $\operatorname{Tr}P = \operatorname{Tr}Q$. We thus have
$$T(\rho,\sigma) = \frac{1}{2}\operatorname{Tr}|\rho-\sigma| = \frac{1}{2}\left(\operatorname{Tr}P + \operatorname{Tr}Q\right) = \operatorname{Tr}P,$$
and, for any POVM $\{\Pi_i\}_i$,
$$\frac{1}{2}\sum_i \left|\operatorname{Tr}[\Pi_i(\rho-\sigma)]\right| = \frac{1}{2}\sum_i \left|\operatorname{Tr}(\Pi_i P) - \operatorname{Tr}(\Pi_i Q)\right| \le \frac{1}{2}\sum_i \operatorname{Tr}[\Pi_i(P+Q)] = \frac{1}{2}\operatorname{Tr}(P+Q) = T(\rho,\sigma).$$
This shows that
$$\max_{\{\Pi_i\}_i}\delta(P_{\Pi,\rho}, P_{\Pi,\sigma}) \le T(\rho,\sigma),$$
where $P_{\Pi,\rho}$ denotes the classical probability distribution resulting from measuring $\rho$ with the POVM $\{\Pi_i\}_i$, $(P_{\Pi,\rho})_i = \operatorname{Tr}(\Pi_i\rho)$, and the maximum is performed over all POVMs $\{\Pi_i\}_i$.
To conclude that the inequality is saturated by some POVM, we need only consider the projective measurement with elements $\Pi_i = |v_i\rangle\langle v_i|$ corresponding to the eigenvectors $|v_i\rangle$ of $\rho-\sigma$. With this choice,
$$\delta(P_{\Pi,\rho}, P_{\Pi,\sigma}) = \frac{1}{2}\sum_i \left|\operatorname{Tr}\!\left[|v_i\rangle\langle v_i|(\rho-\sigma)\right]\right| = \frac{1}{2}\sum_i |\lambda_i| = T(\rho,\sigma),$$
where $\lambda_i$ are the eigenvalues of $\rho-\sigma$.
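The saturation argument can be checked numerically. Below is a small sketch using NumPy; `random_density_matrix` is a helper defined here for the example (via the standard Ginibre-style construction), not part of any library. It verifies that the projective measurement onto the eigenvectors of $\rho-\sigma$ attains the trace distance:

```python
import numpy as np

def random_density_matrix(d, rng):
    """Illustrative helper: A A^dagger normalized to unit trace is a valid state."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    M = A @ A.conj().T
    return M / np.trace(M)

rng = np.random.default_rng(0)
d = 4
rho = random_density_matrix(d, rng)
sigma = random_density_matrix(d, rng)

# Trace distance from the eigenvalues of rho - sigma
lam, V = np.linalg.eigh(rho - sigma)
T = 0.5 * np.sum(np.abs(lam))

# Projective measurement onto the eigenvectors of rho - sigma:
# each projector Pi_i = |v_i><v_i| gives Tr[Pi_i (rho - sigma)] = lam_i
tvd = 0.0
for i in range(d):
    v = V[:, i]
    Pi = np.outer(v, v.conj())
    tvd += 0.5 * abs(np.trace(Pi @ (rho - sigma)).real)

# tvd now equals T: this measurement saturates the bound
```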
By using the Hölder duality for Schatten norms, the trace distance can be written in variational form as [1]
$$T(\rho,\sigma) = \max_{0 \le \Pi \le I} \operatorname{Tr}[\Pi(\rho-\sigma)].$$
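The maximum in the variational form is attained by the projector onto the positive eigenspace of $\rho-\sigma$; the following sketch (NumPy assumed) checks this on a concrete qubit pair:

```python
import numpy as np

# A mixed state and a pure state for the check
rho = np.diag([0.7, 0.3]).astype(complex)
theta = np.pi / 5
psi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
sigma = np.outer(psi, psi.conj())

lam, V = np.linalg.eigh(rho - sigma)
T = 0.5 * np.sum(np.abs(lam))

# Projector onto the positive eigenspace of rho - sigma
P_plus = sum(np.outer(V[:, i], V[:, i].conj())
             for i in range(len(lam)) if lam[i] > 0)

# Tr[P_plus (rho - sigma)] equals the sum of the positive eigenvalues,
# which (since the eigenvalues sum to zero) equals T
variational = np.trace(P_plus @ (rho - sigma)).real
```

Since $\operatorname{Tr}(\rho-\sigma)=0$, the sum of the positive eigenvalues equals half the sum of all absolute eigenvalues, which is exactly why this projector achieves the maximum.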
As for its classical counterpart, the trace distance can be related to the maximum probability of distinguishing between two quantum states:
$$p_{\max} = \frac{1}{2}\left(1 + T(\rho,\sigma)\right).$$
For example, suppose Alice prepares a system in either the state $\rho$ or $\sigma$, each with probability $\frac{1}{2}$, and sends it to Bob, who has to discriminate between the two states using a binary measurement. Let Bob assign the measurement outcome $0$ with POVM element $\Lambda$, and the outcome $1$ with POVM element $1-\Lambda$, to identify the state $\rho$ or $\sigma$, respectively. His expected probability of correctly identifying the incoming state is then given by
$$p_{\text{guess}} = \frac{1}{2}\operatorname{Tr}(\Lambda\rho) + \frac{1}{2}\operatorname{Tr}[(1-\Lambda)\sigma] = \frac{1}{2}\left(1 + \operatorname{Tr}[\Lambda(\rho-\sigma)]\right).$$
Therefore, when applying an optimal measurement, Bob has the maximal probability
$$p_{\max} = \frac{1}{2}\left(1 + T(\rho,\sigma)\right)$$
of correctly identifying in which state Alice prepared the system. [2]
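The optimal $\Lambda$ in this argument is the projector onto the positive eigenspace of $\rho-\sigma$ (the Helstrom measurement). A minimal numerical sketch, assuming NumPy, for the pair $|0\rangle\langle 0|$ and $|+\rangle\langle +|$:

```python
import numpy as np

# Two states Alice may send, each with probability 1/2
rho = np.array([[1, 0], [0, 0]], dtype=complex)            # |0><0|
sigma = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|

# Optimal measurement: project onto the positive eigenspace of rho - sigma
lam, V = np.linalg.eigh(rho - sigma)
Lam = sum(np.outer(V[:, i], V[:, i].conj())
          for i in range(len(lam)) if lam[i] > 0)

# Bob's success probability with this measurement
p_success = (0.5 * np.trace(Lam @ rho).real
             + 0.5 * np.trace((np.eye(2) - Lam) @ sigma).real)

# Compare against (1 + T)/2
T = 0.5 * np.sum(np.abs(lam))  # here T = 1/sqrt(2)
```

For this pair $p_{\max} = \frac{1}{2}(1 + 1/\sqrt{2}) \approx 0.854$, noticeably better than the $\frac{1}{2}$ achievable by guessing blindly.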
The trace distance has the following properties [1]
- It is a metric on the space of density matrices: it is symmetric, satisfies the triangle inequality, and $T(\rho,\sigma)=0$ if and only if $\rho=\sigma$.
- It is preserved under unitary transformations: $T(U\rho U^\dagger, U\sigma U^\dagger) = T(\rho,\sigma)$.
- It is contractive under trace-preserving, completely positive maps $\Phi$: $T(\Phi(\rho),\Phi(\sigma)) \le T(\rho,\sigma)$.
- It is jointly convex: $T\left(\sum_i p_i\rho_i, \sum_i p_i\sigma_i\right) \le \sum_i p_i\, T(\rho_i,\sigma_i)$.
For qubits, the trace distance is equal to half the Euclidean distance in the Bloch representation.
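This Bloch-sphere relation is easy to verify directly: writing $\rho = \frac{1}{2}(I + \vec r \cdot \vec\sigma)$, the difference of two qubit states has eigenvalues $\pm\frac{1}{2}|\vec r_1 - \vec r_2|$. A small check, assuming NumPy:

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def qubit_state(r):
    """Density matrix with Bloch vector r = (rx, ry, rz), |r| <= 1."""
    return 0.5 * (I2 + r[0] * X + r[1] * Y + r[2] * Z)

r1 = np.array([0.3, 0.4, 0.2])
r2 = np.array([-0.1, 0.5, 0.6])
rho, sigma = qubit_state(r1), qubit_state(r2)

T = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))
half_euclid = 0.5 * np.linalg.norm(r1 - r2)
# T and half_euclid agree
```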
The fidelity of two quantum states is related to the trace distance by the inequalities
$$1 - \sqrt{F(\rho,\sigma)} \le T(\rho,\sigma) \le \sqrt{1 - F(\rho,\sigma)}.$$
The upper bound inequality becomes an equality when $\rho$ and $\sigma$ are pure states. [Note that the definition for fidelity used here is the square of that used in Nielsen and Chuang.]
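These Fuchs–van de Graaf inequalities can be checked on random states. The sketch below (NumPy assumed; `sqrtm_psd` and `fidelity` are helpers defined here for the example) uses the squared-fidelity convention stated above:

```python
import numpy as np

def sqrtm_psd(M):
    """Matrix square root of a positive semidefinite Hermitian matrix."""
    lam, V = np.linalg.eigh(M)
    lam = np.clip(lam, 0, None)  # clip tiny negative eigenvalues from roundoff
    return (V * np.sqrt(lam)) @ V.conj().T

def fidelity(rho, sigma):
    """F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2  (squared convention)."""
    s = sqrtm_psd(rho)
    return np.trace(sqrtm_psd(s @ sigma @ s)).real ** 2

def trace_distance(rho, sigma):
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = A @ A.conj().T
rho /= np.trace(rho)
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
sigma = B @ B.conj().T
sigma /= np.trace(sigma)

F = fidelity(rho, sigma)
T = trace_distance(rho, sigma)
# 1 - sqrt(F) <= T <= sqrt(1 - F) holds for any pair of states
```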
The trace distance is a generalization of the total variation distance, and for two commuting density matrices it has the same value as the total variation distance of the two corresponding probability distributions.
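For commuting states this reduction is immediate, since both matrices are diagonal in a common basis and the eigenvalues of their difference are just the differences of the probabilities. A one-line check, assuming NumPy:

```python
import numpy as np

# Two commuting (diagonal) density matrices: classical probability distributions
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.1, 0.6, 0.3])
rho, sigma = np.diag(p), np.diag(q)

T = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))
tvd = 0.5 * np.sum(np.abs(p - q))  # classical total variation distance
# T == tvd == 0.4 for this pair
```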