A Tsirelson bound is an upper limit on quantum mechanical correlations between distant events. Given that quantum mechanics violates Bell inequalities (i.e., it cannot be described by a local hidden-variable theory), a natural question to ask is how large the violation can be. The answer is precisely the Tsirelson bound for the particular Bell inequality in question. In general, this bound is lower than the bound that would be obtained if more general theories, only constrained by "no-signalling" (i.e., that they do not permit communication faster than light), were considered, and much research has been dedicated to the question of why this is the case.
The Tsirelson bounds are named after Boris S. Tsirelson (or Cirel'son, in a different transliteration), the author of the article [1] in which the first one was derived.
The first Tsirelson bound was derived as an upper bound on the correlations measured in the CHSH inequality. It states that if we have four (Hermitian) dichotomic observables $A_0$, $A_1$, $B_0$, $B_1$ (i.e., two observables for Alice and two for Bob) with outcomes $+1, -1$ such that $[A_i, B_j] = 0$ for all $i, j$, then

$$\langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle \le 2\sqrt{2}.$$
For comparison, in the classical case (or local realistic case) the upper bound is 2, whereas if any arbitrary assignment of $\pm 1$ to the four correlators is allowed, it is 4. The Tsirelson bound is attained already if Alice and Bob each make measurements on a qubit, the simplest non-trivial quantum system.
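As an illustration, the following minimal numpy sketch (not part of the original article; the variable names are chosen only for this example) evaluates the CHSH expression on the maximally entangled two-qubit state with the standard optimal measurement choices and recovers the value $2\sqrt{2}$:

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Standard optimal qubit observables: Alice measures Z and X,
# Bob measures (Z + X)/sqrt(2) and (Z - X)/sqrt(2).
A0, A1 = Z, X
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)

# Maximally entangled state |phi+> = (|00> + |11>)/sqrt(2)
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def corr(A, B, psi):
    """Correlation <psi| A (x) B |psi>."""
    return np.real(psi.conj() @ np.kron(A, B) @ psi)

chsh = corr(A0, B0, phi) + corr(A0, B1, phi) + corr(A1, B0, phi) - corr(A1, B1, phi)
print(chsh, 2 * np.sqrt(2))  # both ~ 2.828
```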
Several proofs of this bound exist, but perhaps the most enlightening one is based on the Khalfin–Tsirelson–Landau identity. If we define an observable

$$\mathcal{C} = A_0 \otimes B_0 + A_0 \otimes B_1 + A_1 \otimes B_0 - A_1 \otimes B_1$$

and $A_i^2 = B_j^2 = \mathbb{I}$, i.e., if the observables' outcomes are $\pm 1$, then

$$\mathcal{C}^2 = 4\,\mathbb{I} - [A_0, A_1] \otimes [B_0, B_1].$$

If $[A_0, A_1] = 0$ or $[B_0, B_1] = 0$, which can be regarded as the classical case, it already follows that $\langle \mathcal{C} \rangle \le 2$. In the quantum case, we need only notice that $\|[A_0, A_1] \otimes [B_0, B_1]\| \le 4$, so that $\|\mathcal{C}^2\| \le 8$, and the Tsirelson bound $\langle \mathcal{C} \rangle \le 2\sqrt{2}$ follows.
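A quick numerical sanity check of this identity is sketched below (illustrative only; the dichotomic observables are generated at random rather than taken from any particular experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_dichotomic(dim):
    """Random Hermitian observable with eigenvalues +/-1, so that A^2 = I."""
    H = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    Q, _ = np.linalg.qr(H)                        # random unitary
    signs = np.diag(rng.choice([1.0, -1.0], size=dim))
    return Q @ signs @ Q.conj().T                 # Q diag(+/-1) Q^dagger

d = 3
A0, A1 = random_dichotomic(d), random_dichotomic(d)
B0, B1 = random_dichotomic(d), random_dichotomic(d)

C = np.kron(A0, B0) + np.kron(A0, B1) + np.kron(A1, B0) - np.kron(A1, B1)

comm_A = A0 @ A1 - A1 @ A0
comm_B = B0 @ B1 - B1 @ B0
rhs = 4 * np.eye(d * d) - np.kron(comm_A, comm_B)

print(np.allclose(C @ C, rhs))                        # True: the KTL identity
print(np.linalg.norm(C, 2) <= 2 * np.sqrt(2) + 1e-9)  # True: Tsirelson's bound
```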
Tsirelson also showed that for any bipartite full-correlation Bell inequality with $m$ inputs for Alice and $n$ inputs for Bob, the ratio between the Tsirelson bound and the local bound is at most $K_G(d)$, where $K_G(d)$ is the Grothendieck constant of order $d$ and the order $d$ needed depends only on the numbers of inputs $m$ and $n$. [2] Note that since $K_G(2) = \sqrt{2}$, this bound implies the above result about the CHSH inequality.
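For the CHSH inequality ($m = n = 2$), the ratio indeed saturates the order-2 constant:

$$\frac{2\sqrt{2}}{2} = \sqrt{2} = K_G(2).$$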
In general, obtaining a Tsirelson bound for a given Bell inequality is a hard problem that has to be solved on a case-by-case basis. It is not even known to be decidable. The best known computational method for upper-bounding it is a convergent hierarchy of semidefinite programs, the NPA hierarchy, that in general does not halt. [3] [4] A minimal example of its lowest level is sketched after the list below. The exact values are known for a few more Bell inequalities:
For the Braunstein–Caves inequalities, in which each party has $n$ settings, we have that $\langle \mathrm{BC}_n \rangle \le 2n\cos\!\left(\frac{\pi}{2n}\right)$.
For the WWŻB inequalities the Tsirelson bound is $\langle \text{WWŻB} \rangle \le 2^{(n-1)/2}$, where $n$ is the number of parties.
For the $I_{3322}$ inequality [5] the Tsirelson bound is not known exactly, but concrete realisations give a lower bound of 0.250875384514, [6] and the NPA hierarchy gives an upper bound of 0.2508753845139766. [7] It is conjectured that only infinite-dimensional quantum states can reach the Tsirelson bound.
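As referenced above, here is a minimal sketch of the lowest level of the NPA hierarchy applied to the CHSH inequality, assuming cvxpy with an SDP-capable solver (e.g. SCS) is installed; for CHSH this first level already returns the Tsirelson bound $2\sqrt{2}$, although for other Bell inequalities higher levels generally give strictly better bounds:

```python
import numpy as np
import cvxpy as cp

# Level-1 NPA moment matrix, indexed by the monomials {1, A0, A1, B0, B1}.
n = 5
G = cp.Variable((n, n), symmetric=True)

constraints = [G >> 0]                             # moment matrix is PSD
constraints += [G[i, i] == 1 for i in range(n)]    # A_i^2 = B_j^2 = identity

# Index convention: 1 -> 0, A0 -> 1, A1 -> 2, B0 -> 3, B1 -> 4.
chsh = G[1, 3] + G[1, 4] + G[2, 3] - G[2, 4]

prob = cp.Problem(cp.Maximize(chsh), constraints)
prob.solve()
print(prob.value, 2 * np.sqrt(2))  # ~2.828 for both
```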
Significant research has been dedicated to finding a physical principle that explains why quantum correlations go only up to the Tsirelson bound and nothing more. Three such principles have been found: no-advantage for non-local computation, [8] information causality [9] and macroscopic locality. [10] That is to say, if one could achieve a CHSH correlation exceeding Tsirelson's bound, all such principles would be violated. Tsirelson's bound also follows if the Bell experiment admits a strongly positive quantal measure. [11]
There are two different ways of defining the Tsirelson bound of a Bell expression: one demands that the measurements have a tensor product structure, the other demands only that they commute. Tsirelson's problem is the question of whether these two definitions are equivalent. More formally, let
$$B = \sum_{a,b,x,y} \mu_{abxy}\, p(ab|xy)$$

be a Bell expression, where $p(ab|xy)$ is the probability of obtaining outcomes $a, b$ with the settings $x, y$. The tensor product Tsirelson bound $T_t$ is then the supremum of the value attained in this Bell expression by making measurements $A^a_x : \mathcal{H}_A \to \mathcal{H}_A$ and $B^b_y : \mathcal{H}_B \to \mathcal{H}_B$ on a quantum state $|\psi\rangle \in \mathcal{H}_A \otimes \mathcal{H}_B$:

$$T_t = \sup_{|\psi\rangle, A^a_x, B^b_y} \sum_{a,b,x,y} \mu_{abxy}\, \langle \psi | A^a_x \otimes B^b_y | \psi \rangle.$$
The commuting Tsirelson bound $T_c$ is the supremum of the value attained in this Bell expression by making measurements $A^a_x$ and $B^b_y$ such that $[A^a_x, B^b_y] = 0$ on a quantum state $|\psi\rangle$:

$$T_c = \sup_{|\psi\rangle, A^a_x, B^b_y} \sum_{a,b,x,y} \mu_{abxy}\, \langle \psi | A^a_x B^b_y | \psi \rangle.$$
Since tensor product algebras in particular commute, $T_t \le T_c$. In finite dimensions commuting algebras are always isomorphic to (direct sums of) tensor product algebras, [12] so it is only in infinite dimensions that it is possible to have $T_t \neq T_c$. Tsirelson's problem is the question of whether $T_t = T_c$ for all Bell expressions.
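The easy direction can be checked directly: tensor-product observables always commute, so any tensor-product strategy is also a commuting strategy and yields the same Bell value. A small numpy sketch (illustrative only, with random local observables):

```python
import numpy as np

rng = np.random.default_rng(1)
dA, dB = 2, 3

# Random Hermitian observables on Alice's and Bob's local spaces.
A = rng.normal(size=(dA, dA)); A = A + A.T
B = rng.normal(size=(dB, dB)); B = B + B.T

# Embed them in the joint space H_A (x) H_B.
A_full = np.kron(A, np.eye(dB))
B_full = np.kron(np.eye(dA), B)

# Tensor-product observables commute ...
print(np.allclose(A_full @ B_full, B_full @ A_full))   # True

# ... and give the same expectation value either way, hence T_t <= T_c.
psi = rng.normal(size=dA * dB); psi /= np.linalg.norm(psi)
print(np.isclose(psi @ np.kron(A, B) @ psi, psi @ (A_full @ B_full) @ psi))  # True
```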
This question was first considered by Boris Tsirelson in 1993, when he asserted without proof that $T_t = T_c$. [13] Upon being asked for a proof by Antonio Acín in 2006, he realized that the one he had in mind did not work, and issued the question as an open problem. [14] Together with Miguel Navascués and Stefano Pironio, Antonio Acín had developed a hierarchy of semidefinite programs, the NPA hierarchy, that converges to the commuting Tsirelson bound $T_c$ from above, [4] and wanted to know whether it also converges to the tensor product Tsirelson bound $T_t$, the most physically relevant one.
Since one can produce a converging sequence of approximations to $T_t$ from below by considering finite-dimensional states and observables, if $T_t = T_c$, then this procedure can be combined with the NPA hierarchy to produce a halting algorithm to compute the Tsirelson bound, making it a computable number (note that in isolation neither procedure halts in general). Conversely, if $T_t$ is not computable, then $T_t \neq T_c$. In January 2020, Ji, Natarajan, Vidick, Wright, and Yuen claimed to have proven that $T_t$ is not computable, thus solving Tsirelson's problem in the negative; [15] Tsirelson's problem has been shown to be equivalent to Connes' embedding problem, [16] so the same proof also implies that the Connes embedding conjecture is false. [17]
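The sandwiching argument can be phrased as a purely schematic algorithm. In the sketch below (not from the original article), `lower_bound` and `upper_bound` are hypothetical callables standing in for the two converging sequences, e.g. a see-saw optimisation over $d$-dimensional strategies and the level-$k$ NPA relaxation; the toy usage feeds in dummy sequences:

```python
import numpy as np

def tsirelson_bound(lower_bound, upper_bound, eps=1e-6):
    """Schematic halting algorithm: sandwich the Tsirelson bound between
    a lower-bounding sequence (converging to T_t from below) and an
    upper-bounding sequence (converging to T_c from above).
    It halts on a given Bell expression whenever T_t = T_c for it."""
    level = 1
    while True:
        lo, up = lower_bound(level), upper_bound(level)
        if up - lo < eps:          # the gap can close only if T_t = T_c
            return 0.5 * (lo + up)
        level += 1

# Toy usage with dummy sequences that both converge to 2*sqrt(2):
print(tsirelson_bound(lambda d: 2 * np.sqrt(2) - 1 / d,
                      lambda k: 2 * np.sqrt(2) + 1 / k,
                      eps=1e-3))
```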