Bell's theorem is a term encompassing a number of closely related results in physics, all of which determine that quantum mechanics is incompatible with local hidden-variable theories, given some basic assumptions about the nature of measurement. "Local" here refers to the principle of locality, the idea that a particle can only be influenced by its immediate surroundings, and that interactions mediated by physical fields cannot propagate faster than the speed of light. "Hidden variables" are supposed properties of quantum particles that are not included in quantum theory but nevertheless affect the outcome of experiments. In the words of physicist John Stewart Bell, for whom this family of results is named, "If [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local." [1]
The first such result was introduced by Bell in 1964, building upon the Einstein–Podolsky–Rosen paradox, which had called attention to the phenomenon of quantum entanglement. Bell deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. Such a constraint would later be named a Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Multiple variations on Bell's theorem were put forward in the following years, using different assumptions and obtaining different Bell (or "Bell-type") inequalities.
The first rudimentary experiment designed to test Bell's theorem was performed in 1972 by John Clauser and Stuart Freedman. [2] More advanced experiments, known collectively as Bell tests, have been performed many times since. Often, these experiments have had the goal of "closing loopholes", that is, ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. Bell tests have consistently found that physical systems obey quantum mechanics and violate Bell inequalities, which is to say that the results of these experiments are incompatible with local hidden-variable theories. [3] [4]
The exact nature of the assumptions required to prove a Bell-type constraint on correlations has been debated by physicists and by philosophers. While the significance of Bell's theorem is not in doubt, different interpretations of quantum mechanics disagree about what exactly it implies.
There are many variations on the basic idea, some employing stronger mathematical assumptions than others. [5] Significantly, Bell-type theorems do not refer to any particular theory of local hidden variables, but instead show that quantum physics violates general assumptions behind classical pictures of nature. The original theorem proved by Bell in 1964 is not the most amenable to experiment, and it is convenient to introduce the genre of Bell-type inequalities with a later example. [6]
Hypothetical characters Alice and Bob stand in widely separated locations. Their colleague Victor prepares a pair of particles and sends one to Alice and the other to Bob. When Alice receives her particle, she chooses to perform one of two possible measurements (perhaps by flipping a coin to decide which). Denote these measurements by $A_0$ and $A_1$. Both $A_0$ and $A_1$ are binary measurements: the result of $A_0$ is either $+1$ or $-1$, and likewise for $A_1$. When Bob receives his particle, he chooses one of two measurements, $B_0$ and $B_1$, which are also both binary.
Suppose that each measurement reveals a property that the particle already possessed. For instance, if Alice chooses to measure $A_0$ and obtains the result $+1$, then the particle she received carried a value of $+1$ for a property $a_0$. [note 1] Consider the combination
$$a_0(b_0 + b_1) + a_1(b_0 - b_1).$$
Because both $b_0$ and $b_1$ take the values $\pm 1$, then either $b_0 = b_1$ or $b_0 = -b_1$. In the former case, the quantity $b_0 - b_1$ must equal 0, while in the latter case, $b_0 + b_1 = 0$. So, one of the terms on the right-hand side of the above expression will vanish, and the other will equal $\pm 2$. Consequently, if the experiment is repeated over many trials, with Victor preparing new pairs of particles, the absolute value of the average of the combination $a_0 b_0 + a_0 b_1 + a_1 b_0 - a_1 b_1$ across all the trials will be less than or equal to 2. No single trial can measure this quantity, because Alice and Bob can only choose one measurement each, but on the assumption that the underlying properties exist, the average value of the sum is just the sum of the averages for each term. Using angle brackets to denote averages,
$$\left| \langle a_0 b_0 \rangle + \langle a_0 b_1 \rangle + \langle a_1 b_0 \rangle - \langle a_1 b_1 \rangle \right| \leq 2.$$
This is a Bell inequality, specifically, the CHSH inequality. [6] : 115 Its derivation here depends upon two assumptions: first, that the underlying physical properties $a_0$, $a_1$, $b_0$, and $b_1$ exist independently of being observed or measured (sometimes called the assumption of realism); and second, that Alice's choice of action cannot influence Bob's result or vice versa (often called the assumption of locality). [6] : 117
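The bound can be checked by brute force. The following sketch (a minimal illustration in Python, with variable names chosen here for convenience rather than taken from any source) enumerates every possible assignment of pre-existing values and confirms that the combination never exceeds 2 in magnitude:

```python
from itertools import product

worst = 0
for a0, a1, b0, b1 in product((-1, +1), repeat=4):
    combination = a0 * (b0 + b1) + a1 * (b0 - b1)
    worst = max(worst, abs(combination))

print(worst)  # prints 2: no assignment of pre-existing values exceeds the CHSH bound
```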
Quantum mechanics can violate the CHSH inequality, as follows. Victor prepares a pair of qubits which he describes by the Bell state
$$|\psi\rangle = \frac{|0\rangle \otimes |1\rangle - |1\rangle \otimes |0\rangle}{\sqrt{2}},$$
where $|0\rangle$ and $|1\rangle$ are the eigenstates of one of the Pauli matrices,
$$\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$
Victor then passes the first qubit to Alice and the second to Bob. Alice and Bob's choices of possible measurements are also defined in terms of the Pauli matrices. Alice measures either of the two observables $\sigma_z$ and $\sigma_x$:
$$A_0 = \sigma_z, \quad A_1 = \sigma_x,$$
and Bob measures either of the two observables
$$B_0 = -\frac{\sigma_z + \sigma_x}{\sqrt{2}}, \quad B_1 = \frac{\sigma_x - \sigma_z}{\sqrt{2}}.$$
Victor can calculate the quantum expectation values for pairs of these observables using the Born rule:
$$\langle A_0 \otimes B_0 \rangle = \langle A_0 \otimes B_1 \rangle = \langle A_1 \otimes B_0 \rangle = \frac{1}{\sqrt{2}}, \quad \langle A_1 \otimes B_1 \rangle = -\frac{1}{\sqrt{2}}.$$
While only one of these four measurements can be made in a single trial of the experiment, the sum
$$\langle A_0 \otimes B_0 \rangle + \langle A_0 \otimes B_1 \rangle + \langle A_1 \otimes B_0 \rangle - \langle A_1 \otimes B_1 \rangle = 2\sqrt{2}$$
gives the sum of the average values that Victor expects to find across multiple trials. This value exceeds the classical upper bound of 2 that was deduced from the hypothesis of local hidden variables. [6] : 116 The value $2\sqrt{2}$ is in fact the largest that quantum physics permits for this combination of expectation values, making it a Tsirelson bound. [9] : 140
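These expectation values can be verified numerically. The sketch below (a minimal illustration using NumPy, with the state and observables as written above) evaluates each correlator for the Bell state and sums them:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli sigma_x
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli sigma_z

# Bell state (|01> - |10>)/sqrt(2) in the basis {|00>, |01>, |10>, |11>}
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

A0, A1 = sz, sx
B0 = -(sz + sx) / np.sqrt(2)
B1 = (sx - sz) / np.sqrt(2)

def corr(A, B):
    """Born-rule expectation value <psi| A (x) B |psi>."""
    return np.real(psi.conj() @ np.kron(A, B) @ psi)

S = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
print(S)  # approximately 2.828 = 2*sqrt(2), exceeding the classical bound of 2
```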
The CHSH inequality can also be thought of as a game in which Alice and Bob try to coordinate their actions. [10] [11] Victor prepares two bits, $x$ and $y$, independently and at random. He sends bit $x$ to Alice and bit $y$ to Bob. Alice and Bob win if they return answer bits $a$ and $b$ to Victor satisfying
$$a \oplus b = x \wedge y.$$
Or, equivalently, Alice and Bob win if the logical AND of $x$ and $y$ is the logical XOR of $a$ and $b$. Alice and Bob can agree upon any strategy they desire before the game, but they cannot communicate once the game begins. In any theory based on local hidden variables, Alice and Bob's probability of winning is no greater than $3/4$, regardless of what strategy they agree upon beforehand. However, if they share an entangled quantum state, their probability of winning can be as large as
$$\frac{2 + \sqrt{2}}{4} \approx 0.85.$$
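Both probabilities can be illustrated with a short calculation. The following sketch (illustrative Python; it assumes that deterministic strategies suffice for the classical optimum, which holds because any mixed strategy is a convex combination of deterministic ones) searches every local strategy for the CHSH game and prints the quantum value for comparison:

```python
from itertools import product
import math

best = 0
# A deterministic local strategy assigns an output bit to each input bit:
# fa[x] is Alice's answer to input x, fb[y] is Bob's answer to input y.
for fa in product((0, 1), repeat=2):
    for fb in product((0, 1), repeat=2):
        wins = sum((fa[x] ^ fb[y]) == (x & y) for x in (0, 1) for y in (0, 1))
        best = max(best, wins)

print(best / 4)                # 0.75: the local hidden-variable limit
print((2 + math.sqrt(2)) / 4)  # about 0.854: the quantum winning probability
```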
Bell's 1964 paper points out that under restricted conditions, local hidden-variable models can reproduce the predictions of quantum mechanics. He then demonstrates that this cannot hold true in general. [12] Bell considers a refinement by David Bohm of the Einstein–Podolsky–Rosen (EPR) thought experiment. In this scenario, a pair of particles are formed together in such a way that they are described by a spin singlet state (which is an example of an entangled state). The particles then move apart in opposite directions. Each particle is measured by a Stern–Gerlach device, a measuring instrument that can be oriented in different directions and that reports one of two possible outcomes, representable by $+1$ and $-1$. The configuration of each measuring instrument is represented by a unit vector, and the quantum-mechanical prediction for the correlation between two detectors with settings $\vec{a}$ and $\vec{b}$ is
$$P(\vec{a}, \vec{b}) = -\vec{a} \cdot \vec{b}.$$
In particular, if the orientation of the two detectors is the same ($\vec{a} = \vec{b}$), then the outcome of one measurement is certain to be the negative of the outcome of the other, giving $P(\vec{a}, \vec{a}) = -1$. And if the orientations of the two detectors are orthogonal ($\vec{a} \cdot \vec{b} = 0$), then the outcomes are uncorrelated, and $P(\vec{a}, \vec{b}) = 0$. Bell proves by example that these special cases can be explained in terms of hidden variables, then proceeds to show that the full range of possibilities involving intermediate angles cannot.
Bell posited that a local hidden-variable model for these correlations would explain them in terms of an integral over the possible values of some hidden parameter $\lambda$:
$$P(\vec{a}, \vec{b}) = \int d\lambda\, \rho(\lambda)\, A(\vec{a}, \lambda)\, B(\vec{b}, \lambda),$$
where $\rho(\lambda)$ is a probability density function. The two functions $A(\vec{a}, \lambda)$ and $B(\vec{b}, \lambda)$ provide the responses of the two detectors given the orientation vectors and the hidden variable:
$$A(\vec{a}, \lambda) = \pm 1, \quad B(\vec{b}, \lambda) = \pm 1.$$
Crucially, the outcome of detector $A$ does not depend upon $\vec{b}$, and likewise the outcome of $B$ does not depend upon $\vec{a}$, because the two detectors are physically separated. Now we suppose that the experimenter has a choice of settings for the second detector: it can be set either to $\vec{b}$ or to $\vec{c}$. Bell proves that the difference in correlation between these two choices of detector setting must satisfy the inequality
$$\left| P(\vec{a}, \vec{b}) - P(\vec{a}, \vec{c}) \right| \leq 1 + P(\vec{b}, \vec{c}).$$
However, it is easy to find situations where quantum mechanics violates the Bell inequality. [13] : 425–426 For example, let the vectors $\vec{a}$ and $\vec{b}$ be orthogonal, and let $\vec{c}$ lie in their plane at a 45° angle from both of them. Then $P(\vec{a}, \vec{b}) = 0$, while
$$P(\vec{a}, \vec{c}) = P(\vec{b}, \vec{c}) = -\frac{1}{\sqrt{2}},$$
so that the left-hand side of the inequality equals $\tfrac{1}{\sqrt{2}} \approx 0.707$, whereas the right-hand side equals $1 - \tfrac{1}{\sqrt{2}} \approx 0.293$. Therefore, there is no local hidden-variable model that can reproduce the predictions of quantum mechanics for all choices of $\vec{a}$, $\vec{b}$, and $\vec{c}$. Experimental results contradict the classical curves and match the curve predicted by quantum mechanics as long as experimental shortcomings are accounted for. [5]
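The violation at these angles can be checked directly. The sketch below (an illustrative Python calculation with the vectors chosen as in the example above) evaluates both sides of the inequality:

```python
import numpy as np

a = np.array([1.0, 0.0])               # first detector setting
b = np.array([0.0, 1.0])               # orthogonal to a
c = np.array([1.0, 1.0]) / np.sqrt(2)  # at 45 degrees from both a and b

def P(u, v):
    """Quantum prediction for the singlet-state correlation."""
    return -np.dot(u, v)

lhs = abs(P(a, b) - P(a, c))  # 1/sqrt(2), about 0.707
rhs = 1 + P(b, c)             # 1 - 1/sqrt(2), about 0.293
print(lhs, rhs, lhs <= rhs)   # prints False: the inequality is violated
```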
Bell's 1964 theorem requires the possibility of perfect anti-correlations: the ability to make a probability-1 prediction about the result from the second detector, knowing the result from the first. This is related to the "EPR criterion of reality", a concept introduced in the 1935 paper by Einstein, Podolsky, and Rosen. This paper posits: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." [14]
Daniel Greenberger, Michael A. Horne, and Anton Zeilinger presented a four-particle thought experiment in 1990, which David Mermin then simplified to use only three particles. [15] [16] In this thought experiment, Victor generates a set of three spin-1/2 particles described by the quantum state
$$|\mathrm{GHZ}\rangle = \frac{|000\rangle - |111\rangle}{\sqrt{2}},$$
where as above, $|0\rangle$ and $|1\rangle$ are the eigenvectors of the Pauli matrix $\sigma_z$. Victor then sends a particle each to Alice, Bob, and Charlie, who wait at widely separated locations. Alice measures either $\sigma_x$ or $\sigma_y$ on her particle, and so do Bob and Charlie. The result of each measurement is either $+1$ or $-1$. Applying the Born rule to the three-qubit state $|\mathrm{GHZ}\rangle$, Victor predicts that whenever the three measurements include one $\sigma_x$ and two $\sigma_y$'s, the product of the outcomes will always be $+1$. This follows because $|\mathrm{GHZ}\rangle$ is an eigenvector of $\sigma_x \otimes \sigma_y \otimes \sigma_y$ with eigenvalue $+1$, and likewise for $\sigma_y \otimes \sigma_x \otimes \sigma_y$ and $\sigma_y \otimes \sigma_y \otimes \sigma_x$. Therefore, knowing Alice's result for a $\sigma_y$ measurement and Bob's result for a $\sigma_y$ measurement, Victor can predict with probability 1 what result Charlie will return for a $\sigma_x$ measurement. According to the EPR criterion of reality, there would be an "element of reality" corresponding to the outcome of a $\sigma_x$ measurement upon Charlie's qubit. Indeed, this same logic applies to both measurements and all three qubits. Per the EPR criterion of reality, then, each particle contains an "instruction set" that determines the outcome of a $\sigma_x$ or $\sigma_y$ measurement upon it. The set of all three particles would then be described by the instruction set
$$(a_x, a_y, b_x, b_y, c_x, c_y),$$
with each entry being either $+1$ or $-1$, and each $\sigma_x$ or $\sigma_y$ measurement simply returning the appropriate value.
If Alice, Bob, and Charlie all perform the $\sigma_x$ measurement, then the product of their results would be $a_x b_x c_x$. This value can be deduced from
$$(a_x b_y c_y)(a_y b_x c_y)(a_y b_y c_x) = a_x b_x c_x\, a_y^2\, b_y^2\, c_y^2 = a_x b_x c_x,$$
because the square of either $+1$ or $-1$ is $1$. Each factor in parentheses equals $+1$, so
$$a_x b_x c_x = +1,$$
and the product of Alice, Bob, and Charlie's results will be $+1$ with probability unity. But this is inconsistent with quantum physics: Victor can predict using the state $|\mathrm{GHZ}\rangle$ that the measurement $\sigma_x \otimes \sigma_x \otimes \sigma_x$ will instead yield $-1$ with probability unity.
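The quantum predictions quoted above can be verified numerically. The following sketch (an illustrative NumPy calculation for the state $|\mathrm{GHZ}\rangle$ as written above) confirms the eigenvalues that drive the contradiction:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# GHZ state (|000> - |111>)/sqrt(2)
ghz = np.zeros(8, dtype=complex)
ghz[0], ghz[7] = 1 / np.sqrt(2), -1 / np.sqrt(2)

def triple(A, B, C):
    """Tensor product A (x) B (x) C acting on the three qubits."""
    return np.kron(np.kron(A, B), C)

for ops in [(sx, sy, sy), (sy, sx, sy), (sy, sy, sx)]:
    print(np.allclose(triple(*ops) @ ghz, ghz))       # True: eigenvalue +1

print(np.allclose(triple(sx, sx, sx) @ ghz, -ghz))    # True: eigenvalue -1
```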
This thought experiment can also be recast as a traditional Bell inequality or, equivalently, as a nonlocal game in the same spirit as the CHSH game. [17] In it, Alice, Bob, and Charlie receive bits $x$, $y$, and $z$ from Victor, promised to always have an even number of ones, that is, $x \oplus y \oplus z = 0$, and send him back bits $a$, $b$, and $c$. They win the game if $a$, $b$, and $c$ have an odd number of ones for all inputs except $x = y = z = 0$, when they need to have an even number of ones. That is, they win the game if and only if
$$a \oplus b \oplus c = x \lor y \lor z.$$
With local hidden variables the highest probability of victory they can have is 3/4, whereas using the quantum strategy above they win it with certainty. This is an example of quantum pseudo-telepathy.
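The classical bound of 3/4 can be confirmed by exhaustive search. The sketch below (illustrative Python, again assuming that deterministic local strategies suffice for the optimum) checks every strategy against the four allowed inputs:

```python
from itertools import product

# The four allowed inputs (x, y, z), each with an even number of ones
inputs = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

best = 0
# Each player's deterministic strategy maps her input bit to an output bit.
for fa, fb, fc in product(product((0, 1), repeat=2), repeat=3):
    wins = sum((fa[x] ^ fb[y] ^ fc[z]) == (x | y | z) for x, y, z in inputs)
    best = max(best, wins)

print(best / len(inputs))  # 0.75: at most three of the four inputs can be satisfied
```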
In quantum theory, orthonormal bases for a Hilbert space represent measurements that can be performed upon a system having that Hilbert space. Each vector in a basis represents a possible outcome of that measurement. [note 2] Suppose that a hidden variable $\lambda$ exists, so that knowing the value of $\lambda$ would imply certainty about the outcome of any measurement. Given a value of $\lambda$, each measurement outcome – that is, each vector in the Hilbert space – is either impossible or guaranteed. A Kochen–Specker configuration is a finite set of vectors made of multiple interlocking bases, with the property that a vector in it will always be impossible when considered as belonging to one basis and guaranteed when taken as belonging to another. In other words, a Kochen–Specker configuration is an "uncolorable set" that demonstrates the inconsistency of assuming that a hidden variable controls the measurement outcomes. [22] : 196–201
The Kochen–Specker type of argument, using configurations of interlocking bases, can be combined with the idea of measuring entangled pairs that underlies Bell-type inequalities. This was noted beginning in the 1970s by Kochen, [23] Heywood and Redhead, [24] Stairs, [25] and Brown and Svetlichny. [26] As EPR pointed out, obtaining a measurement outcome on one half of an entangled pair implies certainty about the outcome of a corresponding measurement on the other half. The "EPR criterion of reality" posits that because the second half of the pair was not disturbed, that certainty must be due to a physical property belonging to it. [27] In other words, by this criterion, a hidden variable must exist within the second, as-yet unmeasured half of the pair. No contradiction arises if only one measurement on the first half is considered. However, if the observer has a choice of multiple possible measurements, and the vectors defining those measurements form a Kochen–Specker configuration, then some outcome on the second half will be simultaneously impossible and guaranteed.
This type of argument gained attention when an instance of it was advanced by John Conway and Simon Kochen under the name of the free will theorem. [28] [29] [30] The Conway–Kochen theorem uses a pair of entangled qutrits and a Kochen–Specker configuration discovered by Asher Peres. [31]
As Bell pointed out, some predictions of quantum mechanics can be replicated in local hidden-variable models, including special cases of correlations produced from entanglement. This topic has been studied systematically in the years since Bell's theorem. In 1989, Reinhard Werner introduced what are now called Werner states, joint quantum states for a pair of systems that yield EPR-type correlations but also admit a hidden-variable model. [32] Werner states are bipartite quantum states that are invariant under unitaries of symmetric tensor-product form; that is, states $\rho$ satisfying
$$\rho = (U \otimes U)\, \rho\, (U \otimes U)^\dagger$$
for all unitary operators $U$. In 2004, Robert Spekkens introduced a toy model that starts with the premise of local, discretized degrees of freedom and then imposes a "knowledge balance principle" that restricts how much an observer can know about those degrees of freedom, thereby making them into hidden variables. The allowed states of knowledge ("epistemic states") about the underlying variables ("ontic states") mimic some features of quantum states. Correlations in the toy model can emulate some aspects of entanglement, like monogamy, but by construction, the toy model can never violate a Bell inequality. [33] [34]
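The defining invariance can be illustrated numerically. The following sketch (an illustrative NumPy check for the two-qubit case, with an arbitrarily chosen unitary and mixing parameter) builds a Werner state as a mixture of the singlet projector and the maximally mixed state and verifies that conjugation by $U \otimes U$ leaves it unchanged:

```python
import numpy as np

# Singlet state (|01> - |10>)/sqrt(2) and its projector
psi_minus = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
singlet = np.outer(psi_minus, psi_minus.conj())

p = 0.3                                        # arbitrary mixing parameter
rho = p * singlet + (1 - p) * np.eye(4) / 4    # a two-qubit Werner state

# An arbitrary single-qubit unitary, built by hand to avoid extra dependencies
theta, phi = 0.7, 1.3
U = np.array([[np.cos(theta), -np.exp(1j * phi) * np.sin(theta)],
              [np.exp(-1j * phi) * np.sin(theta), np.cos(theta)]])

UU = np.kron(U, U)
print(np.allclose(UU @ rho @ UU.conj().T, rho))  # True: invariant under U (x) U
```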
The question of whether quantum mechanics can be "completed" by hidden variables dates to the early years of quantum theory. In his 1932 textbook on quantum mechanics, the Hungarian-born polymath John von Neumann presented what he claimed to be a proof that there could be no "hidden parameters". The validity and definitiveness of von Neumann's proof were questioned by Hans Reichenbach, in more detail by Grete Hermann, and possibly in conversation though not in print by Albert Einstein. [note 3] (Simon Kochen and Ernst Specker rejected von Neumann's key assumption as early as 1961, but did not publish a criticism of it until 1967. [40] )
Einstein argued persistently that quantum mechanics could not be a complete theory. His preferred argument relied on a principle of locality: since a measurement performed on one of two separated systems cannot physically disturb the other, the fact that different choices of measurement on the first system lead to different wave functions being assigned to the second implies that the wave function cannot be a complete description of the second system's physical state.
The EPR thought experiment is similar, also considering two separated systems A and B described by a joint wave function. However, the EPR paper adds the idea later known as the EPR criterion of reality, according to which the ability to predict with probability 1 the outcome of a measurement upon B implies the existence of an "element of reality" within B. [42]
In 1951, David Bohm proposed a variant of the EPR thought experiment in which the measurements have discrete ranges of possible outcomes, unlike the position and momentum measurements considered by EPR. [43] The year before, Chien-Shiung Wu and Irving Shaknov had successfully measured polarizations of photons produced in entangled pairs, thereby making the Bohm version of the EPR thought experiment practically feasible. [44]
By the late 1940s, the mathematician George Mackey had grown interested in the foundations of quantum physics, and in 1957 he drew up a list of postulates that he took to be a precise definition of quantum mechanics. [45] Mackey conjectured that one of the postulates was redundant, and shortly thereafter, Andrew M. Gleason proved that it was indeed deducible from the other postulates. [46] [47] Gleason's theorem provided an argument that a broad class of hidden-variable theories are incompatible with quantum mechanics. [note 4] More specifically, Gleason's theorem rules out hidden-variable models that are "noncontextual". Any hidden-variable model for quantum mechanics must, in order to avoid the implications of Gleason's theorem, involve hidden variables that are not properties belonging to the measured system alone but also dependent upon the external context in which the measurement is made. This type of dependence is often seen as contrived or undesirable; in some settings, it is inconsistent with special relativity. [49] [50] The Kochen–Specker theorem refines this statement by constructing a specific finite subset of rays of Hilbert space on which no probability measure of the kind required by a noncontextual hidden-variable model can be defined. [49] [51]
Tsung-Dao Lee came close to deriving Bell's theorem in 1960. He considered events where two kaons were produced traveling in opposite directions, and came to the conclusion that hidden variables could not explain the correlations that could be obtained in such situations. However, complications arose due to the fact that kaons decay, and he did not go so far as to deduce a Bell-type inequality. [36] : 308
Bell chose to publish his theorem in a comparatively obscure journal because it did not require page charges and in fact paid the authors who published there at the time. Because the journal did not provide free reprints of articles for the authors to distribute, however, Bell had to spend the money he received to buy copies that he could send to other physicists. [52] While the articles printed in the journal listed the publication's name simply as Physics, the covers carried the trilingual version Physics Physique Физика to reflect that it would print articles in English, French, and Russian. [39] : 92–100, 289
Prior to proving his 1964 result, Bell also proved a result equivalent to the Kochen–Specker theorem (hence the latter is sometimes also known as the Bell–Kochen–Specker or Bell–KS theorem). However, publication of this theorem was inadvertently delayed until 1966. [49] [53] In that paper, Bell argued that because an explanation of quantum phenomena in terms of hidden variables would require nonlocality, the EPR paradox "is resolved in the way which Einstein would have liked least." [53]
In 1967, the unusual title Physics Physique Физика caught the attention of John Clauser, who then discovered Bell's paper and began to consider how to perform a Bell test in the laboratory. [54] Clauser and Stuart Freedman would go on to perform a Bell test in 1972. [55] [56] This was only a limited test, because the choice of detector settings was made before the photons had left the source. In 1982, Alain Aspect and collaborators performed the first Bell test to remove this limitation. [57] This began a trend of progressively more stringent Bell tests. The GHZ thought experiment was implemented in practice, using entangled triplets of photons, in 2000. [58] By 2002, testing the CHSH inequality was feasible in undergraduate laboratory courses. [59]
In Bell tests, there may be problems of experimental design or set-up that affect the validity of the experimental findings. These problems are often referred to as "loopholes". The purpose of the experiment is to test whether nature can be described by local hidden-variable theory, which would contradict the predictions of quantum mechanics.
The most prevalent loopholes in real experiments are the detection and locality loopholes. [60] The detection loophole is opened when only a small fraction of the particles (usually photons) is detected in the experiment, making it possible to explain the data with local hidden variables by assuming that the detected particles are an unrepresentative sample. The locality loophole is opened when the detections are not made with a spacelike separation, making it possible for the result of one measurement to influence the other without contradicting relativity. In some experiments there may be additional defects that make local-hidden-variable explanations of Bell test violations possible. [61]
Although both the locality and detection loopholes had been closed in different experiments, a long-standing challenge was to close both simultaneously in the same experiment. This was finally achieved in three experiments in 2015. [62] [63] [64] [65] [66] Regarding these results, Alain Aspect writes that "no experiment ... can be said to be totally loophole-free," but he says the experiments "remove the last doubts that we should renounce" local hidden variables, and refers to examples of remaining loopholes as being "far fetched" and "foreign to the usual way of reasoning in physics." [67]
These efforts to experimentally validate violations of the Bell inequalities would later result in Clauser, Aspect, and Anton Zeilinger being awarded the 2022 Nobel Prize in Physics. [68]
Reactions to Bell's theorem have been many and varied. Maximilian Schlosshauer, Johannes Kofler, and Zeilinger write that Bell inequalities provide "a wonderful example of how we can have a rigorous theoretical result tested by numerous experiments, and yet disagree about the implications." [69]
Copenhagen-type interpretations generally take the violation of Bell inequalities as grounds to reject the assumption often called counterfactual definiteness or "realism", which is not necessarily the same as abandoning realism in a broader philosophical sense. [70] [71] For example, Roland Omnès argues for the rejection of hidden variables and concludes that "quantum mechanics is probably as realistic as any theory of its scope and maturity ever will be". [72] : 531 Likewise, Rudolf Peierls took the message of Bell's theorem to be that, because the premise of locality is physically reasonable, "hidden variables cannot be introduced without abandoning some of the results of quantum mechanics". [73] [74]
This is also the route taken by interpretations that descend from the Copenhagen tradition, such as consistent histories (often advertised as "Copenhagen done right"), [75] : 2839 as well as QBism. [76]
The many-worlds interpretation, also known as the Everett interpretation, is dynamically local, meaning that it does not call for action at a distance, [77] : 17 and deterministic, because it consists of the unitary part of quantum mechanics without collapse. It can generate correlations that violate a Bell inequality because it violates an implicit assumption by Bell that measurements have a single outcome. In fact, Bell's theorem can be proven in the many-worlds framework from the assumption that a measurement has a single outcome. Therefore, a violation of a Bell inequality can be interpreted as a demonstration that measurements have multiple outcomes. [78]
The explanation it provides for the Bell correlations is that when Alice and Bob make their measurements, they split into local branches. From the point of view of each copy of Alice, there are multiple copies of Bob experiencing different results, so Bob cannot have a definite result, and the same is true from the point of view of each copy of Bob. They will obtain a mutually well-defined result only when their future light cones overlap. At this point we can say that the Bell correlation starts existing, but it was produced by a purely local mechanism. Therefore, the violation of a Bell inequality cannot be interpreted as a proof of non-locality. [77] : 28
Most advocates of the hidden-variables idea believe that experiments have ruled out local hidden variables. [note 5] They are ready to give up locality, explaining the violation of Bell's inequality by means of a non-local hidden variable theory, in which the particles exchange information about their states. This is the basis of the Bohm interpretation of quantum mechanics, which requires that all particles in the universe be able to instantaneously exchange information with all others. One challenge for non-local hidden variable theories is to explain why this instantaneous communication can exist at the level of the hidden variables, but it cannot be used to send signals. [81] A 2007 experiment ruled out a large class of non-Bohmian non-local hidden variable theories, though not Bohmian mechanics itself. [82]
The transactional interpretation, which postulates waves traveling both backwards and forwards in time, is likewise non-local. [83]
A necessary assumption to derive Bell's theorem is that the hidden variables are not correlated with the measurement settings. This assumption has been justified on the grounds that the experimenter has "free will" to choose the settings, and that it is necessary to do science in the first place. A (hypothetical) theory where the choice of measurement is necessarily correlated with the system being measured is known as superdeterministic. [60]
A few advocates of deterministic models have not given up on local hidden variables. For example, Gerard 't Hooft has argued that superdeterminism cannot be dismissed. [84]
The Einstein–Podolsky–Rosen (EPR) paradox is a thought experiment proposed by physicists Albert Einstein, Boris Podolsky and Nathan Rosen which argues that the description of physical reality provided by quantum mechanics is incomplete. In a 1935 paper titled "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?", they argued for the existence of "elements of reality" that were not part of quantum theory, and speculated that it should be possible to construct a theory containing these hidden variables. Resolutions of the paradox have important implications for the interpretation of quantum mechanics.
Quantum teleportation is a technique for transferring quantum information from a sender at one location to a receiver some distance away. While teleportation is commonly portrayed in science fiction as a means to transfer physical objects from one location to the next, quantum teleportation only transfers quantum information. The sender does not have to know the particular quantum state being transferred. Moreover, the location of the recipient can be unknown, but to complete the quantum teleportation, classical information needs to be sent from sender to receiver. Because classical information needs to be sent, quantum teleportation cannot occur faster than the speed of light.
Quantum entanglement is the phenomenon of a group of particles being generated, interacting, or sharing spatial proximity in such a way that the quantum state of each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance. The topic of quantum entanglement is at the heart of the disparity between classical and quantum physics: entanglement is a primary feature of quantum mechanics not present in classical mechanics.
The uncertainty principle, also known as Heisenberg's indeterminacy principle, is a fundamental concept in quantum mechanics. It states that there is a limit to the precision with which certain pairs of physical properties, such as position and momentum, can be simultaneously known. In other words, the more accurately one property is measured, the less accurately the other property can be known.
In physics, the CHSH inequality can be used in the proof of Bell's theorem, which states that certain consequences of entanglement in quantum mechanics cannot be reproduced by local hidden-variable theories. Experimental verification of the inequality being violated is seen as confirmation that nature cannot be described by such theories. CHSH stands for John Clauser, Michael Horne, Abner Shimony, and Richard Holt, who described it in a much-cited paper published in 1969. They derived the CHSH inequality, which, as with John Stewart Bell's original inequality, is a constraint—on the statistical occurrence of "coincidences" in a Bell test—which is necessarily true if an underlying local hidden-variable theory exists. In practice, the inequality is routinely violated by modern experiments in quantum mechanics.
Quantum indeterminacy is the apparent necessary incompleteness in the description of a physical system, which has become one of the characteristics of the standard description of quantum physics. Prior to quantum physics, it was thought that a physical system had a determinate state that uniquely determined all the values of its measurable properties, and conversely that the values of its measurable properties uniquely determined the state.
In quantum physics, a measurement is the testing or manipulation of a physical system to yield a numerical result. A fundamental feature of quantum theory is that the predictions it makes are probabilistic. The procedure for finding a probability involves combining a quantum state, which mathematically describes a quantum system, with a mathematical representation of the measurement to be performed on that system. The formula for this calculation is known as the Born rule. For example, a quantum particle like an electron can be described by a quantum state that associates to each point in space a complex number called a probability amplitude. Applying the Born rule to these amplitudes gives the probabilities that the electron will be found in one region or another when an experiment is performed to locate it. This is the best the theory can do; it cannot say for certain where the electron will be found. The same quantum state can also be used to make a prediction of how the electron will be moving, if an experiment is performed to measure its momentum instead of its position. The uncertainty principle implies that, whatever the quantum state, the range of predictions for the electron's position and the range of predictions for its momentum cannot both be narrow. Some quantum states imply a near-certain prediction of the result of a position measurement, but the result of a momentum measurement will be highly unpredictable, and vice versa. Furthermore, the fact that nature violates the statistical conditions known as Bell inequalities indicates that the unpredictability of quantum measurement results cannot be explained away as due to ignorance about "local hidden variables" within quantum systems.
In the interpretation of quantum mechanics, a local hidden-variable theory is a hidden-variable theory that satisfies the principle of locality. These models attempt to account for the probabilistic features of quantum mechanics via the mechanism of underlying, but inaccessible variables, with the additional requirement that distant events be statistically independent.
A Tsirelson bound is an upper limit to quantum mechanical correlations between distant events. Given that quantum mechanics violates Bell inequalities, a natural question to ask is how large can the violation be. The answer is precisely the Tsirelson bound for the particular Bell inequality in question. In general, this bound is lower than the bound that would be obtained if more general theories, only constrained by "no-signalling", were considered, and much research has been dedicated to the question of why this is the case.
In quantum information science, the Bell states or EPR pairs are specific quantum states of two qubits that represent the simplest examples of quantum entanglement. The Bell states form an entangled and normalized basis; this normalization implies that the overall probability of the particles being found in one of the four states is 1. Entanglement is a basis-independent result of superposition. Due to this superposition, measurement of the qubit will "collapse" it into one of its basis states with a given probability. Because of the entanglement, measurement of one qubit will "collapse" the other qubit to a state whose measurement will yield one of two possible values, where the value depends on which Bell state the two qubits are in initially. Bell states can be generalized to certain quantum states of multi-qubit systems, such as the GHZ state for three or more subsystems.
In physics, the no-communication theorem or no-signaling principle is a no-go theorem from quantum information theory which states that, during measurement of an entangled quantum state, it is not possible for one observer, by making a measurement of a subsystem of the total state, to communicate information to another observer. The theorem is important because, in quantum mechanics, quantum entanglement is an effect by which certain widely separated events can be correlated in ways that, at first glance, suggest the possibility of communication faster-than-light. The no-communication theorem gives conditions under which such transfer of information between two observers is impossible. These results can be applied to understand the so-called paradoxes in quantum mechanics, such as the EPR paradox, or violations of local realism obtained in tests of Bell's theorem. In these experiments, the no-communication theorem shows that failure of local realism does not lead to what could be referred to as "spooky communication at a distance".
In quantum mechanics, the Kochen–Specker (KS) theorem, also known as the Bell–KS theorem, is a "no-go" theorem proved by John S. Bell in 1966 and by Simon B. Kochen and Ernst Specker in 1967. It places certain constraints on the permissible types of hidden-variable theories, which try to explain the predictions of quantum mechanics in a context-independent way. The version of the theorem proved by Kochen and Specker also gave an explicit example for this constraint in terms of a finite number of state vectors.
In quantum computing, a graph state is a special type of multi-qubit state that can be represented by a graph. Each qubit is represented by a vertex of the graph, and there is an edge between every interacting pair of qubits. In particular, they are a convenient way of representing certain types of entangled states.
In quantum mechanics, notably in quantum information theory, fidelity quantifies the "closeness" between two density matrices. It expresses the probability that one state will pass a test to identify as the other. It is not a metric on the space of density matrices, but it can be used to define the Bures metric on this space.
In mathematical physics, Gleason's theorem shows that the rule one uses to calculate probabilities in quantum physics, the Born rule, can be derived from the usual mathematical representation of measurements in quantum physics together with the assumption of non-contextuality. Andrew M. Gleason first proved the theorem in 1957, answering a question posed by George W. Mackey, an accomplishment that was historically significant for the role it played in showing that wide classes of hidden-variable theories are inconsistent with quantum physics. Multiple variations have been proven in the years since. Gleason's theorem is of particular importance for the field of quantum logic and its attempt to find a minimal set of mathematical axioms for quantum theory.
In theoretical physics, quantum nonlocality refers to the phenomenon by which the measurement statistics of a multipartite quantum system do not allow an interpretation with local realism. Quantum nonlocality has been experimentally verified under a variety of physical assumptions.
Quantum pseudo-telepathy describes the use of quantum entanglement to eliminate the need for classical communications. A nonlocal game is said to display quantum pseudo-telepathy if players who can use entanglement can win it with certainty while players without it can not. The prefix pseudo refers to the fact that quantum pseudo-telepathy does not involve the exchange of information between any parties. Instead, quantum pseudo-telepathy removes the need for parties to exchange information in some circumstances.
Quantum contextuality is a feature of the phenomenology of quantum mechanics whereby measurements of quantum observables cannot simply be thought of as revealing pre-existing values. Any attempt to do so in a realistic hidden-variable theory leads to values that are dependent upon the choice of the other (compatible) observables which are simultaneously measured. More formally, the measurement result of a quantum observable is dependent upon which other commuting observables are within the same measurement set.
In quantum mechanics, weak measurement is a type of quantum measurement that results in an observer obtaining very little information about the system on average, but also disturbs the state very little. From Busch's theorem any quantum system is necessarily disturbed by measurement, but the amount of disturbance is described by a parameter called the measurement strength.
Incompatibility of quantum measurements is a crucial concept of quantum information, addressing whether two or more quantum measurements can be performed on a quantum system simultaneously. It highlights the unique and non-classical behavior of quantum systems. This concept is fundamental to the nature of quantum mechanics and has practical applications in various quantum information processing tasks like quantum key distribution and quantum metrology.
A similar approach was arrived at independently by Simon Kochen, although never published (private communication).