In theoretical physics, quantum nonlocality refers to the phenomenon by which the measurement statistics of a multipartite quantum system do not allow an interpretation with local realism. Quantum nonlocality has been experimentally verified under a variety of physical assumptions. [1] [2] [3] [4] [5]
Quantum nonlocality does not allow for faster-than-light communication, [6] and hence is compatible with special relativity and its universal speed limit. Thus, quantum theory is local in the strict sense defined by special relativity and, as such, the term "quantum nonlocality" is sometimes considered a misnomer. [7] Still, it prompts many of the foundational discussions concerning quantum theory. [7]
In the 1935 EPR paper, [8] Albert Einstein, Boris Podolsky and Nathan Rosen described "two spatially separated particles which have both perfectly correlated positions and momenta" [9] as a direct consequence of quantum theory. They intended to use the classical principle of locality to challenge the idea that the quantum wavefunction was a complete description of reality, but instead they sparked a debate on the nature of reality. [10] Afterwards, Einstein presented a variant of these ideas in a letter to Erwin Schrödinger, [11] which is the version that is presented here. The state and notation used here are more modern, and akin to David Bohm's take on EPR. [12] The quantum state of the two particles prior to measurement can be written as

$$|\psi_{AB}\rangle = \tfrac{1}{\sqrt{2}}\left(|0\rangle_A \otimes |1\rangle_B - |1\rangle_A \otimes |0\rangle_B\right) = \tfrac{1}{\sqrt{2}}\left(|+\rangle_A \otimes |-\rangle_B - |-\rangle_A \otimes |+\rangle_B\right),$$

where $|\pm\rangle = \tfrac{1}{\sqrt{2}}\left(|0\rangle \pm |1\rangle\right)$. [13]
Here, subscripts "A" and "B" distinguish the two particles, though it is more convenient and usual to refer to these particles as being in the possession of two experimentalists called Alice and Bob. The rules of quantum theory give predictions for the outcomes of measurements performed by the experimentalists. Alice, for example, will measure her particle to be spin-up in an average of fifty percent of measurements. However, according to the Copenhagen interpretation, Alice's measurement causes the state of the two particles to collapse, so that if Alice performs a measurement of spin in the z-direction, that is with respect to the basis $\{|0\rangle_A, |1\rangle_A\}$, then Bob's system will be left in one of the states $\{|0\rangle_B, |1\rangle_B\}$. Likewise, if Alice performs a measurement of spin in the x-direction, that is, with respect to the basis $\{|+\rangle_A, |-\rangle_A\}$, then Bob's system will be left in one of the states $\{|+\rangle_B, |-\rangle_B\}$. Schrödinger referred to this phenomenon as "steering". [14] This steering occurs in such a way that no signal can be sent by performing such a state update; quantum nonlocality cannot be used to send messages instantaneously and is therefore not in direct conflict with causality concerns in special relativity. [13]
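The steering update rules and the impossibility of signalling can be checked directly in the state-vector formalism. Below is a minimal numerical sketch (Python with numpy; the helper name bob_state_after is ours) that projects Alice's half of the entangled state onto an outcome in either basis and then verifies that Bob's unconditional state is the maximally mixed state regardless of Alice's choice:

```python
import numpy as np

# Single-qubit basis vectors: computational (z) and rotated (x) bases.
ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
ketp, ketm = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)

# Entangled state (1/sqrt(2))(|01> - |10>) shared by Alice (left) and Bob (right).
psi = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

def bob_state_after(alice_vec):
    """Probability of Alice's outcome and Bob's conditional state after it."""
    proj = np.kron(np.outer(alice_vec, alice_vec), np.eye(2))
    phi = proj @ psi
    p = np.vdot(phi, phi).real                      # outcome probability
    rho_ab = np.outer(phi, phi.conj()) / p
    rho_b = rho_ab.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)  # trace out Alice
    return p, rho_b

print(bob_state_after(ket0))   # p = 0.5, Bob steered to |1><1|
print(bob_state_after(ketp))   # p = 0.5, Bob steered to |-><-|

# No-signalling: averaging over Alice's outcomes in either basis leaves Bob
# with the maximally mixed state I/2, so Bob cannot tell which basis was used.
for basis in [(ket0, ket1), (ketp, ketm)]:
    avg = sum(p * rho for p, rho in (bob_state_after(v) for v in basis))
    print(np.round(avg, 10))
```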
In the Copenhagen view of this experiment, Alice's measurement—and particularly her measurement choice—has a direct effect on Bob's state. However, under the assumption of locality, actions on Alice's system do not affect the "true", or "ontic" state of Bob's system. We see that the ontic state of Bob's system must be compatible with one of the quantum states $|0\rangle_B$ or $|1\rangle_B$, since Alice can make a measurement that concludes with one of those states being the quantum description of his system. At the same time, it must also be compatible with one of the quantum states $|+\rangle_B$ or $|-\rangle_B$ for the same reason. Therefore, the ontic state of Bob's system must be compatible with at least two quantum states; the quantum state is therefore not a complete descriptor of his system. Einstein, Podolsky and Rosen saw this as evidence of the incompleteness of the Copenhagen interpretation of quantum theory, since the wavefunction is explicitly not a complete description of a quantum system under this assumption of locality. Their paper concludes: [8]
While we have thus shown that the wave function does not provide a complete description of the physical reality, we left open the question of whether or not such a description exists. We believe, however, that such a theory is possible.
Although various authors (most notably Niels Bohr) criticised the ambiguous terminology of the EPR paper, [15] [16] the thought experiment nevertheless generated a great deal of interest. Their notion of a "complete description" was later formalised by the suggestion of hidden variables that determine the statistics of measurement results, but to which an observer does not have access. [17] Bohmian mechanics provides such a completion of quantum mechanics, with the introduction of hidden variables; however, the theory is explicitly nonlocal. [18] The interpretation therefore does not give an answer to Einstein's question, which was whether or not a complete description of quantum mechanics could be given in terms of local hidden variables in keeping with the "Principle of Local Action". [19]
In 1964 John Bell answered Einstein's question by showing that such local hidden variables can never reproduce the full range of statistical outcomes predicted by quantum theory. [20] Bell showed that a local hidden variable hypothesis leads to restrictions on the strength of correlations of measurement results. If the Bell inequalities are violated experimentally as predicted by quantum mechanics, then reality cannot be described by local hidden variables and the mystery of quantum nonlocal causation remains. However, Bell noted that Bohm's nonlocal hidden variable model is different: [20]
This [grossly nonlocal structure] is characteristic ... of any such theory which reproduces exactly the quantum mechanical predictions.
Clauser, Horne, Shimony and Holt (CHSH) reformulated these inequalities in a manner that was more conducive to experimental testing (see CHSH inequality). [21]
In the scenario proposed by Bell (a Bell scenario), two experimentalists, Alice and Bob, conduct experiments in separate labs. At each run, Alice (Bob) conducts an experiment $x$ ($y$) in her (his) lab, obtaining outcome $a$ ($b$). If Alice and Bob repeat their experiments several times, then they can estimate the probabilities $P(a,b|x,y)$, namely, the probability that Alice and Bob respectively observe the results $a, b$ when they respectively conduct the experiments $x, y$. In the following, each such set of probabilities will be denoted by just $P(a,b|x,y)$. In the quantum nonlocality slang, $P(a,b|x,y)$ is termed a box. [22]
Bell formalized the idea of a hidden variable by introducing the parameter $\lambda$ to locally characterize measurement results on each system: [20] "It is a matter of indifference ... whether λ denotes a single variable or a set ... and whether the variables are discrete or continuous". However, it is equivalent (and more intuitive) to think of $\lambda$ as a local "strategy" or "message" that occurs with some probability $\rho(\lambda)$ when Alice and Bob reboot their experimental setup. Bell's assumption of local causality then stipulates that each local strategy defines the distributions of independent outcomes if Alice conducts experiment $x$ and Bob conducts experiment $y$:

$$P(a,b|x,y,\lambda_A,\lambda_B) = P_A(a|x,\lambda_A)\, P_B(b|y,\lambda_B).$$
Here $P_A(a|x,\lambda_A)$ ($P_B(b|y,\lambda_B)$) denotes the probability that Alice (Bob) obtains the result $a$ ($b$) when she (he) conducts experiment $x$ ($y$) and the local variable describing her (his) experiment has value $\lambda_A$ ($\lambda_B$).
Suppose that $\lambda_A, \lambda_B$ can take values from some set $\Lambda$. If each pair of values $(\lambda_A, \lambda_B)$ has an associated probability $\rho(\lambda_A, \lambda_B)$ of being selected (shared randomness is allowed, i.e., $\lambda_A$ and $\lambda_B$ can be correlated), then one can average over this distribution to obtain a formula for the joint probability of each measurement result:

$$P(a,b|x,y) = \sum_{\lambda_A, \lambda_B \in \Lambda} \rho(\lambda_A, \lambda_B)\, P_A(a|x,\lambda_A)\, P_B(b|y,\lambda_B).$$
A box admitting such a decomposition is called a Bell local or a classical box. Fixing the number of possible values which $a, b, x, y$ can each take, one can represent each box as a finite vector with entries $\left(P(a,b|x,y)\right)_{a,b,x,y}$. In that representation, the set of all classical boxes forms a convex polytope. In the Bell scenario studied by CHSH, where $a, b, x, y$ can take values within $\{0,1\}$, any Bell local box must satisfy the CHSH inequality:

$$S_{CHSH} \equiv E(0,0) + E(1,0) + E(0,1) - E(1,1) \leq 2,$$

where

$$E(x,y) \equiv \sum_{a,b} (-1)^{a+b} P(a,b|x,y).$$
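Since shared randomness only produces convex mixtures of deterministic strategies, the maximum of $S_{CHSH}$ over classical boxes is attained at a deterministic vertex $a = f(x)$, $b = g(y)$, and can be found by enumerating all 16 such strategies. A small brute-force sketch (Python; function names are ours, with the $\{0,1\}$ conventions used above):

```python
from itertools import product

def chsh(box):
    """CHSH value S = E(0,0) + E(1,0) + E(0,1) - E(1,1) of a box P(a,b|x,y)."""
    def E(x, y):
        return sum((-1) ** (a + b) * box(a, b, x, y)
                   for a in (0, 1) for b in (0, 1))
    return E(0, 0) + E(1, 0) + E(0, 1) - E(1, 1)

# Enumerate deterministic local strategies: lookup tables a = f[x], b = g[y].
best = 0.0
for f, g in product(product((0, 1), repeat=2), repeat=2):
    box = lambda a, b, x, y: float(a == f[x] and b == g[y])
    best = max(best, chsh(box))

print(best)  # 2.0 -- every Bell local box satisfies S <= 2
```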
The above considerations apply to model a quantum experiment. Consider two parties conducting local polarization measurements on a bipartite photonic state. The measurement result for the polarization of a photon can take one of two values (informally, whether the photon is polarized in that direction, or in the orthogonal direction). If each party is allowed to choose between just two different polarization directions, the experiment fits within the CHSH scenario. As noted by CHSH, there exist a quantum state and polarization directions which generate a box $P(a,b|x,y)$ with $S_{CHSH}$ equal to $2\sqrt{2}$. This demonstrates an explicit way in which a theory with ontological states that are local, with local measurements and only local actions, cannot match the probabilistic predictions of quantum theory, disproving Einstein's hypothesis. Experimentalists such as Alain Aspect have verified the quantum violation of the CHSH inequality [1] as well as other formulations of Bell's inequality, to invalidate the local hidden variables hypothesis and confirm that reality is indeed nonlocal in the EPR sense.
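The violation can be reproduced numerically with one standard choice of state and measurement directions; a sketch (Python with numpy; the qubit observables below play the role of the polarization measurements, and the angles are one optimal choice, not the only one):

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
obs = lambda t: np.cos(t) * Z + np.sin(t) * X   # +-1-valued observable at angle t

phi = np.array([1., 0., 0., 1.]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

A = [obs(0), obs(np.pi / 2)]                    # Alice's two settings
B = [obs(np.pi / 4), obs(-np.pi / 4)]           # Bob's two settings

# E(x,y) = <phi| A_x (x) B_y |phi>; for this state it equals cos(tA - tB).
E = lambda x, y: np.vdot(phi, np.kron(A[x], B[y]) @ phi).real
S = E(0, 0) + E(1, 0) + E(0, 1) - E(1, 1)
print(S, 2 * np.sqrt(2))  # 2.8284... in both cases
```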
Bell's demonstration is probabilistic in the sense that it shows that the precise probabilities predicted by quantum mechanics for some entangled scenarios cannot be met by a local hidden variable theory. (For short, here and henceforth "local theory" means "local hidden variables theory".) However, quantum mechanics permits an even stronger violation of local theories: a possibilistic one, in which local theories cannot even agree with quantum mechanics on which events are possible or impossible in an entangled scenario. The first proof of this kind was due to Daniel Greenberger, Michael Horne, and Anton Zeilinger in 1993. [23] The state involved is often called the GHZ state.
In 1993, Lucien Hardy demonstrated a logical proof of quantum nonlocality that, like the GHZ proof, is a possibilistic proof. [24] [25] [26] It starts with the observation that the state $|\psi\rangle$ defined below can be written in a few suggestive ways:

$$|\psi\rangle = \tfrac{1}{\sqrt{3}}\left(|00\rangle + |01\rangle + |10\rangle\right) = \tfrac{1}{\sqrt{3}}\left(\sqrt{2}\,|{+}\rangle|0\rangle + |0\rangle|1\rangle\right) = \tfrac{1}{\sqrt{3}}\left(\sqrt{2}\,|0\rangle|{+}\rangle + |1\rangle|0\rangle\right),$$

where, as above, $|\pm\rangle = \tfrac{1}{\sqrt{2}}\left(|0\rangle \pm |1\rangle\right)$.
The experiment consists of this entangled state being shared between two experimenters, each of whom has the ability to measure either with respect to the basis $\{|0\rangle, |1\rangle\}$ or $\{|+\rangle, |-\rangle\}$. We see that if they each measure with respect to $\{|0\rangle, |1\rangle\}$, then they never see the outcome $|11\rangle$. If one measures with respect to $\{|0\rangle, |1\rangle\}$ and the other $\{|+\rangle, |-\rangle\}$, they never see the outcomes $|0{-}\rangle$, $|{-}0\rangle$. However, sometimes they see the outcome $|{-}{-}\rangle$ when measuring with respect to $\{|+\rangle, |-\rangle\}$, since

$$\langle{-}{-}|\psi\rangle = -\tfrac{1}{2\sqrt{3}} \neq 0.$$
This leads to the paradox: having the outcome $|{-}{-}\rangle$ we conclude that if one of the experimenters had measured with respect to the $\{|0\rangle, |1\rangle\}$ basis instead, the outcome must have been $|{-}1\rangle$ or $|1{-}\rangle$, since $|{-}0\rangle$ and $|0{-}\rangle$ are impossible. But then, if they had both measured with respect to the $\{|0\rangle, |1\rangle\}$ basis, by locality the result must have been $|11\rangle$, which is also impossible.
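Each of the possibility claims above can be verified by direct computation of the outcome amplitudes; a short numerical sketch (Python with numpy; names are ours):

```python
import numpy as np

ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
ketp, ketm = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)

# Hardy state (|00> + |01> + |10>)/sqrt(3).
psi = (np.kron(ket0, ket0) + np.kron(ket0, ket1)
       + np.kron(ket1, ket0)) / np.sqrt(3)

prob = lambda u, v: abs(np.vdot(np.kron(u, v), psi)) ** 2

print(prob(ket1, ket1))   # P(11) = 0: both measure {|0>,|1>}
print(prob(ketm, ket0))   # P(-0) = 0: mixed bases
print(prob(ket0, ketm))   # P(0-) = 0: mixed bases
print(prob(ketm, ketm))   # P(--) = 1/12 > 0: the paradoxical event
```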
The work of Bancal et al. [27] generalizes Bell's result by proving that correlations achievable in quantum theory are also incompatible with a large class of superluminal hidden variable models. In this framework, faster-than-light signaling is precluded. However, the choice of settings of one party can influence hidden variables at another party's distant location, if there is enough time for a superluminal influence (of finite, but otherwise unknown speed) to propagate from one point to the other. In this scenario, any bipartite experiment revealing Bell nonlocality can only provide lower bounds on the hidden influence's propagation speed. Quantum experiments with three or more parties can, nonetheless, disprove all such non-local hidden variable models. [27]
The random variables measured in a general experiment can depend on each other in complicated ways. In the field of causal inference, such dependencies are represented via Bayesian networks: directed acyclic graphs where each node represents a variable and an edge from one variable to another signifies that the former influences the latter and not otherwise. In a standard bipartite Bell experiment, Alice's (Bob's) setting $x$ ($y$), together with the hidden variable $\lambda$, influences her (his) local outcome $a$ ($b$). Bell's theorem can thus be interpreted as a separation between the quantum and classical predictions in a class of causal structures with just one hidden node $\lambda$. Similar separations have been established in other types of causal structures. [28] The characterization of the boundaries for classical correlations in such extended Bell scenarios is challenging, but there exist complete practical computational methods to achieve it. [29] [30]
Quantum nonlocality is sometimes understood as being equivalent to entanglement. However, this is not the case. Quantum entanglement can be defined only within the formalism of quantum mechanics, i.e., it is a model-dependent property. In contrast, nonlocality refers to the impossibility of a description of observed statistics in terms of a local hidden variable model, so it is independent of the physical model used to describe the experiment.
It is true that for any pure entangled state there exists a choice of measurements that produces Bell nonlocal correlations, but the situation is more complex for mixed states. While any Bell nonlocal state must be entangled, there exist (mixed) entangled states which do not produce Bell nonlocal correlations [31] (although, operating on several copies of some such states, [32] or carrying out local post-selections, [33] it is possible to witness nonlocal effects). Moreover, while there are catalysts for entanglement, [34] there are none for nonlocality. [35] Finally, reasonably simple examples of Bell inequalities have been found for which the quantum state giving the largest violation is never a maximally entangled state, showing that entanglement is, in some sense, not even proportional to nonlocality. [36] [37] [38]
As shown, the statistics achievable by two or more parties conducting experiments in a classical system are constrained in a non-trivial way. Analogously, the statistics achievable by separate observers in quantum theory also happen to be restricted. The first derivation of a non-trivial statistical limit on the set of quantum correlations, due to B. Tsirelson, [39] is known as Tsirelson's bound. Consider the CHSH Bell scenario detailed before, but this time assume that, in their experiments, Alice and Bob are preparing and measuring quantum systems. In that case, the CHSH parameter can be shown to be bounded by

$$S_{CHSH} \leq 2\sqrt{2}.$$
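One standard way to obtain this bound is the operator-norm argument below (a sketch in LaTeX, for $\pm 1$-valued observables; this is the usual derivation, stated without the measurement-theoretic preliminaries):

```latex
% CHSH operator for \pm 1-valued observables A_x (Alice) and B_y (Bob):
\[
  S = A_0 \otimes B_0 + A_0 \otimes B_1 + A_1 \otimes B_0 - A_1 \otimes B_1 .
\]
% Squaring and using A_x^2 = B_y^2 = \mathbb{1} leaves only the commutator term:
\[
  S^2 = 4\,\mathbb{1} \otimes \mathbb{1} - [A_0, A_1] \otimes [B_0, B_1] .
\]
% Since \|[A_0, A_1]\| \le 2\|A_0\|\|A_1\| = 2, and likewise for Bob's pair,
\[
  \|S\|^2 = \|S^2\| \le 4 + 2 \cdot 2 = 8
  \quad\Longrightarrow\quad
  |\langle S \rangle| \le \|S\| \le 2\sqrt{2} .
\]
```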
Mathematically, a box $P(a,b|x,y)$ admits a quantum realization if and only if there exists a pair of Hilbert spaces $H_A, H_B$, a normalized vector $|\psi\rangle \in H_A \otimes H_B$ and projection operators $E^a_x: H_A \to H_A$, $F^b_y: H_B \to H_B$ such that

1. For all $x, y$, the sets $\{E^a_x\}_a$, $\{F^b_y\}_b$ represent complete measurements, i.e., $\sum_a E^a_x = \mathbb{1}_A$, $\sum_b F^b_y = \mathbb{1}_B$.
2. $P(a,b|x,y) = \langle\psi| E^a_x \otimes F^b_y |\psi\rangle$ for all $a, b, x, y$.
In the following, the set of such boxes will be called $Q$. Contrary to the classical set of correlations, when viewed in probability space, $Q$ is not a polytope. On the contrary, it contains both straight and curved boundaries. [40] In addition, $Q$ is not closed: [41] this means that there exist boxes which can be arbitrarily well approximated by quantum systems but are themselves not quantum.
In the above definition, the space-like separation of the two parties conducting the Bell experiment was modeled by imposing that their associated operator algebras act on different factors of the overall Hilbert space describing the experiment. Alternatively, one could model space-like separation by imposing that these two algebras commute. This leads to a different definition:
$P(a,b|x,y)$ admits a field quantum realization if and only if there exists a Hilbert space $H$, a normalized vector $|\psi\rangle \in H$ and projection operators $E^a_x: H \to H$, $F^b_y: H \to H$ such that

1. For all $x, y$, the sets $\{E^a_x\}_a$, $\{F^b_y\}_b$ represent complete measurements, i.e., $\sum_a E^a_x = \mathbb{1}$, $\sum_b F^b_y = \mathbb{1}$.
2. $[E^a_x, F^b_y] = 0$ for all $a, b, x, y$.
3. $P(a,b|x,y) = \langle\psi| E^a_x F^b_y |\psi\rangle$ for all $a, b, x, y$.
Call the set of all such correlations $Q_c$.
How does this new set relate to the more conventional $Q$ defined above? It can be proven that $Q_c$ is closed. Moreover, $\bar{Q} \subseteq Q_c$, where $\bar{Q}$ denotes the closure of $Q$. Tsirelson's problem [42] consists in deciding whether the inclusion relation is strict, i.e., whether or not $\bar{Q} = Q_c$. This problem only appears in infinite dimensions: when the Hilbert space $H$ in the definition of $Q_c$ is constrained to be finite-dimensional, the closure of the corresponding set equals $\bar{Q}$. [42]
In January 2020, Ji, Natarajan, Vidick, Wright, and Yuen claimed a result in quantum complexity theory [43] that would imply that $\bar{Q} \neq Q_c$, thus solving Tsirelson's problem. [44] [45] [46] [47] [48] [49] [50]
Tsirelson's problem can be shown equivalent to Connes' embedding problem, [51] [52] [53] a famous conjecture in the theory of operator algebras.
Since the dimensions of $H_A$ and $H_B$ are, in principle, unbounded, determining whether a given box $P(a,b|x,y)$ admits a quantum realization is a complicated problem. In fact, the dual problem of establishing whether a quantum box can have a perfect score at a non-local game is known to be undecidable. [41] Moreover, the problem of deciding whether $P(a,b|x,y)$ can be approximated by a quantum system with precision $\epsilon$ is NP-hard. [54] Characterizing quantum boxes is equivalent to characterizing the cone of completely positive semidefinite matrices under a set of linear constraints. [55]
For small fixed dimensions $d_A, d_B$, one can explore, using variational methods, whether $P(a,b|x,y)$ can be realized in a bipartite quantum system $H_A \otimes H_B$, with $\dim(H_A) = d_A$, $\dim(H_B) = d_B$. That method, however, can only be used to prove the realizability of $P(a,b|x,y)$, and not its unrealizability with quantum systems.
To prove unrealizability, the best-known method is the Navascués–Pironio–Acín (NPA) hierarchy. [56] This is an infinite decreasing sequence of sets of correlations $Q^1 \supset Q^2 \supset Q^3 \supset \dots$ with the properties:

1. If a box $P(a,b|x,y)$ admits a field quantum realization, then it belongs to $Q^k$ for all $k$.
2. If $P(a,b|x,y)$ does not admit a field quantum realization, then there exists $k$ such that $P(a,b|x,y) \notin Q^k$.
3. For any $k$, deciding whether $P(a,b|x,y) \in Q^k$ can be cast as a semidefinite program.
The NPA hierarchy thus provides a computational characterization, not of $\bar{Q}$, but of $Q_c$. If $\bar{Q} \neq Q_c$ (as claimed by Ji, Natarajan, Vidick, Wright, and Yuen), then a new method to detect the non-realizability of the correlations in $Q_c \setminus \bar{Q}$ is needed. If Tsirelson's problem were solved in the affirmative, namely $\bar{Q} = Q_c$, then the above two methods would provide a practical characterization of $\bar{Q}$.
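As an illustration of property 3, the first level of the hierarchy in the CHSH scenario is a semidefinite program over a 5×5 moment matrix indexed by the operators $\{\mathbb{1}, A_0, A_1, B_0, B_1\}$. A minimal sketch using the cvxpy modelling package (an assumed third-party dependency; the simplified constraint set below already certifies Tsirelson's bound):

```python
import cvxpy as cp  # assumed available, e.g. via `pip install cvxpy`

# Moment matrix G[i, j] = <psi| O_i O_j |psi> for O in [1, A0, A1, B0, B1].
G = cp.Variable((5, 5), symmetric=True)
one, A0, A1, B0, B1 = range(5)

# G must be positive semidefinite, with unit diagonal since O_i^2 = 1.
constraints = [G >> 0] + [G[i, i] == 1 for i in range(5)]

# CHSH expressed through the correlators <Ax By> = G[Ax, By].
S = G[A0, B0] + G[A1, B0] + G[A0, B1] - G[A1, B1]

prob = cp.Problem(cp.Maximize(S), constraints)
prob.solve()
print(prob.value)  # ~2.8284 = 2*sqrt(2)
```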
The works listed above describe what the quantum set of correlations $\bar{Q}$ looks like, but they do not explain why. Are quantum correlations unavoidable, even in post-quantum physical theories, or on the contrary, could there exist correlations outside $\bar{Q}$ which nonetheless do not lead to any unphysical operational behavior?
In their seminal 1994 paper, Popescu and Rohrlich explore whether quantum correlations can be explained by appealing to relativistic causality alone. [57] Namely, whether any hypothetical box $P(a,b|x,y) \notin \bar{Q}$ would allow building a device capable of transmitting information faster than the speed of light. At the level of correlations between two parties, Einstein's causality translates into the requirement that Alice's measurement choice should not affect Bob's statistics, and vice versa. Otherwise, Alice (Bob) could signal Bob (Alice) instantaneously by choosing her (his) measurement setting appropriately. Mathematically, Popescu and Rohrlich's no-signalling conditions are:

$$\sum_b P(a,b|x,y) = \sum_b P(a,b|x,y') \equiv P_A(a|x), \quad \text{for all } a, x, y, y',$$

$$\sum_a P(a,b|x,y) = \sum_a P(a,b|x',y) \equiv P_B(b|y), \quad \text{for all } b, y, x, x'.$$
Like the set of classical boxes, when represented in probability space, the set of no-signalling boxes forms a polytope. Popescu and Rohrlich identified a box that, while complying with the no-signalling conditions, violates Tsirelson's bound, and is thus unrealizable in quantum physics. Dubbed the PR-box, it can be written as:

$$P(a,b|x,y) = \begin{cases} \tfrac{1}{2}, & \text{if } a \oplus b = xy, \\ 0, & \text{otherwise.} \end{cases}$$
Here $a, b, x, y$ take values in $\{0,1\}$, and $a \oplus b$ denotes the sum modulo two. It can be verified that the CHSH value of this box is 4 (as opposed to the Tsirelson bound of $2\sqrt{2}$). This box had been identified earlier, by Rastall [58] and Khalfin and Tsirelson. [59]
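Both claims are easy to check numerically; a short sketch (Python; function names are ours):

```python
from itertools import product

def pr_box(a, b, x, y):
    """PR box: P(a,b|x,y) = 1/2 if a XOR b = x*y, and 0 otherwise."""
    return 0.5 if (a ^ b) == x * y else 0.0

def E(x, y):
    return sum((-1) ** (a + b) * pr_box(a, b, x, y)
               for a, b in product((0, 1), repeat=2))

print(E(0, 0) + E(1, 0) + E(0, 1) - E(1, 1))  # 4.0, the algebraic maximum

# No-signalling: Alice's marginal P(a|x) does not depend on Bob's input y.
for a, x in product((0, 1), repeat=2):
    assert len({sum(pr_box(a, b, x, y) for b in (0, 1)) for y in (0, 1)}) == 1
```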
In view of this mismatch, Popescu and Rohrlich pose the problem of identifying a physical principle, stronger than the no-signalling conditions, that allows deriving the set of quantum correlations. Several proposals followed, among them: non-trivial communication complexity (NTCC), no advantage for nonlocal computation (NANLC), information causality (IC), macroscopic locality (ML), and local orthogonality (LO).
All these principles can be experimentally falsified under the assumption that we can decide if two or more events are space-like separated. This sets this research program apart from the axiomatic reconstruction of quantum mechanics via Generalized Probabilistic Theories.
The works above rely on the implicit assumption that any physical set of correlations must be closed under wirings. [65] This means that any effective box built by combining the inputs and outputs of a number of boxes within the considered set must also belong to the set. Closure under wirings does not seem to enforce any limit on the maximum value of CHSH. However, it is not a void principle: on the contrary, in [65] it is shown that many simple, intuitive families of sets of correlations in probability space happen to violate it.
Originally, it was unknown whether any of these principles (or a subset thereof) was strong enough to derive all the constraints defining $\bar{Q}$. This state of affairs continued for some years until the construction of the almost quantum set $\tilde{Q}$. [66] $\tilde{Q}$ is a set of correlations that is closed under wirings and can be characterized via semidefinite programming. It contains all correlations in $\bar{Q}$, but also some non-quantum boxes $P(a,b|x,y) \notin \bar{Q}$. Remarkably, all boxes within the almost quantum set are shown to be compatible with the principles of NTCC, NANLC, ML and LO. There is also numerical evidence that almost-quantum boxes comply with IC. It seems, therefore, that, even when the above principles are taken together, they do not suffice to single out the quantum set in the simplest Bell scenario of two parties, two inputs and two outputs. [66]
Nonlocality can be exploited to conduct quantum information tasks which do not rely on knowledge of the inner workings of the preparation and measurement apparatuses involved in the experiment. The security or reliability of any such protocol depends only on the strength of the experimentally measured correlations $P(a,b|x,y)$. These protocols are termed device-independent.
The first device-independent protocol proposed was device-independent quantum key distribution (QKD). [67] In this primitive, two distant parties, Alice and Bob, are distributed an entangled quantum state that they probe, thus obtaining the statistics $P(a,b|x,y)$. Based on how non-local the box $P(a,b|x,y)$ happens to be, Alice and Bob estimate how much knowledge an external quantum adversary Eve (the eavesdropper) could possess about the values of Alice and Bob's outputs. This estimation allows them to devise a reconciliation protocol at the end of which Alice and Bob share a perfectly correlated one-time pad of which Eve has no information whatsoever. The one-time pad can then be used to transmit a secret message through a public channel. Although the first security analyses of device-independent QKD relied on Eve carrying out a specific family of attacks, [68] all such protocols have recently been proven unconditionally secure. [69]
Nonlocality can be used to certify that the outcomes of one of the parties in a Bell experiment are partially unknown to an external adversary. By feeding a partially random seed to several non-local boxes and processing the outputs, one can end up with a longer (potentially unbounded) string of comparable randomness [70] or with a shorter but more random string. [71] This last primitive can be proven impossible in a classical setting. [72]
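To give a flavor of the quantitative statements involved, the analysis of ref. [70] bounds an eavesdropper's guessing probability for one output bit by $P_{guess} \leq \tfrac{1}{2}\left(1 + \sqrt{2 - S^2/4}\right)$, where $S$ is the measured CHSH value. A small sketch of the resulting certified min-entropy per round (the bound is quoted from that work; the code wrapper is ours):

```python
from math import log2, sqrt

def min_entropy_per_round(S):
    """Certified min-entropy of one output bit for CHSH value 2 <= S <= 2*sqrt(2).

    Uses the guessing-probability bound of Pironio et al. [70]:
    P_guess <= (1 + sqrt(2 - S**2 / 4)) / 2.
    """
    p_guess = (1 + sqrt(2 - S ** 2 / 4)) / 2
    return -log2(p_guess)

print(min_entropy_per_round(2.0))           # 0.0: no violation, nothing certified
print(min_entropy_per_round(2 * sqrt(2)))   # 1.0: maximal violation, one full bit
```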
Device-independent (DI) randomness certification, expansion, and amplification are techniques used to generate high-quality random numbers that remain secure even against attacks on the underlying devices that generate them. These techniques have critical applications in cryptography, where high-quality random numbers are essential for the security of cryptographic protocols.

Randomness certification is the process of verifying that the output of a random number generator is truly random and has not been tampered with by an adversary. DI randomness certification performs this verification without making assumptions about the underlying devices; instead, randomness is certified by observing correlations between the outputs of different devices that are generated using the same physical process. Recent research has demonstrated the feasibility of DI randomness certification using entangled quantum systems, such as photons or electrons.

Randomness expansion is the process of taking a small initial random seed and expanding it into a much longer sequence of random numbers. In DI randomness expansion, the expansion is done using measurements of quantum systems that are prepared in a highly entangled state. The security of the expansion is guaranteed by the laws of quantum mechanics, which make it impossible for an adversary to predict the output. Recent research has shown that DI randomness expansion can be achieved using entangled photon pairs and measurement devices that violate a Bell inequality. [73]

Randomness amplification is the process of taking a small, weakly random seed and increasing its randomness by using a cryptographic algorithm. In DI randomness amplification, this is done using entanglement properties and quantum mechanics. The security of the amplification is guaranteed by the fact that any attempt by an adversary to manipulate the output will inevitably introduce errors that can be detected and corrected. Recent research has demonstrated the feasibility of DI randomness amplification using quantum entanglement and the violation of a Bell inequality. [74]
DI randomness certification, expansion, and amplification are powerful techniques for generating high-quality random numbers that are secure against attacks on the underlying devices, and they are likely to become increasingly important as quantum computing technology advances. In addition, a milder approach called semi-DI exists, in which random numbers are generated under some assumptions on the working principle of the devices, their environment, dimension, energy, etc.; it benefits from ease of implementation and high generation rates. [75]
Sometimes, the box $P(a,b|x,y)$ shared by Alice and Bob is such that it only admits a unique quantum realization. This means that there exist measurement operators and a quantum state giving rise to $P(a,b|x,y)$, and that any other physical realization of $P(a,b|x,y)$ is connected to this canonical realization via local unitary transformations. This phenomenon, which can be interpreted as an instance of device-independent quantum tomography, was first pointed out by Tsirelson [40] and named self-testing by Mayers and Yao. [67] Self-testing is known to be robust against systematic noise, i.e., if the experimentally measured statistics are close enough to $P(a,b|x,y)$, one can still determine the underlying state and measurement operators up to error bars. [67]
The degree of non-locality of a quantum box $P(a,b|x,y)$ can also provide lower bounds on the Hilbert space dimension of the local systems accessible to Alice and Bob. [76] This problem is equivalent to deciding the existence of a matrix with low completely positive semidefinite rank. [77] Finding lower bounds on the Hilbert space dimension based on statistics happens to be a hard task, and current general methods only provide very low estimates. [78] However, a Bell scenario with five inputs and three outputs suffices to provide arbitrarily high lower bounds on the underlying Hilbert space dimension. [79] Quantum communication protocols which assume knowledge of the local dimension of Alice and Bob's systems, but otherwise do not make claims on the mathematical description of the preparation and measuring devices involved, are termed semi-device independent protocols. Currently, there exist semi-device independent protocols for quantum key distribution [80] and randomness expansion. [81]
The Einstein–Podolsky–Rosen (EPR) paradox is a thought experiment proposed by physicists Albert Einstein, Boris Podolsky and Nathan Rosen which argues that the description of physical reality provided by quantum mechanics is incomplete. In a 1935 paper titled "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?", they argued for the existence of "elements of reality" that were not part of quantum theory, and speculated that it should be possible to construct a theory containing these hidden variables. Resolutions of the paradox have important implications for the interpretation of quantum mechanics.
Quantum entanglement is the phenomenon of a group of particles being generated, interacting, or sharing spatial proximity in such a way that the quantum state of each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance. The topic of quantum entanglement is at the heart of the disparity between classical and quantum physics: entanglement is a primary feature of quantum mechanics not present in classical mechanics.
Bell's theorem is a term encompassing a number of closely related results in physics, all of which determine that quantum mechanics is incompatible with local hidden-variable theories, given some basic assumptions about the nature of measurement. "Local" here refers to the principle of locality, the idea that a particle can only be influenced by its immediate surroundings, and that interactions mediated by physical fields cannot propagate faster than the speed of light. "Hidden variables" are supposed properties of quantum particles that are not included in quantum theory but nevertheless affect the outcome of experiments. In the words of physicist John Stewart Bell, for whom this family of results is named, "If [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local."
In physics, the CHSH inequality can be used in the proof of Bell's theorem, which states that certain consequences of entanglement in quantum mechanics cannot be reproduced by local hidden-variable theories. Experimental verification of the inequality being violated is seen as confirmation that nature cannot be described by such theories. CHSH stands for John Clauser, Michael Horne, Abner Shimony, and Richard Holt, who described it in a much-cited paper published in 1969. They derived the CHSH inequality, which, as with John Stewart Bell's original inequality, is a constraint—on the statistical occurrence of "coincidences" in a Bell test—which is necessarily true if an underlying local hidden-variable theory exists. In practice, the inequality is routinely violated by modern experiments in quantum mechanics.
In the interpretation of quantum mechanics, a local hidden-variable theory is a hidden-variable theory that satisfies the principle of locality. These models attempt to account for the probabilistic features of quantum mechanics via the mechanism of underlying, but inaccessible variables, with the additional requirement that distant events be statistically independent.
A Tsirelson bound is an upper limit to quantum mechanical correlations between distant events. Given that quantum mechanics violates Bell inequalities, a natural question to ask is how large can the violation be. The answer is precisely the Tsirelson bound for the particular Bell inequality in question. In general, this bound is lower than the bound that would be obtained if more general theories, only constrained by "no-signalling", were considered, and much research has been dedicated to the question of why this is the case.
In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all of its entries are sampled randomly from a probability distribution. Random matrix theory (RMT) is the study of properties of random matrices, often as they become large. RMT provides techniques like mean-field theory, diagrammatic methods, the cavity method, or the replica method to compute quantities like traces, spectral densities, or scalar products between eigenvectors. Many physical phenomena, such as the spectrum of nuclei of heavy atoms, the thermal conductivity of a lattice, or the emergence of quantum chaos, can be modeled mathematically as problems concerning large, random matrices.
The Born rule is a postulate of quantum mechanics that gives the probability that a measurement of a quantum system will yield a given result. In one commonly used application, it states that the probability density for finding a particle at a given position is proportional to the square of the amplitude of the system's wavefunction at that position. It was formulated and published by German physicist Max Born in July 1926.
In quantum computing, a graph state is a special type of multi-qubit state that can be represented by a graph. Each qubit is represented by a vertex of the graph, and there is an edge between every interacting pair of qubits. In particular, they are a convenient way of representing certain types of entangled states.
In quantum information and quantum computing, a cluster state is a type of highly entangled state of multiple qubits. Cluster states are generated in lattices of qubits with Ising type interactions. A cluster C is a connected subset of a d-dimensional lattice, and a cluster state is a pure state of the qubits located on C. They are different from other types of entangled states such as GHZ states or W states in that it is more difficult to eliminate quantum entanglement in the case of cluster states. Another way of thinking of cluster states is as a particular instance of graph states, where the underlying graph is a connected subset of a d-dimensional lattice. Cluster states are especially useful in the context of the one-way quantum computer.
The Ghirardi–Rimini–Weber theory (GRW) is a spontaneous collapse theory in quantum mechanics, proposed in 1986 by Giancarlo Ghirardi, Alberto Rimini, and Tullio Weber.
Quantum contextuality is a feature of the phenomenology of quantum mechanics whereby measurements of quantum observables cannot simply be thought of as revealing pre-existing values. Any attempt to do so in a realistic hidden-variable theory leads to values that are dependent upon the choice of the other (compatible) observables which are simultaneously measured. More formally, the measurement result of a quantum observable is dependent upon which other commuting observables are within the same measurement set.
The Lieb–Robinson bound is a theoretical upper limit on the speed at which information can propagate in non-relativistic quantum systems. It demonstrates that information cannot travel instantaneously in quantum theory, even when the relativity limits of the speed of light are ignored. The existence of such a finite speed was discovered mathematically by Elliott H. Lieb and Derek W. Robinson in 1972. It turns the locality properties of physical systems into the existence of, and an upper bound for, this speed. The bound is now known as the Lieb–Robinson bound and the speed is known as the Lieb–Robinson velocity. This velocity is always finite but not universal, depending on the details of the system under consideration. For finite-range, e.g. nearest-neighbor, interactions, this velocity is a constant independent of the distance travelled. In long-range interacting systems, this velocity remains finite, but it can increase with the distance travelled.
The light-front quantization of quantum field theories provides a useful alternative to ordinary equal-time quantization. In particular, it can lead to a relativistic description of bound systems in terms of quantum-mechanical wave functions. The quantization is based on the choice of light-front coordinates, where $x^+ = ct + z$ plays the role of time and the corresponding spatial coordinate is $x^- = ct - z$. Here, $t$ is the ordinary time, $z$ is one Cartesian coordinate, and $c$ is the speed of light. The other two Cartesian coordinates, $x$ and $y$, are untouched and often called transverse or perpendicular, denoted by symbols of the type $x_\perp = (x, y)$. The choice of the frame of reference where the time and $z$-axis are defined can be left unspecified in an exactly soluble relativistic theory, but in practical calculations some choices may be more suitable than others.
In quantum mechanics, weak measurement is a type of quantum measurement that results in an observer obtaining very little information about the system on average, but also disturbs the state very little. From Busch's theorem any quantum system is necessarily disturbed by measurement, but the amount of disturbance is described by a parameter called the measurement strength.
The quantum Fisher information is a central quantity in quantum metrology and is the quantum analogue of the classical Fisher information. It is one of the central quantities used to qualify the utility of an input state, especially in Mach–Zehnder interferometer-based phase or parameter estimation. It is shown that the quantum Fisher information can also be a sensitive probe of a quantum phase transition. The quantum Fisher information $F_Q[\varrho, A]$ of a state $\varrho$ with respect to the observable $A$ is defined as

$$F_Q[\varrho, A] = 2 \sum_{k,l} \frac{(\lambda_k - \lambda_l)^2}{\lambda_k + \lambda_l} |\langle k | A | l \rangle|^2,$$

where $\lambda_k$ and $|k\rangle$ are the eigenvalues and eigenvectors of the density matrix $\varrho$, and the sum runs over all $k, l$ such that $\lambda_k + \lambda_l > 0$.
The continuous spontaneous localization (CSL) model is a spontaneous collapse model in quantum mechanics, proposed in 1989 by Philip Pearle and finalized in 1990 by Gian Carlo Ghirardi, Philip Pearle and Alberto Rimini.
The Dicke model is a fundamental model of quantum optics, which describes the interaction between light and matter. In the Dicke model, the light component is described as a single quantum mode, while the matter is described as a set of two-level systems. When the coupling between the light and matter crosses a critical value, the Dicke model shows a mean-field phase transition to a superradiant phase. This transition belongs to the Ising universality class and was realized in cavity quantum electrodynamics experiments. Although the superradiant transition bears some analogy with the lasing instability, these two transitions belong to different universality classes.
Quantum optical coherence tomography (Q-OCT) is an imaging technique that uses nonclassical (quantum) light sources to generate high-resolution images based on the Hong-Ou-Mandel effect (HOM). Q-OCT is similar to conventional OCT but uses a fourth-order interferometer that incorporates two photodetectors rather than a second-order interferometer with a single photodetector. The primary advantage of Q-OCT over OCT is insensitivity to even-order dispersion in multi-layered and scattering media.
Incompatibility of quantum measurements is a crucial concept of quantum information, addressing whether two or more quantum measurements can be performed on a quantum system simultaneously. It highlights the unique and non-classical behavior of quantum systems. This concept is fundamental to the nature of quantum mechanics and has practical applications in various quantum information processing tasks like quantum key distribution and quantum metrology.