Continuous-variable (CV) quantum information is the area of quantum information science that makes use of physical observables, like the strength of an electromagnetic field, whose numerical values belong to continuous intervals. [1] [2] [3] One primary application is quantum computing. In a sense, continuous-variable quantum computation is "analog", while quantum computation using qubits is "digital." In more technical terms, the former makes use of Hilbert spaces that are infinite-dimensional, while the Hilbert spaces for systems comprising collections of qubits are finite-dimensional. [4] One motivation for studying continuous-variable quantum computation is to understand what resources are necessary to make quantum computers more powerful than classical ones. [5]
One approach to implementing continuous-variable quantum information protocols in the laboratory is through the techniques of quantum optics. [6] [7] [8] By modeling each mode of the electromagnetic field as a quantum harmonic oscillator with its associated creation and annihilation operators, one defines a canonically conjugate pair of variables for each mode, the so-called "quadratures", which play the role of position and momentum observables. These observables establish a phase space on which Wigner quasiprobability distributions can be defined. Quantum measurements on such a system can be performed using homodyne and heterodyne detectors.
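For concreteness, in one common convention (with ħ = 1) the quadratures of a mode with annihilation operator â are defined as below; their commutator reproduces the canonical position–momentum relation, which is what makes the phase-space and Wigner-function machinery available. Other normalizations, differing by factors of 2 or ħ, appear in the literature.

```latex
\hat{x} = \frac{1}{\sqrt{2}}\left(\hat{a} + \hat{a}^{\dagger}\right),
\qquad
\hat{p} = \frac{1}{i\sqrt{2}}\left(\hat{a} - \hat{a}^{\dagger}\right),
\qquad
[\hat{x}, \hat{p}] = i
```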
Quantum teleportation of continuous-variable quantum information was achieved by optical methods in 1998. [9] [10] (Science deemed this experiment one of the "top 10" advances of the year. [11] ) In 2013, quantum-optics techniques were used to create a "cluster state", a type of preparation essential to one-way (measurement-based) quantum computation, involving over 10,000 entangled temporal modes, available two at a time. [12] In another implementation, 60 modes were simultaneously entangled in the frequency domain, in the optical frequency comb of an optical parametric oscillator. [13]
Another proposal is to modify the ion-trap quantum computer: instead of storing a single qubit in the internal energy levels of an ion, one could in principle use the position and momentum of the ion as continuous quantum variables. [14]
Continuous-variable quantum systems can be used for quantum cryptography, and in particular, quantum key distribution. [1] Quantum computing is another potential application, and a variety of approaches have been considered. [1] The first method, proposed by Seth Lloyd and Samuel L. Braunstein in 1999, was in the tradition of the circuit model: quantum logic gates are created by Hamiltonians that, in this case, are quadratic functions of the harmonic-oscillator quadratures. [5] Later, measurement-based quantum computation was adapted to the setting of infinite-dimensional Hilbert spaces. [15] [16] Yet a third model of continuous-variable quantum computation encodes finite-dimensional systems (collections of qubits) into infinite-dimensional ones. This model is due to Daniel Gottesman, Alexei Kitaev and John Preskill. [17]
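To illustrate the circuit-model approach, the sketch below uses the standard fact that a Hamiltonian quadratic in the quadratures, H = ½ rᵀAr with r = (x, p), evolves the quadratures linearly in the Heisenberg picture, r → exp(ΩAt) r, where Ω is the symplectic form. It is a minimal numerical illustration of that fact; the function name `gaussian_gate` and the chosen parameters are hypothetical, not taken from the cited papers.

```python
import numpy as np
from scipy.linalg import expm

# Symplectic form for a single mode with quadrature ordering r = (x, p), [x, p] = i.
Omega = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])

def gaussian_gate(A, t):
    """Symplectic matrix generated by the quadratic Hamiltonian H = 0.5 * r^T A r
    acting for time t: in the Heisenberg picture r -> S r with S = exp(Omega A t)."""
    return expm(Omega @ A * t)

# Phase rotation: H = (omega/2)(x^2 + p^2) rotates the phase-space axes.
rotation = gaussian_gate((np.pi / 2) * np.eye(2), 1.0)

# Squeezing: H = (kappa/2)(x p + p x) stretches x and compresses p.
squeeze = gaussian_gate(0.5 * np.array([[0.0, 1.0], [1.0, 0.0]]), 1.0)

print(np.round(rotation, 3))  # ~[[0, 1], [-1, 0]]: a quarter rotation
print(np.round(squeeze, 3))   # ~diag(1.649, 0.607) = diag(e^0.5, e^-0.5)
```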
In all approaches to quantum computing, it is important to know whether a task under consideration can be carried out efficiently by a classical computer. An algorithm might be described in the language of quantum mechanics, but upon closer analysis, revealed to be implementable using only classical resources. Such an algorithm would not be taking full advantage of the extra possibilities made available by quantum physics. In the theory of quantum computation using finite-dimensional Hilbert spaces, the Gottesman–Knill theorem demonstrates that there exists a set of quantum processes that can be emulated efficiently on a classical computer. Generalizing this theorem to the continuous-variable case, it can be shown that, likewise, a class of continuous-variable quantum computations can be simulated using only classical analog computations. This class includes, in fact, some computational tasks that use quantum entanglement. [18] When the Wigner quasiprobability representations of all the quantities—states, time evolutions and measurements—involved in a computation are nonnegative, then they can be interpreted as ordinary probability distributions, indicating that the computation can be modeled as an essentially classical one. [15] This type of construction can be thought of as a continuum generalization of the Spekkens toy model. [19]
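This can be made concrete for the Gaussian case: a Gaussian state is completely specified by its mean quadrature vector and covariance matrix, and Gaussian gates act on these by linear algebra alone, so a classical computer can follow the computation directly. The following sketch is an illustrative single-mode example under those assumptions (the helper `apply_gaussian_gate` and the chosen numbers are hypothetical); it propagates the state through a phase-space rotation and reads off the statistics of a homodyne measurement of x.

```python
import numpy as np

# A single-mode Gaussian state: mean quadrature vector (x, p) and 2x2 covariance matrix.
mean = np.array([1.0, 0.0])   # displaced along x, as for a coherent state
cov = 0.5 * np.eye(2)         # vacuum-level noise in the convention [x, p] = i

def apply_gaussian_gate(S, mean, cov):
    """A Gaussian gate is a symplectic matrix S acting as r -> S r,
    so the mean maps to S @ mean and the covariance to S @ cov @ S.T."""
    return S @ mean, S @ cov @ S.T

theta = np.pi / 4
rotation = np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

mean, cov = apply_gaussian_gate(rotation, mean, cov)

# Homodyne detection of x yields a Gaussian-distributed outcome whose mean and
# variance are read directly from the updated state description.
print("homodyne x: mean =", mean[0], ", variance =", cov[0, 0])
```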
Occasionally, and somewhat confusingly, the term "continuous quantum computation" is used to refer to a different area of quantum computing: the study of how to use quantum systems having finite-dimensional Hilbert spaces to calculate or approximate the answers to mathematical questions involving continuous functions. A major motivation for investigating the quantum computation of continuous functions is that many scientific problems have mathematical formulations in terms of continuous quantities. [20] A second motivation is to explore and understand the ways in which quantum computers can be more capable or powerful than classical ones. The computational complexity of a problem can be quantified in terms of the minimal computational resources necessary to solve it. In quantum computing, resources include the number of qubits available to a computer and the number of queries that can be made to that computer. The classical complexity of many continuous problems is known. Therefore, when the quantum complexity of these problems is obtained, the question of whether quantum computers are more powerful than classical ones can be answered, and the degree of the improvement can be quantified. In contrast, the complexity of discrete problems is typically unknown; for example, the classical complexity of integer factorization is unknown.
One example of a scientific problem that is naturally expressed in continuous terms is path integration. The general technique of path integration has numerous applications including quantum mechanics, quantum chemistry, statistical mechanics, and computational finance. Because randomness is present throughout quantum theory, one typically requires that a quantum computational procedure yield the correct answer, not with certainty, but with high probability. For example, one might aim for a procedure that computes the correct answer with probability at least 3/4. One also specifies a degree of uncertainty, typically by setting the maximum acceptable error. Thus, the goal of a quantum computation could be to compute the numerical result of a path-integration problem to within an error of at most ε with probability 3/4 or more. In this context, it is known that quantum algorithms can outperform their classical counterparts, and the computational complexity of path integration, as measured by the number of times one would expect to have to query a quantum computer to get a good answer, grows as the inverse of ε. [21]
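For comparison, the classical Monte Carlo baseline for such problems is straightforward: averaging N independent sample paths gives a statistical error shrinking like 1/√N, so reaching error ε requires on the order of 1/ε² samples, whereas the quantum query count quoted above grows only as 1/ε. The toy example below is not taken from reference [21] and its helper names are hypothetical; it estimates E[∫₀¹ W(t)² dt] for a standard Brownian motion W, whose exact value is 1/2.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_functional(n_steps=200):
    """One Monte Carlo sample of the path functional integral_0^1 W(t)^2 dt,
    with W a standard Brownian motion discretized on n_steps time steps."""
    dt = 1.0 / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    path = np.cumsum(increments)   # W(t) at the grid points
    return np.sum(path**2) * dt    # Riemann-sum approximation of the integral

def monte_carlo_estimate(n_samples):
    samples = np.array([sample_functional() for _ in range(n_samples)])
    return samples.mean(), samples.std(ddof=1) / np.sqrt(n_samples)

for n in (100, 10_000):
    estimate, stderr = monte_carlo_estimate(n)
    print(f"N = {n:>6}: estimate = {estimate:.3f} (exact 0.5), std. error ~ {stderr:.3f}")
```

Halving the target error requires roughly four times as many samples; this quadratic cost is what the quantum path-integration algorithm improves upon.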
Other continuous problems for which quantum algorithms have been studied include finding matrix eigenvalues, [22] phase estimation, [23] the Sturm–Liouville eigenvalue problem, [24] solving differential equations with the Feynman–Kac formula, [25] initial value problems, [26] function approximation, [27] high-dimensional integration, [28] and quantum cryptography. [29]
A quantum computer is a computer that takes advantage of quantum mechanical phenomena. On small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior, specifically quantum superposition and entanglement, using specialized hardware that supports the preparation and manipulation of quantum states.
Quantum entanglement is the phenomenon of a group of particles being generated, interacting, or sharing spatial proximity in such a way that the quantum state of each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance. The topic of quantum entanglement is at the heart of the disparity between classical and quantum physics: entanglement is a primary feature of quantum mechanics not present in classical mechanics.
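A standard two-qubit example is the Bell state shown below: a measurement on either qubit alone gives a completely random outcome, yet the two outcomes always agree, so neither qubit can be assigned a definite state of its own.

```latex
|\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)
```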
In logic circuits, the Toffoli gate, invented by Tommaso Toffoli, is a universal reversible logic gate, which means that any classical reversible circuit can be constructed from Toffoli gates. It is also known as the "controlled-controlled-not" gate, which describes its action. It acts on three bits: if the first two bits are both set to 1, it inverts the third bit; otherwise, all bits stay the same.
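Because the gate's action is a simple truth-table rule, it can be written out directly. The sketch below (the function name `toffoli` is just an illustrative choice) implements that rule and verifies reversibility by checking that applying the gate twice returns every 3-bit input unchanged.

```python
from itertools import product

def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
    """Controlled-controlled-NOT: flip the target bit c only when both controls a and b are 1."""
    return a, b, c ^ (a & b)

# Reversibility check: the Toffoli gate is its own inverse on every 3-bit input.
for bits in product((0, 1), repeat=3):
    assert toffoli(*toffoli(*bits)) == bits

print(toffoli(1, 1, 0))  # (1, 1, 1): both controls are set, so the target is inverted
print(toffoli(1, 0, 0))  # (1, 0, 0): unchanged
```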
In quantum computing, a quantum algorithm is an algorithm that runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer, the term quantum algorithm is generally reserved for algorithms that seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.
In the interpretation of quantum mechanics, a local hidden-variable theory is a hidden-variable theory that satisfies the principle of locality. These models attempt to account for the probabilistic features of quantum mechanics via the mechanism of underlying, but inaccessible variables, with the additional requirement that distant events be statistically independent.
In quantum mechanics, separable states are multipartite quantum states that can be written as a convex combination of product states. Product states are multipartite quantum states that can be written as a tensor product of states in each space. The physical intuition behind these definitions is that product states have no correlation between the different degrees of freedom, while separable states might have correlations, but all such correlations can be explained as due to a classical random variable, as opposed to being due to entanglement.
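In symbols, a bipartite state ρ_AB is separable exactly when it admits a decomposition of the following form, with the weights p_k forming a probability distribution; a product state is the special case with a single term.

```latex
\rho_{AB} = \sum_{k} p_k \, \rho_k^{(A)} \otimes \rho_k^{(B)},
\qquad p_k \ge 0, \quad \sum_k p_k = 1
```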
Adiabatic quantum computation (AQC) is a form of quantum computing which relies on the adiabatic theorem to perform calculations and is closely related to quantum annealing.
Quantum cryptography is the science of exploiting quantum mechanical properties to perform cryptographic tasks. The best known example of quantum cryptography is quantum key distribution, which offers an information-theoretically secure solution to the key exchange problem. The advantage of quantum cryptography lies in the fact that it allows the completion of various cryptographic tasks that are proven or conjectured to be impossible using only classical communication. For example, it is impossible to copy data encoded in a quantum state. If one attempts to read the encoded data, the quantum state will be changed due to wave function collapse. This could be used to detect eavesdropping in quantum key distribution (QKD).
In quantum information theory, quantum discord is a measure of nonclassical correlations between two subsystems of a quantum system. It includes correlations that are due to quantum physical effects but do not necessarily involve quantum entanglement.
The noisy-storage model refers to a cryptographic model employed in quantum cryptography. It assumes that the quantum memory device of an attacker (adversary) trying to break the protocol is imperfect (noisy). The main goal of this model is to enable the secure implementation of two-party cryptographic primitives, such as bit commitment, oblivious transfer and secure identification.
Nicolas Jean Cerf is a Belgian physicist. He is professor of quantum mechanics and information theory at the Université Libre de Bruxelles and a member of the Royal Academies for Science and the Arts of Belgium. He received his Ph.D. at the Université Libre de Bruxelles in 1993, and was a researcher at the Université de Paris 11 and the California Institute of Technology. He is the director of the Center for Quantum Information and Computation at the Université Libre de Bruxelles.
Linear optical quantum computing or linear optics quantum computation (LOQC), also photonic quantum computing (PQC), is a paradigm of quantum computation that allows, under certain conditions, universal quantum computation. LOQC uses photons as information carriers, relies mainly on linear optical elements or optical instruments (including reciprocal mirrors and waveplates) to process quantum information, and uses photon detectors and quantum memories to detect and store quantum information.
Quantum machine learning is the integration of quantum algorithms within machine learning programs.
The six-state protocol (SSP) is a quantum cryptography protocol, a version of BB84 that uses a six-state polarization scheme on three orthogonal bases.
In quantum computing, quantum supremacy or quantum advantage is the goal of demonstrating that a programmable quantum computer can solve a problem that no classical computer can solve in any feasible amount of time, irrespective of the usefulness of the problem. The term was coined by John Preskill in 2012, but the concept dates to Yuri Manin's 1980 and Richard Feynman's 1981 proposals of quantum computing.
In quantum computing, a qubit is a unit of information analogous to a bit in classical computing, but it is affected by quantum mechanical properties such as superposition and entanglement, which can make qubits more powerful than classical bits for some tasks. Qubits are used in quantum circuits and quantum algorithms composed of quantum logic gates to solve computational problems, where they are used for input, output, and intermediate computations.
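Concretely, whereas a classical bit is either 0 or 1, the general pure state of a qubit is a superposition of the two basis states with complex amplitudes constrained only by normalization:

```latex
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
\qquad |\alpha|^{2} + |\beta|^{2} = 1
```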
Applying classical methods of machine learning to the study of quantum systems is the focus of an emergent area of physics research. A basic example of this is quantum state tomography, where a quantum state is learned from measurement. Other examples include learning Hamiltonians, learning quantum phase transitions, and automatically generating new quantum experiments. Classical machine learning is effective at processing large amounts of experimental or calculated data in order to characterize an unknown quantum system, making its application useful in contexts including quantum information theory, quantum technologies development, and computational materials design. In this context, it can be used, for example, as a tool to interpolate pre-calculated interatomic potentials or to solve the Schrödinger equation directly with a variational method.
The Eastin–Knill theorem is a no-go theorem that states: "No quantum error correcting code can have a continuous symmetry which acts transversely on physical qubits". In other words, no quantum error correcting code can transversely implement a universal gate set, where a transversal logical gate is one that can be implemented on a logical qubit by the independent action of separate physical gates on corresponding physical qubits.
This glossary of quantum computing is a list of definitions of terms and concepts used in quantum computing, its sub-disciplines, and related fields.