In quantum computing, quantum supremacy or quantum advantage is the goal of demonstrating that a programmable quantum computer can solve a problem that no classical computer can solve in any feasible amount of time, irrespective of the usefulness of the problem. [1] [2] [3] The term was coined by John Preskill in 2011, [1] [4] but the concept dates to Yuri Manin's 1980 [5] and Richard Feynman's 1981 [6] proposals of quantum computing.
Conceptually, quantum supremacy involves both the engineering task of building a powerful quantum computer and the computational-complexity-theoretic task of finding a problem that can be solved by that quantum computer and has a superpolynomial speedup over the best known or possible classical algorithm for that task. [7] [8]
Examples of proposals to demonstrate quantum supremacy include the boson sampling proposal of Aaronson and Arkhipov, [9] and sampling the output of random quantum circuits. [10] [11] The output distributions obtained by making measurements in boson sampling or quantum random circuit sampling are flat, but structured in such a way that one cannot classically efficiently sample from a distribution that is close to the distribution generated by the quantum experiment. For this conclusion to be valid, only very mild assumptions from computational complexity theory have to be invoked. In this sense, quantum random sampling schemes have the potential to show quantum supremacy. [12]
A notable property of quantum supremacy is that it can be feasibly achieved by near-term quantum computers, [4] since it does not require a quantum computer to perform any useful task [13] or use high-quality quantum error correction, [14] both of which are long-term goals. [2] Consequently, researchers view quantum supremacy as primarily a scientific goal, with relatively little immediate bearing on the future commercial viability of quantum computing. [2] Due to unpredictable possible improvements in classical computers and algorithms, quantum supremacy may be temporary or unstable, placing possible achievements under significant scrutiny. [15] [16]
In 1936, Alan Turing published his paper “On Computable Numbers”, [17] addressing Hilbert's Entscheidungsproblem. Turing's paper described what he called a “universal computing machine”, which later became known as a Turing machine. In 1980, Paul Benioff built on Turing's paper to propose the theoretical feasibility of quantum computing. His paper, “The Computer as a Physical System: A Microscopic Quantum Mechanical Hamiltonian Model of Computers as Represented by Turing Machines”, [18] was the first to show that quantum-mechanical computation can be made reversible provided the energy dissipated is arbitrarily small. In 1981, Richard Feynman argued that quantum mechanics could not be efficiently simulated on classical devices. [19] During a lecture, he delivered the famous quote, “Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy.” [19] Soon after this, David Deutsch produced a description of a quantum Turing machine and designed an algorithm to run on a quantum computer. [20]
In 1994, further progress toward quantum supremacy was made when Peter Shor formulated Shor's algorithm, a method for factoring integers in polynomial time on a quantum computer. [21] In 1995, Christopher Monroe and David Wineland published their paper “Demonstration of a Fundamental Quantum Logic Gate”, [22] marking the first demonstration of a quantum logic gate, specifically the two-bit "controlled-NOT". In 1996, Lov Grover spurred interest in building quantum computers by publishing Grover's algorithm in his paper “A fast quantum mechanical algorithm for database search”. [23] In 1998, Jonathan A. Jones and Michele Mosca published “Implementation of a Quantum Algorithm to Solve Deutsch's Problem on a Nuclear Magnetic Resonance Quantum Computer”, [24] marking the first demonstration of a quantum algorithm.
Substantial progress toward quantum supremacy was made in the 2000s, from the first 5-qubit nuclear magnetic resonance computer (2000) to the demonstration of Shor's algorithm (2001) and the implementation of Deutsch's algorithm in a cluster-state quantum computer (2007). [25] In 2011, D-Wave Systems of Burnaby, British Columbia became the first company to sell a quantum computer commercially. [26] In 2012, physicist Nanyang Xu reached a milestone by using an improved adiabatic factoring algorithm to factor 143, although the methods used by Xu were met with objections. [27] Not long after this accomplishment, Google purchased its first quantum computer. [28]
Google had announced plans to demonstrate quantum supremacy before the end of 2017 with an array of 49 superconducting qubits. [29] In early January 2018, Intel announced a similar hardware program. [30] In October 2017, IBM demonstrated the simulation of 56 qubits on a classical supercomputer, thereby increasing the computational power needed to establish quantum supremacy. [31] In November 2018, Google announced a partnership with NASA that would "analyze results from quantum circuits run on Google quantum processors, and ... provide comparisons with classical simulation to both support Google in validating its hardware and establish a baseline for quantum supremacy." [32] Theoretical work published in 2018 suggested that quantum supremacy should be possible with a "two-dimensional lattice of 7×7 qubits and around 40 clock cycles" if error rates can be pushed low enough. [33] The scheme discussed was a variant of a quantum random sampling scheme in which qubits undergo random quantum circuits featuring quantum gates drawn from a universal gate set, followed by measurements in the computational basis.
On June 18, 2019, Quanta Magazine suggested that quantum supremacy could happen in 2019, according to Neven's law. [34] On September 20, 2019, the Financial Times reported that "Google claims to have reached quantum supremacy with an array of 54 qubits out of which 53 were functional, which were used to perform a series of operations in 200 seconds that would take a supercomputer about 10,000 years to complete". [35] [36] On October 23, Google officially confirmed the claims. [37] [38] [39] IBM responded that some of the claims were excessive, estimating that the task could be completed in 2.5 days rather than 10,000 years and listing techniques a classical supercomputer could use to maximize computing speed. IBM's response is relevant because the most powerful supercomputer at the time, Summit, was made by IBM. [40] [15] [41] Researchers have since developed better classical algorithms for the sampling problem used to claim quantum supremacy, substantially narrowing the gap between Google's Sycamore processor and classical supercomputers [42] [43] [44] and even surpassing it. [45] [46] [47]
In December 2020, a group based at the University of Science and Technology of China (USTC) led by Pan Jianwei reached quantum supremacy by implementing Gaussian boson sampling on 76 photons with their photonic quantum computer Jiuzhang. [48] [49] [50] The paper states that to generate the number of samples the quantum computer produced in 200 seconds, a classical supercomputer would require 2.5 billion years of computation. [3]
In October 2021, teams from USTC again reported quantum supremacy, this time with two machines: the photonic quantum computer Jiuzhang 2.0 and the superconducting quantum computer Zuchongzhi. Jiuzhang 2.0 implemented Gaussian boson sampling to detect 113 photons from a 144-mode optical interferometer, achieving a sampling-rate speedup of 10^24, a difference of 37 photons and 10 orders of magnitude over the previous Jiuzhang. [51] [52] Zuchongzhi is a programmable superconducting quantum computer that must be kept at extremely low temperatures to work efficiently; it used random circuit sampling on 56 qubits of its tunable-coupling architecture of 66 transmons, exceeding Google's 2019 Sycamore result by 3 qubits and implying a classical-simulation cost 2 to 3 orders of magnitude greater. [53] [54] [55] A third study reported that Zuchongzhi 2.1 completed a sampling task that "is about 6 orders of magnitude more difficult than that of Sycamore" "in the classic simulation". [56]
In June 2022, Xanadu reported a boson sampling experiment with claims similar to those of Google and USTC. Their setup used loops of optical fiber and time multiplexing to replace the network of beam splitters with a single, more easily reconfigurable one. They detected a mean of 125 photons, with a maximum of 219, from 216 squeezed modes (squeezed light follows a photon-number distribution, so a mode can contain more than one photon) and claimed a speedup 50 million times greater than that of previous experiments. [57] [58]
In March 2024, D-Wave Systems reported on an experiment using a quantum-annealing-based processor that outperformed classical methods, including tensor networks and neural networks. They argued that no known classical approach could yield the same results as the quantum simulation within a reasonable time frame, and claimed quantum supremacy. The task performed was the simulation of the non-equilibrium dynamics of a magnetic spin system quenched through a quantum phase transition. [59]
Complexity arguments concern how the amount of some resource needed to solve a problem (generally time or memory) scales with the size of the input. In this setting, a problem consists of an input problem instance (a binary string) and a returned solution (the corresponding output string), while resources refer to designated elementary operations, memory usage, or communication. A collection of local operations allows the computer to generate the output string. A circuit model and its corresponding operations are useful in describing both classical and quantum problems; the classical circuit model consists of basic operations such as AND, OR, and NOT gates, while the quantum model consists of classical circuits and the application of unitary operations. Unlike the finite set of classical gates, there are infinitely many quantum gates due to the continuous nature of unitary operations. In both classical and quantum cases, complexity swells with increasing problem size. [60] As an extension of classical computational complexity theory, quantum complexity theory considers what a theoretical universal quantum computer could accomplish without accounting for the difficulty of building a physical quantum computer or dealing with decoherence and noise. [61] Since quantum information is a generalization of classical information, quantum computers can simulate any classical algorithm. [61]
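To make the contrast concrete, the following minimal numpy sketch (an illustrative example, not drawn from the cited sources; the gate and angles are arbitrary choices) places the single fixed classical NOT operation next to a continuous one-parameter family of quantum gates, each a valid 2×2 unitary:

```python
import numpy as np

# Classical NOT: one fixed operation on a bit.
def classical_not(bit: int) -> int:
    return 1 - bit

# Quantum: a continuous family of gates. For every real angle theta,
# Rx(theta) is a valid single-qubit gate (a 2x2 unitary matrix).
def rx(theta: float) -> np.ndarray:
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

for theta in (0.1, 0.5, np.pi):          # arbitrary sample angles
    u = rx(theta)
    assert np.allclose(u.conj().T @ u, np.eye(2))  # unitarity: U†U = I

print(rx(np.pi))  # at theta = pi, Rx acts as NOT up to a global phase of -i
```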
Quantum complexity classes are sets of problems that share a common quantum computational model, with each model containing specified resource constraints. Circuit models are useful in describing quantum complexity classes. [62] The most useful quantum complexity class is BQP (bounded-error quantum polynomial time), the class of decision problems that can be solved in polynomial time by a universal quantum computer. Questions about BQP still remain, such as the connection between BQP and the polynomial-time hierarchy, whether or not BQP contains NP-complete problems, and the exact lower and upper bounds of the BQP class. Not only would answers to these questions reveal the nature of BQP, but they would also answer difficult classical complexity theory questions. One strategy for better understanding BQP is by defining related classes, ordering them into a conventional class hierarchy, and then looking for properties that are revealed by their relation to BQP. [63] There are several other quantum complexity classes, such as QMA (quantum Merlin Arthur) and QIP (quantum interactive polynomial time). [62]
The difficulty of proving what cannot be done with classical computing is a common problem in definitively demonstrating quantum supremacy. In contrast to decision problems, which require yes-or-no answers, sampling problems ask for samples from probability distributions. [64] If there were a classical algorithm that could efficiently sample from the output of an arbitrary quantum circuit, the polynomial hierarchy would collapse to the third level, which is generally considered very unlikely. [10] [11] Boson sampling is a more specific proposal, the classical hardness of which depends upon the intractability of calculating the permanent of a large matrix with complex entries, a #P-complete problem. [65] The arguments used to reach this conclusion have been extended to IQP sampling, [66] where only the conjecture that the average- and worst-case complexities of the problem are the same is needed, [64] as well as to random circuit sampling, [11] the task replicated by the Google [38] and USTC research groups. [48]
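The permanent invoked here is simple to state but exponentially costly to compute exactly; the best general methods, such as Ryser's inclusion-exclusion formula, still take time exponential in the matrix size. A short Python sketch of Ryser's formula (illustrative only; the #P-completeness result concerns exact computation for complex matrices like the one below):

```python
import itertools
import numpy as np

def permanent(a: np.ndarray) -> complex:
    """Permanent via Ryser's inclusion-exclusion formula. As written this
    takes O(2^n * n^2) time; no polynomial-time method is known, in line
    with the #P-completeness of the exact problem."""
    n = a.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            # Product over rows of the row sums restricted to `cols`.
            total += (-1) ** r * np.prod(a[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

a = np.array([[1, 2], [3, 4]], dtype=complex)
print(permanent(a))  # 1*4 + 2*3 = (10+0j)
```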
The following are proposals for demonstrating quantum computational supremacy using current technology, often called NISQ devices. [2] Such proposals include (1) a well-defined computational problem, (2) a quantum algorithm to solve this problem, (3) the best candidate classical algorithm for the problem, as a point of comparison, and (4) a complexity-theoretic argument that, under a reasonable assumption, no classical algorithm can perform significantly better than current algorithms (so that the quantum algorithm still provides a superpolynomial speedup). [7] [67]
Shor's algorithm finds the prime factorization of an n-bit integer in $\tilde{O}(n^3)$ time, [68] whereas the best known classical algorithm requires $2^{O(n^{1/3})}$ time and the best proven upper bound for the complexity of this problem is $O(2^{n/4})$. [69] The algorithm can also provide a speedup for any problem that reduces to integer factoring, including the membership problem for matrix groups over fields of odd order. [70]
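For orientation, only the order-finding step of Shor's algorithm is quantum; the surrounding reduction from factoring to order finding is classical number theory. A sketch of that classical skeleton, with brute-force order finding standing in for the quantum subroutine (so it is only practical for tiny inputs such as 15):

```python
import math
import random

def factor_via_order(N: int) -> int:
    """Classical skeleton of Shor's algorithm, for an odd composite N
    that is not a prime power. The order-finding loop below is the
    step a quantum computer performs in polynomial time."""
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g                 # lucky draw: a already shares a factor
        r = 1
        while pow(a, r, N) != 1:     # brute-force order of a modulo N
            r += 1
        if r % 2 == 1:
            continue                 # need an even order; pick a new a
        y = pow(a, r // 2, N)
        if y == N - 1:
            continue                 # trivial square root of 1; retry
        return math.gcd(y - 1, N)    # guaranteed nontrivial factor of N

print(factor_via_order(15))  # prints 3 or 5
```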
This algorithm is important both practically and historically for quantum computing. It was the first polynomial-time quantum algorithm proposed for a real-world problem that is believed to be hard for classical computers. [68] Namely, it gives a superpolynomial speedup under the reasonable assumption that RSA, a well-established cryptosystem, is secure. [71]
Factoring has some benefit over other supremacy proposals because factoring can be checked quickly with a classical computer just by multiplying integers, even for large instances where factoring algorithms are intractably slow. However, implementing Shor's algorithm for large numbers is infeasible with current technology, [72] [73] so it is not being pursued as a strategy for demonstrating supremacy.
Boson sampling, a computing paradigm based upon sending identical photons through a linear-optical network, can solve certain sampling and search problems that, assuming a few complexity-theoretic conjectures (that calculating the permanent of Gaussian matrices is #P-hard and that the polynomial hierarchy does not collapse), are intractable for classical computers. [9] However, it has been shown that boson sampling in a system with large enough loss and noise can be simulated efficiently. [74]
The largest experimental implementation of boson sampling to date had 6 modes and so could handle up to 6 photons at a time. [75] The best proposed classical algorithm for simulating boson sampling runs in $O(n 2^n + \mathrm{poly}(m, n))$ time for a system with $n$ photons and $m$ output modes. [76] [77] This algorithm leads to an estimate of 50 photons required to demonstrate quantum supremacy with boson sampling. [76] [77]
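For intuition on where the permanent enters, the probability of a collision-free detection pattern equals the squared modulus of the permanent of an n×n submatrix of the interferometer unitary. A brute-force sketch for toy sizes (enumerating all collision-free outcomes, which is exactly what becomes infeasible near 50 photons; the toy sizes and random unitary are illustrative choices):

```python
import itertools
import numpy as np

def permanent(a: np.ndarray) -> complex:
    # Naive sum over permutations; fine for the tiny matrices used here.
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

n, m = 2, 4                             # photons, modes (toy sizes)
rng = np.random.default_rng(0)
z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
q, r = np.linalg.qr(z)
u = q * (np.diag(r) / abs(np.diag(r)))  # Haar-random interferometer unitary

# Photons enter modes 0..n-1. For a collision-free output pattern S
# (one photon per mode in S), Pr[S] = |perm(U[S, 0:n])|^2.
probs = {s: abs(permanent(u[np.ix_(s, range(n))])) ** 2
         for s in itertools.combinations(range(m), n)}
print(sum(probs.values()))  # < 1; the remainder sits on collision outcomes
```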
The best known algorithm for simulating an arbitrary random quantum circuit requires an amount of time that scales exponentially with the number of qubits, leading one group to estimate that around 50 qubits could be enough to demonstrate quantum supremacy. [33] In 2018, Bouland, Fefferman, Nirkhe and Vazirani [11] gave theoretical evidence that efficiently simulating a random quantum circuit would require a collapse of the computational polynomial hierarchy. Google had announced its intention to demonstrate quantum supremacy by the end of 2017 by constructing and running a 49-qubit chip that would be able to sample distributions inaccessible to any current classical computers in a reasonable amount of time. [29] The largest universal quantum circuit simulator running on classical supercomputers at the time was able to simulate 48 qubits. [78] But for particular kinds of circuits, larger quantum circuit simulations with 56 qubits are possible. [79] This may require increasing the number of qubits needed to demonstrate quantum supremacy. [31] On October 23, 2019, Google published the results of this quantum supremacy experiment in the Nature article “Quantum Supremacy Using a Programmable Superconducting Processor”, in which it presented a new 53-qubit processor, named “Sycamore”, capable of fast, high-fidelity quantum logic gates, used to perform the benchmark testing. Google claimed that its machine performed the target computation in 200 seconds and estimated that the best classical algorithm would take 10,000 years on the world's fastest supercomputer to solve the same problem. [80] IBM disputed this claim, saying that an improved classical algorithm should be able to solve the problem in two and a half days on that same supercomputer. [81] [82] [83]
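A minimal state-vector sketch of the random circuit sampling task itself, at toy sizes and with simplified gate choices (memory for the state vector grows as 2^n, which is why exhaustive simulation fails near 50 qubits):

```python
import numpy as np

rng = np.random.default_rng(1)
n, depth = 4, 5                        # toy sizes; supremacy regime is ~50 qubits

def haar_unitary(d):
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / abs(np.diag(r)))

state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                         # start in |00...0>
for _ in range(depth):
    for q in range(n):                 # random single-qubit gate on each qubit
        u = haar_unitary(2)
        t = state.reshape([2] * n)
        state = np.moveaxis(np.tensordot(u, t, axes=([1], [q])), 0, q).reshape(-1)
    for q in range(n - 1):             # entangle neighbours with CZ gates
        t = state.reshape([2] * n)
        idx = [slice(None)] * n
        idx[q], idx[q + 1] = 1, 1
        t[tuple(idx)] *= -1            # CZ: flip sign where both qubits are |1>
        state = t.reshape(-1)

p = np.abs(state) ** 2                 # Born-rule output distribution
samples = rng.choice(2 ** n, size=10, p=p / p.sum())
print([format(s, f"0{n}b") for s in samples])
```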
Quantum computers are much more susceptible to errors than classical computers due to decoherence and noise. [84] The threshold theorem states that a noisy quantum computer can use quantum error-correcting codes [85] [86] to simulate a noiseless quantum computer, assuming the error introduced in each computation cycle is below a certain threshold. [87] Numerical simulations suggest that this threshold may be as high as 3%. [88] However, it is not yet definitively known how the resources needed for error correction will scale with the number of qubits. [89] Skeptics point to the unknown behavior of noise in scaled-up quantum systems as a potential roadblock for successfully implementing quantum computing and demonstrating quantum supremacy. [84] [90]
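The flavor of the threshold claim can be seen already in the classical three-qubit repetition code, which corrects any single bit flip by majority vote (a deliberately stripped-down illustration: it ignores phase errors, so it is not a full quantum code):

```python
import random

def encode(bit):                 # logical 0 -> [0,0,0], logical 1 -> [1,1,1]
    return [bit] * 3

def noisy_channel(codeword, p):  # each physical bit flips with probability p
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):            # majority vote corrects any single flip
    return int(sum(codeword) >= 2)

p, trials = 0.05, 100_000
failures = sum(decode(noisy_channel(encode(0), p)) != 0 for _ in range(trials))
print(failures / trials)  # ~3p^2 = 0.0075 < p = 0.05: below threshold the
                          # redundancy helps; at large p it would hurt instead
```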
Some researchers have suggested that the term "quantum supremacy" should not be used, arguing that the word "supremacy" evokes distasteful comparisons to the racist belief of white supremacy. A controversial [91] [92] commentary article in the journal Nature signed by thirteen researchers asserts that the alternative phrase "quantum advantage" should be used instead. [93] John Preskill, the professor of theoretical physics at the California Institute of Technology who coined the term, has since clarified that the term was proposed to explicitly describe the moment that a quantum computer gains the ability to perform a task that a classical computer never could. He further explained that he specifically rejected the term "quantum advantage" as it did not fully encapsulate the meaning of his new term: the word "advantage" would imply that a computer with quantum supremacy would have a slight edge over a classical computer while the word "supremacy" better conveys complete ascendancy over any classical computer. [4] Nature's Philip Ball wrote in December 2020 that the term "quantum advantage" has "largely replaced" the term "quantum supremacy". [94]
A quantum computer is a computer that exploits quantum mechanical phenomena. On small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior using specialized hardware. Classical physics cannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculations exponentially faster than any modern "classical" computer. Theoretically a large-scale quantum computer could break some widely used encryption schemes and aid physicists in performing physical simulations; however, the current state of the art is largely experimental and impractical, with several obstacles to useful applications.
Shor's algorithm is a quantum algorithm for finding the prime factors of an integer. It was developed in 1994 by the American mathematician Peter Shor. It is one of the few known quantum algorithms with compelling potential applications and strong evidence of superpolynomial speedup compared to best known classical (non-quantum) algorithms. On the other hand, factoring numbers of practical significance requires far more qubits than available in the near future. Another concern is that noise in quantum circuits may undermine results, requiring additional qubits for quantum error correction.
This is a timeline of quantum computing.
In quantum computing, a quantum algorithm is an algorithm that runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer, the term quantum algorithm is generally reserved for algorithms that seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.
A quantum Turing machine (QTM) or universal quantum computer is an abstract machine used to model the effects of a quantum computer. It provides a simple model that captures all of the power of quantum computation—that is, any quantum algorithm can be expressed formally as a particular quantum Turing machine. However, the computationally equivalent quantum circuit is a more common model.
In quantum computing, the Gottesman–Knill theorem is a theoretical result by Daniel Gottesman and Emanuel Knill stating that stabilizer circuits, circuits consisting only of gates from the normalizer of the qubit Pauli group (also called the Clifford group), can be perfectly simulated in polynomial time on a probabilistic classical computer. The Clifford group can be generated solely by the controlled-NOT, Hadamard, and phase gates; stabilizer circuits can therefore be constructed using only these gates.
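The engine of the theorem is that Clifford gates map Pauli operators to (signed) Pauli operators under conjugation, so states can be tracked by a short list of Pauli stabilizers instead of exponentially many amplitudes. A small numpy check of this closure property for the Hadamard and phase gates (illustrative only, not a stabilizer simulator):

```python
import numpy as np

paulis = {"I": np.eye(2),
          "X": np.array([[0, 1], [1, 0]]),
          "Y": np.array([[0, -1j], [1j, 0]]),
          "Z": np.array([[1, 0], [0, -1]])}
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
S = np.array([[1, 0], [0, 1j]])                # phase gate

# A Clifford gate U sends each Pauli P to another Pauli, up to sign,
# under conjugation U P U†; verify exhaustively for H and S.
for uname, u in (("H", H), ("S", S)):
    for pname, p in paulis.items():
        conj = u @ p @ u.conj().T
        sign, image = next((s, q) for q in paulis for s in (1, -1)
                           if np.allclose(conj, s * paulis[q]))
        print(f"{uname} {pname} {uname}† = {sign:+d} {image}")
```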
Quantum annealing (QA) is an optimization process for finding the global minimum of a given objective function over a given set of candidate solutions by means of quantum fluctuations. Quantum annealing is used mainly for problems where the search space is discrete with many local minima, such as finding the ground state of a spin glass or solving the traveling salesman problem. The term "quantum annealing" was first proposed in 1988 by B. Apolloni, N. Cesa Bianchi and D. De Falco as a quantum-inspired classical algorithm. It was formulated in its present form by T. Kadowaki and H. Nishimori in 1998, though an imaginary-time variant without quantum coherence had been discussed by A. B. Finnila, M. A. Gomez, C. Sebenik and J. D. Doll in 1994.
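A toy numpy rendering of the interpolation behind quantum annealing, H(s) = (1-s)·H_driver + s·H_problem, exactly diagonalized for a two-qubit Ising objective (the coefficients are arbitrary choices; a physical annealer follows this path without ever diagonalizing anything):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I = np.eye(2)

# Toy objective encoded as an Ising Hamiltonian (arbitrary coefficients):
Hp = np.kron(Z, Z) - 0.5 * np.kron(Z, I)
# Transverse-field driver whose ground state is the uniform superposition:
Hd = -(np.kron(X, I) + np.kron(I, X))

for s in np.linspace(0.0, 1.0, 5):
    H = (1 - s) * Hd + s * Hp          # interpolated annealing Hamiltonian
    evals = np.linalg.eigvalsh(H)
    print(f"s={s:.2f}  ground energy={evals[0]:+.3f}  gap={evals[1]-evals[0]:.3f}")
# By the adiabatic theorem, sweeping s slowly relative to the minimum gap
# keeps the system in the instantaneous ground state, so at s=1 it lands
# in the ground state of Hp, i.e. the optimum of the encoded objective.
```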
Adiabatic quantum computation (AQC) is a form of quantum computing which relies on the adiabatic theorem to perform calculations and is closely related to quantum annealing.
In computational complexity theory, QMA, which stands for Quantum Merlin Arthur, is the set of languages for which, when a string is in the language, there is a polynomial-size quantum proof that convinces a polynomial time quantum verifier of this fact with high probability. Moreover, when the string is not in the language, every polynomial-size quantum state is rejected by the verifier with high probability.
Quantum complexity theory is the subfield of computational complexity theory that deals with complexity classes defined using quantum computers, a computational model based on quantum mechanics. It studies the hardness of computational problems in relation to these complexity classes, as well as the relationship between quantum complexity classes and classical complexity classes.
Quantum simulators permit the study of a quantum system in a programmable fashion. In this instance, simulators are special purpose devices designed to provide insight about specific physics problems. Quantum simulators may be contrasted with generally programmable "digital" quantum computers, which would be capable of solving a wider class of quantum problems.
Linear optical quantum computing or linear optics quantum computation (LOQC), also photonic quantum computing (PQC), is a paradigm of quantum computation, allowing (under certain conditions, described below) universal quantum computation. LOQC uses photons as information carriers, mainly uses linear optical elements, or optical instruments (including reciprocal mirrors and waveplates) to process quantum information, and uses photon detectors and quantum memories to detect and store quantum information.
Quantum machine learning is the integration of quantum algorithms within machine learning programs.
Boson sampling is a restricted model of non-universal quantum computation introduced by Scott Aaronson and Alex Arkhipov after the original work of Lidror Troyansky and Naftali Tishby, who explored the possible use of boson scattering to evaluate expectation values of permanents of matrices. The model consists of sampling from the probability distribution of identical bosons scattered by a linear interferometer. Although the problem is well defined for any bosonic particles, its photonic version is currently considered the most promising platform for a scalable implementation of a boson sampling device, which makes it a non-universal approach to linear optical quantum computing. Moreover, while not universal, the boson sampling scheme is strongly believed to implement computing tasks that are hard to implement with classical computers by using far fewer physical resources than a full linear-optical quantum computing setup. This advantage makes it an ideal candidate for demonstrating the power of quantum computation in the near term.
Continuous-variable (CV) quantum information is the area of quantum information science that makes use of physical observables, like the strength of an electromagnetic field, whose numerical values belong to continuous intervals. One primary application is quantum computing. In a sense, continuous-variable quantum computation is "analog", while quantum computation using qubits is "digital." In more technical terms, the former makes use of Hilbert spaces that are infinite-dimensional, while the Hilbert spaces for systems comprising collections of qubits are finite-dimensional. One motivation for studying continuous-variable quantum computation is to understand what resources are necessary to make quantum computers more powerful than classical ones.
In quantum computing, a qubit is a unit of information analogous to a bit in classical computing, but it is affected by quantum mechanical properties such as superposition and entanglement which allow qubits to be in some ways more powerful than classical bits for some tasks. Qubits are used in quantum circuits and quantum algorithms composed of quantum logic gates to solve computational problems, where they are used for input/output and intermediate computations.
The one clean qubit model of computation is performed on an $n$-qubit system with one pure state and $n-1$ maximally mixed states. This model was motivated by the highly mixed states that are prevalent in nuclear magnetic resonance quantum computers. It is described by the density matrix $\rho = |0\rangle\langle 0| \otimes \frac{I}{2^{n-1}}$, where $I$ is the identity matrix. In computational complexity theory, DQC1 (deterministic quantum computation with one clean qubit) is the class of decision problems solvable by a one-clean-qubit machine in polynomial time, upon measuring the first qubit, with an error probability of at most 1/poly(n) for all instances.
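The canonical DQC1 task is estimating the normalized trace of a unitary: with the clean qubit placed in |+⟩ and a controlled-U applied to the mixed register, the clean qubit's X expectation equals Re tr(U)/2^(n-1). A dense-matrix check for a tiny register (a sketch of the principle, not an efficient simulation):

```python
import numpy as np

rng = np.random.default_rng(2)
k = 3                                   # register qubits (maximally mixed)
d = 2 ** k

z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
q, r = np.linalg.qr(z)
U = q * (np.diag(r) / abs(np.diag(r)))  # random unitary on the register

# DQC1 input: clean qubit in |+> (i.e. after a Hadamard), register maximally
# mixed: rho = |+><+| (x) I / 2^k, matching the density matrix above.
plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
rho = np.kron(plus @ plus.conj().T, np.eye(d) / d)

# Apply controlled-U, then measure X on the clean qubit.
cU = np.block([[np.eye(d), np.zeros((d, d))],
               [np.zeros((d, d)), U]])
rho = cU @ rho @ cU.conj().T
X = np.array([[0, 1], [1, 0]])
expect_x = np.trace(np.kron(X, np.eye(d)) @ rho).real

print(expect_x, np.trace(U).real / d)   # the two numbers coincide
```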
Cross-entropy benchmarking (XEB) is a quantum benchmarking protocol which can be used to demonstrate quantum supremacy. In XEB, a random quantum circuit is executed on a quantum computer multiple times in order to collect a set of samples in the form of bitstrings $x_i$. The bitstrings are then used to calculate the cross-entropy benchmark fidelity via a classical computer, given by $F_{\mathrm{XEB}} = 2^n \langle P(x_i) \rangle_i - 1$, where $n$ is the number of qubits and $P(x_i)$ is the probability, computed for the ideal circuit, of observing bitstring $x_i$, averaged over the collected samples.
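A compact sketch of the estimator, with a random dense unitary standing in for the random circuit and the "device" simulated by sampling the ideal distribution itself, so the fidelity should land near 1 (a noisy device drifts toward 0):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
dim = 2 ** n

# Stand-in for a random circuit: a random unitary applied to |0...0>
# defines the ideal output probabilities P(x).
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
q, r = np.linalg.qr(z)
u = q * (np.diag(r) / abs(np.diag(r)))
p_ideal = np.abs(u[:, 0]) ** 2

# "Run the device": here we sample from the ideal distribution itself.
samples = rng.choice(dim, size=50_000, p=p_ideal / p_ideal.sum())
f_xeb = dim * p_ideal[samples].mean() - 1
print(f_xeb)                                  # close to 1 (ideal device)

# A fully depolarized device outputs uniform bitstrings, giving ~0:
print(dim * p_ideal[rng.integers(0, dim, 50_000)].mean() - 1)
```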
This glossary of quantum computing is a list of definitions of terms and concepts used in quantum computing, its sub-disciplines, and related fields.
Quantum random circuits (QRC) are a concept of incorporating an element of randomness into the local unitary operations and measurements of a quantum circuit. The idea is similar to that of random matrix theory: use the QRC to obtain almost exact results for non-integrable, hard-to-solve problems by averaging over an ensemble of outcomes. This incorporation of randomness into the circuits has many possible advantages, including (i) the validation of quantum computers, the method that Google used when it claimed quantum supremacy in 2019, and (ii) understanding the universal structure of non-equilibrium and thermalization processes in quantum many-body dynamics.