Quantum volume is a metric that measures the capabilities and error rates of a quantum computer. It expresses the maximum size of square quantum circuits that can be implemented successfully by the computer. The form of the circuits is independent of the quantum computer's architecture, but the compiler can transform and optimize them to take advantage of the computer's features. Thus, quantum volumes for different architectures can be compared.
The current world record for highest quantum volume, as of April 2024, is 2^20 = 1,048,576, accomplished by Quantinuum's H1-1 20-qubit ion-trap quantum computer. [1]
Quantum computers are difficult to compare. Quantum volume is a single number designed to capture all-around performance. It is a measurement, not a calculation, and takes into account several features of a quantum computer, starting with its number of qubits; other factors measured are gate and measurement errors, crosstalk, and connectivity. [2] [3] [4]
IBM defined its Quantum Volume metric [5] because a classical computer's transistor count and a quantum computer's qubit count are not comparable measures. Qubits decohere, with a resulting loss of performance, so a few fault-tolerant qubits make a more meaningful performance measure than a larger number of noisy, error-prone qubits. [6] [7]
Generally, the larger the quantum volume, the more complex the problems a quantum computer can solve. [8]
Alternative benchmarks, such as Cross-entropy benchmarking and IonQ's Algorithmic Qubits, have also been proposed.
The quantum volume of a quantum computer was originally defined in 2018 by Nikolaj Moll et al. [9] However, since around 2021 that definition has been supplanted by IBM's 2019 redefinition. [10] [11] The original definition depends on the number of qubits N as well as the number of steps that can be executed, i.e. the circuit depth d:

Ṽ_Q = min[N, d(N)]^2
The circuit depth depends on the effective error rate ε_eff as

d ≈ 1 / (N ε_eff)
The effective error rate ε_eff is defined as the average error rate of a two-qubit gate. If the physical two-qubit gates do not have all-to-all connectivity, additional SWAP gates may be needed to implement an arbitrary two-qubit gate, and ε_eff > ε, where ε is the error rate of the physical two-qubit gates. If more complex hardware gates are available, such as the three-qubit Toffoli gate, it is possible that ε_eff < ε.
The allowable circuit depth decreases when more qubits with the same effective error rate are added. So with these definitions, as soon as d(N) < N, the quantum volume goes down if more qubits are added. To run an algorithm that only requires n < N qubits on an N-qubit machine, it could be beneficial to select a subset of qubits with good connectivity. For this case, Moll et al. [9] give a refined definition of quantum volume.
Ṽ_Q = max_{n ≤ N} ( min[n, d(n)] )^2 = max_{n ≤ N} ( min[n, 1/(n ε_eff(n))] )^2

where the maximum is taken over an arbitrary choice of n qubits.
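Under these definitions the optimum subset size can be computed directly. The following Python sketch is an illustration only (the function name and the option of a constant error rate are assumptions, not from Moll et al.); it evaluates the refined quantum volume for an N-qubit machine:

```python
def moll_quantum_volume(N, eps_eff):
    """Refined (2018) quantum volume: the maximum over subset sizes
    n <= N of min[n, d(n)]^2, with achievable depth d(n) = 1 / (n * eps_eff(n)).

    eps_eff: effective two-qubit error rate, either a constant or a
    function of the subset size n (connectivity-dependent in general).
    """
    eps = eps_eff if callable(eps_eff) else (lambda n: eps_eff)
    # Try every subset size from 2 to N and keep the best square circuit.
    return max(min(n, 1.0 / (n * eps(n))) ** 2 for n in range(2, N + 1))
```

For a constant ε_eff = 10^-3 and N = 20, the achievable depth d(20) = 50 exceeds the width, so the quantum volume is min(20, 50)^2 = 400; on a much larger machine the optimal subset size saturates near √(1/ε_eff), illustrating why adding noisier qubits beyond that point no longer helps.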
In 2019, IBM's researchers modified the quantum volume definition to be an exponential of the circuit size, stating that it corresponds to the complexity of simulating the circuit on a classical computer: [5] [12]

log2 V_Q = argmax_{n ≤ N} min[n, d(n)]
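In practice, IBM measures quantum volume with a heavy-output test: random "model circuits" of equal width and depth m (layers of random qubit pairings with Haar-random two-qubit unitaries) are executed, and the processor passes width m if it produces heavy outputs (bitstrings whose ideal probability exceeds the median) more than two-thirds of the time; the quantum volume is then 2^m for the largest passing m. The NumPy sketch below is an illustration, not IBM's implementation (function names and structure are assumptions); it computes the ideal heavy-output probability of one such random circuit:

```python
import numpy as np

def haar_unitary(dim, rng):
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases for the Haar measure

def qv_heavy_output_prob(m, rng):
    """Ideal heavy-output probability of one width-m, depth-m QV model circuit."""
    state = np.zeros([2] * m, dtype=complex)
    state[(0,) * m] = 1.0                     # start in |0...0>
    for _ in range(m):                        # depth d = m layers
        perm = rng.permutation(m)             # random pairing of qubits
        for i in range(0, m - 1, 2):
            q0, q1 = int(perm[i]), int(perm[i + 1])
            u = haar_unitary(4, rng).reshape(2, 2, 2, 2)  # random two-qubit gate
            # Apply the gate to qubits q0, q1 and restore axis order.
            state = np.tensordot(u, state, axes=[[2, 3], [q0, q1]])
            state = np.moveaxis(state, [0, 1], [q0, q1])
    probs = np.abs(state.ravel()) ** 2
    return probs[probs > np.median(probs)].sum()
```

Averaged over many random circuits, the ideal heavy-output probability approaches (1 + ln 2)/2 ≈ 0.85, while a uniformly random (fully depolarized) device scores 0.5; the 2/3 pass threshold sits between the two.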
Date | Quantum volume | Qubit count | Manufacturer | System name and reference |
---|---|---|---|---|
2020, January | 2^5 | 28 | IBM | "Raleigh" [13] |
2020, June | 2^6 | 6 | Honeywell | [14] |
2020, August | 2^6 | 27 | IBM | Falcon r4 "Montreal" [15] |
2020, November | 2^7 | 10 | Honeywell | "System Model H1" [16] |
2020, December | 2^7 | 27 | IBM | Falcon r4 "Montreal" [17] |
2021, March | 2^9 | 10 | Honeywell | "System Model H1" [18] |
2021, July | 2^10 | 10 | Honeywell | "Honeywell System H1" [19] |
2021, December | 2^11 | 12 | Quantinuum (previously Honeywell) | "Quantinuum System Model H1-2" [20] |
2022, April | 2^8 | 27 | IBM | Falcon r10 "Prague" [21] |
2022, April | 2^12 | 12 | Quantinuum | "Quantinuum System Model H1-2" [22] |
2022, May | 2^9 | 27 | IBM | Falcon r10 "Prague" [23] |
2022, September | 2^13 | 20 | Quantinuum | "Quantinuum System Model H1-1" [24] |
2023, February | 2^7 | 24 | Alpine Quantum Technologies | "Compact Ion-Trap Quantum Computing Demonstrator" [25] |
2023, February | 2^15 | 20 | Quantinuum | "Quantinuum System Model H1-1" [26] |
2023, May | 2^16 | 32 | Quantinuum | "Quantinuum System Model H2" [27] |
2023, June | 2^19 | 20 | Quantinuum | "Quantinuum System Model H1-1" [28] |
2024, February | 2^5 | 20 | IQM | "IQM 20-qubit system" [29] |
2024, April | 2^20 | 20 | Quantinuum | "Quantinuum System Model H1-1" [1] |
The quantum volume benchmark defines a family of square circuits, whose number of qubits N and depth d are the same. Therefore, the output of this benchmark is a single number. However, a proposed generalization is the volumetric benchmark [30] framework, which defines a family of rectangular quantum circuits, for which N and d are uncoupled to allow the study of time/space performance trade-offs, thereby sacrificing the simplicity of a single-figure benchmark.
Volumetric benchmarks can be generalized not only to account for uncoupled N and d dimensions, but also to test different types of quantum circuits. While quantum volume benchmarks a quantum computer's ability to implement one specific type of random circuit, these can, in principle, be substituted by other families of random circuits, periodic circuits, [31] or algorithm-inspired circuits. Each benchmark must have a success criterion that defines whether a processor has "passed" a given test circuit.
While these data can be analyzed in many ways, a simple method of visualization is illustrating the Pareto front of the N versus d trade-off for the processor being benchmarked. This Pareto front provides information on the largest depth d a patch of a given number of qubits N can withstand, or, alternatively, the biggest patch of N qubits that can withstand executing a circuit of given depth d.
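As a small illustration (the function name and the (N, d, passed) data format are hypothetical, not from the volumetric-benchmark paper), the Pareto front can be extracted from a table of pass/fail results:

```python
def pareto_front(results):
    """Non-dominated (N, d) points from volumetric-benchmark outcomes.

    results: iterable of (N, d, passed) tuples, one per test circuit.
    Returns the Pareto front as a list of (N, d) pairs sorted by N:
    for each width on the front, the largest depth passed, keeping only
    points not dominated by a wider patch with equal or greater depth.
    """
    best = {}                                 # width -> deepest passing circuit
    for n, d, passed in results:
        if passed:
            best[n] = max(best.get(n, 0), d)
    front = []
    for n in sorted(best, reverse=True):      # scan from widest patch down
        if not front or best[n] > front[-1][1]:
            front.append((n, best[n]))        # strictly deeper than any wider patch
    return sorted(front)
```

Reading the front one way gives the largest depth a patch of N qubits withstands; reading it the other way gives the widest patch that survives a circuit of depth d.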