Quantum volume is a metric that measures the capabilities and error rates of a quantum computer. It expresses the maximum size of square quantum circuits that can be implemented successfully by the computer. The form of the circuits is independent of the quantum computer's architecture, but the compiler can transform and optimize them to take advantage of the computer's features. Thus, quantum volumes for different architectures can be compared.
Quantum computers are difficult to compare. Quantum volume is a single number designed to capture all-around performance. It is a measurement, not a calculation, and takes into account several features of a quantum computer, starting with its number of qubits; other factors measured are gate and measurement errors, crosstalk, and connectivity. [1] [2] [3]
IBM defined its Quantum Volume metric [4] because a classical computer's transistor count and a quantum computer's qubit count are not directly comparable. Qubits decohere, with a resulting loss of performance, so a small number of fault-tolerant qubits is a more useful performance measure than a larger number of noisy, error-prone qubits. [5] [6]
Generally, the larger the quantum volume, the more complex the problems a quantum computer can solve. [7]
Alternative benchmarks have also been proposed, such as cross-entropy benchmarking, reliable Quantum Operations Per Second (rQOPS) proposed by Microsoft, Circuit Layer Operations Per Second (CLOPS) proposed by IBM, and IonQ's Algorithmic Qubits. [8] [9]
The quantum volume of a quantum computer was originally defined in 2018 by Nikolaj Moll et al. [10] However, since around 2021 that definition has been supplanted by IBM's 2019 redefinition. [11] [12] The original definition depends on the number of qubits $N$ as well as the number of steps that can be executed, the circuit depth $d$:

$$\tilde{V}_Q = \min\left[N, d(N)\right]^2$$

The circuit depth depends on the effective error rate $\varepsilon_{\text{eff}}$ as

$$d \simeq \frac{1}{N \varepsilon_{\text{eff}}}$$
The effective error rate $\varepsilon_{\text{eff}}$ is defined as the average error rate of a two-qubit gate. If the physical two-qubit gates do not have all-to-all connectivity, additional SWAP gates may be needed to implement an arbitrary two-qubit gate, so that $\varepsilon_{\text{eff}} > \varepsilon$, where $\varepsilon$ is the error rate of the physical two-qubit gates. If more complex hardware gates are available, such as the three-qubit Toffoli gate, it is possible that $\varepsilon_{\text{eff}} < \varepsilon$.
The allowable circuit depth decreases when more qubits with the same effective error rate are added. So with these definitions, as soon as $d(N) < N$, the quantum volume goes down if more qubits are added. To run an algorithm that only requires $n < N$ qubits on an $N$-qubit machine, it could be beneficial to select a subset of qubits with good connectivity. For this case, Moll et al. [10] give a refined definition of quantum volume:
$$\tilde{V}_Q = \max_{n \leq N} \left\{ \min\left[n, \frac{1}{n\,\varepsilon_{\text{eff}}(n)}\right]^2 \right\}$$

where the maximum is taken over an arbitrary choice of $n$ qubits.
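To make the definition concrete, here is a minimal Python sketch of the original Moll et al. quantum volume, including the maximization over subset size. The error model and helper names are hypothetical illustrations, not any vendor's implementation:

```python
def achievable_depth(n, eps_eff):
    """Depth sustainable by n qubits at effective error rate eps_eff,
    using the approximation d ~ 1 / (n * eps_eff)."""
    return 1.0 / (n * eps_eff)

def quantum_volume_moll(N, eps_eff_of_n):
    """Original (2018) quantum volume: max over subset sizes n <= N of
    min(n, d(n))^2, where eps_eff_of_n(n) returns the effective two-qubit
    error rate of the best-connected n-qubit subset."""
    best = 0.0
    for n in range(2, N + 1):
        d = achievable_depth(n, eps_eff_of_n(n))
        best = max(best, min(n, d) ** 2)
    return best

# Toy example: effective error rate grows with subset size because larger
# subsets require more SWAP routing (illustrative numbers only).
eps = lambda n: 1e-2 * (1 + 0.1 * (n - 2))
print(quantum_volume_moll(20, eps))
```

This makes the trade-off in the text visible: past the point where $d(n) < n$, adding qubits only lowers the result, so the maximum is attained at an intermediate subset size.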
In 2019, IBM's researchers modified the quantum volume definition to be an exponential of the circuit size, stating that it corresponds to the complexity of simulating the circuit on a classical computer: [4] [13]

$$\log_2 V_Q = \underset{n \leq N}{\operatorname{arg\,max}}\, \min\left(n, d(n)\right)$$
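IBM's benchmark runs randomized square "model circuits" and checks that the device samples heavy outputs (bit strings whose ideal probability exceeds the median) more than two-thirds of the time. A minimal sketch, assuming a recent Qiskit installation that provides the `QuantumVolume` library circuit, of building one model circuit and computing its ideal heavy-output probability:

```python
import numpy as np
from qiskit.circuit.library import QuantumVolume
from qiskit.quantum_info import Statevector

n = 4                                    # square circuit: width = depth = 4
qv_circuit = QuantumVolume(n, depth=n, seed=42)

# Ideal (noiseless) output distribution of this model circuit.
probs = Statevector(qv_circuit).probabilities()

# "Heavy" outputs are those above the median ideal probability; a device
# passes at this width if it samples heavy outputs with probability > 2/3.
heavy = probs > np.median(probs)
print(f"ideal heavy-output probability: {probs[heavy].sum():.3f}")
```

In practice the measured heavy-output frequency on hardware, over many random circuits, is compared against the 2/3 threshold with a statistical confidence bound.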
The world record, as of September 2024, for the highest quantum volume is 2^21 = 2,097,152. [14] Here is an overview of historically achieved quantum volumes:
Date | Quantum volume [a] | Qubit count | Manufacturer | System name and reference
---|---|---|---|---
2020, January | 2^5 | 28 | IBM | "Raleigh" [15]
2020, June | 2^6 | 6 | Honeywell | [16]
2020, August | 2^6 | 27 | IBM | Falcon r4 "Montreal" [17]
2020, November | 2^7 | 10 | Honeywell | "System Model H1" [18]
2020, December | 2^7 | 27 | IBM | Falcon r4 "Montreal" [19]
2021, March | 2^9 | 10 | Honeywell | "System Model H1" [20]
2021, July | 2^10 | 10 | Honeywell | "Honeywell System H1" [21]
2021, December | 2^11 | 12 | Quantinuum (previously Honeywell) | "Quantinuum System Model H1-2" [22]
2022, April | 2^8 | 27 | IBM | Falcon r10 "Prague" [23]
2022, April | 2^12 | 12 | Quantinuum | "Quantinuum System Model H1-2" [24]
2022, May | 2^9 | 27 | IBM | Falcon r10 "Prague" [25]
2022, September | 2^13 | 20 | Quantinuum | "Quantinuum System Model H1-1" [26]
2023, February | 2^7 | 24 | Alpine Quantum Technologies | "Compact Ion-Trap Quantum Computing Demonstrator" [27]
2023, February | 2^15 | 20 | Quantinuum | "Quantinuum System Model H1-1" [28]
2023, May | 2^16 | 32 | Quantinuum | "Quantinuum System Model H2" [29]
2023, June | 2^19 | 20 | Quantinuum | "Quantinuum System Model H1-1" [30]
2024, February | 2^5 | 20 | IQM | "IQM 20-qubit system" [31]
2024, April | 2^20 | 20 | Quantinuum | "Quantinuum System Model H1-1" [32]
2024, August | 2^21 | 56 | Quantinuum | "Quantinuum System Model H2-1" [14]
The quantum volume benchmark defines a family of square circuits, whose number of qubits N and depth d are the same. Therefore, the output of this benchmark is a single number. However, a proposed generalization is the volumetric benchmark [33] framework, which defines a family of rectangular quantum circuits, for which N and d are uncoupled to allow the study of time/space performance trade-offs, thereby sacrificing the simplicity of a single-figure benchmark.
Volumetric benchmarks can be generalized not only to account for uncoupled N and d dimensions, but also to test different types of quantum circuits. While quantum volume benchmarks the quantum computer's ability to implement a specific type of randomized circuits, these can, in principle, be substituted by other families of random circuits, periodic circuits, [34] or algorithm-inspired circuits. Each benchmark must have a success criterion that defines whether a processor has "passed" a given test circuit.
While these data can be analyzed in many ways, a simple visualization is to plot the Pareto front of the N versus d trade-off for the processor being benchmarked. This Pareto front gives the largest depth d that a patch of a given number of qubits N can sustain, or, alternatively, the largest patch of N qubits that can execute a circuit of a given depth d.
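As an illustration, a small sketch of extracting this Pareto front from volumetric-benchmark outcomes. The pass/fail data here are entirely toy values, assumed only for demonstration:

```python
# Toy volumetric-benchmark results: (width N, depth d) -> did the processor
# pass its success criterion at that circuit shape? (illustrative only)
results = {
    (2, 2): True, (2, 4): True, (2, 8): True, (2, 16): True,
    (4, 2): True, (4, 4): True, (4, 8): True, (4, 16): False,
    (8, 2): True, (8, 4): True, (8, 8): False,
    (16, 2): True, (16, 4): False,
}

def pareto_front(results):
    """Passing (N, d) points not dominated by another passing point
    that is at least as wide and at least as deep."""
    passed = [shape for shape, ok in results.items() if ok]
    return sorted(
        (n, d) for (n, d) in passed
        if not any(n2 >= n and d2 >= d and (n2, d2) != (n, d)
                   for (n2, d2) in passed)
    )

print(pareto_front(results))  # [(2, 16), (4, 8), (8, 4), (16, 2)]
```

The quantum volume of this toy processor corresponds to the point on the front where N = d, here (4, 8) truncated to the square shape (4, 4).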
A quantum computer is a computer that exploits quantum mechanical phenomena. On small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior using specialized hardware. Classical physics cannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculations exponentially faster than any modern "classical" computer. Theoretically a large-scale quantum computer could break widely used encryption schemes and aid physicists in performing physical simulations; however, the current state of the art is largely experimental and impractical, with several obstacles to useful applications.
This is a timeline of quantum computing.
Quantum error correction (QEC) is a set of techniques used in quantum computing to protect quantum information from errors due to decoherence and other quantum noise. Quantum error correction is theorised as essential to achieve fault tolerant quantum computing that can reduce the effects of noise on stored quantum information, faulty quantum gates, faulty quantum state preparation, and faulty measurements. Effective quantum error correction would allow quantum computers with low qubit fidelity to execute algorithms of higher complexity or greater circuit depth.
Superconducting quantum computing is a branch of solid state physics and quantum computing that implements superconducting electronic circuits using superconducting qubits as artificial atoms, or quantum dots. For superconducting qubits, the two logic states are the ground state and the excited state, denoted |g⟩ and |e⟩ respectively. Research in superconducting quantum computing is conducted by companies such as Google, IBM, IMEC, BBN Technologies, Rigetti, and Intel. Many recently developed QPUs use superconducting architecture.
In quantum computing, the threshold theorem states that a quantum computer with a physical error rate below a certain threshold can, through application of quantum error correction schemes, suppress the logical error rate to arbitrarily low levels. This shows that quantum computers can be made fault-tolerant, as an analogue to von Neumann's threshold theorem for classical computation. This result was proven independently by the groups of Dorit Aharonov and Michael Ben-Or; Emanuel Knill, Raymond Laflamme, and Wojciech Zurek; and Alexei Kitaev. These results built on a paper of Peter Shor, which proved a weaker version of the threshold theorem.
In quantum computing, and more specifically in superconducting quantum computing, a transmon is a type of superconducting charge qubit designed to have reduced sensitivity to charge noise. The transmon was developed by Robert J. Schoelkopf, Michel Devoret, Steven M. Girvin, and their colleagues at Yale University in 2007. Its name is an abbreviation of the term transmission line shunted plasma oscillation qubit; one which consists of a Cooper-pair box "where the two superconductors are also [capacitively] shunted in order to decrease the sensitivity to charge noise, while maintaining a sufficient anharmonicity for selective qubit control".
IBM Quantum Platform is an online platform allowing public and premium access to cloud-based quantum computing services provided by IBM. This includes access to a set of IBM's prototype quantum processors, a set of tutorials on quantum computation, and access to an interactive textbook. As of February 2021, there are over 20 devices on the service, six of which are freely available for the public. This service can be used to run algorithms and experiments, and explore tutorials and simulations around what might be possible with quantum computing.
In quantum computing, quantum supremacy or quantum advantage is the goal of demonstrating that a programmable quantum computer can solve a problem that no classical computer can solve in any feasible amount of time, irrespective of the usefulness of the problem. The term was coined by John Preskill in 2012, but the concept dates to Yuri Manin's 1980 and Richard Feynman's 1981 proposals of quantum computing.
Jerry M. Chow is a physicist who conducts research in quantum information processing. He has worked as the manager of the Experimental Quantum Computing group at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York since 2014 and is the primary investigator of the IBM team for the IARPA Multi-Qubit Coherent Operations and Logical Qubits programs. After graduating magna cum laude with a B.A. in physics and M.S. in applied mathematics from Harvard University, he went on to earn his Ph.D. in 2010 under Robert J. Schoelkopf at Yale University. While at Yale, he participated in experiments in which superconducting qubits were coupled via a cavity bus for the first time and two-qubit algorithms were executed on a superconducting quantum processor.
In quantum computing, a qubit is a unit of information analogous to a bit in classical computing, but it is affected by quantum mechanical properties such as superposition and entanglement, which make qubits more powerful than classical bits for some tasks. Qubits are used in quantum circuits and quantum algorithms composed of quantum logic gates to solve computational problems, where they are used for input/output and intermediate computations.
Qiskit is an open-source software development kit (SDK) for working with quantum computers at the level of circuits, pulses, and algorithms. It provides tools for creating and manipulating quantum programs and running them on prototype quantum devices on IBM Quantum Platform or on simulators on a local computer. It follows the circuit model for universal quantum computation, and can be used for any quantum hardware that follows this model.
Randomized benchmarking is an experimental method for measuring the average error rates of quantum computing hardware platforms. The protocol estimates the average error rates by implementing long sequences of randomly sampled quantum gate operations. Randomized benchmarking is the industry-standard protocol used by quantum hardware developers such as IBM and Google to test the performance of the quantum operations.
Quantinuum is a quantum computing company formed by the merger of Cambridge Quantum and Honeywell Quantum Solutions. The company's H-Series trapped-ion quantum computers set the highest quantum volume to date of 1,048,576 in April 2024. This architecture supports all-to-all qubit connectivity, allowing entangled states to be created between all qubits, and enables a high fidelity of quantum states.
The current state of quantum computing is referred to as the noisy intermediate-scale quantum (NISQ) era, characterized by quantum processors containing up to 1,000 qubits which are not advanced enough yet for fault-tolerance or large enough to achieve quantum advantage. These processors, which are sensitive to their environment (noisy) and prone to quantum decoherence, are not yet capable of continuous quantum error correction. This intermediate-scale is defined by the quantum volume, which is based on the moderate number of qubits and gate fidelity. The term NISQ was coined by John Preskill in 2018.
In quantum information theory, the no low-energy trivial state (NLTS) conjecture is a precursor to a quantum PCP theorem (qPCP) and posits the existence of families of Hamiltonians with all low-energy states of non-trivial complexity. It was formulated by Michael Freedman and Matthew Hastings in 2013. An NLTS proof would be a consequence of one aspect of qPCP problems – the inability to certify an approximation of local Hamiltonians via NP completeness. In other words, an NLTS proof would be one consequence of the QMA complexity of qPCP problems. On a high level, if proved, NLTS would be one property of the non-Newtonian complexity of quantum computation. NLTS and qPCP conjectures posit the near-infinite complexity involved in predicting the outcome of quantum systems with many interacting states. These calculations of complexity would have implications for quantum computing such as the stability of entangled states at higher temperatures, and the occurrence of entanglement in natural systems. There is currently a proof of NLTS conjecture published in preprint.
This glossary of quantum computing is a list of definitions of terms and concepts used in quantum computing, its sub-disciplines, and related fields.
Microsoft Azure Quantum is a public cloud-based quantum computing platform developed by Microsoft that offers quantum hardware, software, and solutions for developers to build quantum applications. It supports a variety of quantum hardware architectures from partners including Quantinuum, IonQ, and Atom Computing. To run applications on the cloud platform, Microsoft developed the Q# quantum programming language.
Reliable Quantum Operations Per Second (rQOPS) is a metric that measures the capabilities and error rates of a quantum computer. It combines several key factors to measure how many reliable operations a computer can execute in a single second: logical error rates, clock speed, and number of reliable qubits.
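Under Microsoft's proposal, the metric is commonly summarized as the product of the number of reliable logical qubits and the logical clock speed. A minimal sketch of that arithmetic, with illustrative numbers only:

```python
def rqops(logical_qubits, logical_clock_hz):
    """rQOPS as commonly summarized: reliable logical qubits multiplied by
    the logical clock frequency (logical operations per second)."""
    return logical_qubits * logical_clock_hz

# Illustrative figures: 100 reliable logical qubits stepping at 10 kHz
# would yield one million rQOPS.
print(rqops(100, 10_000))  # 1000000
```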