Randomized benchmarking is an experimental method for measuring the average error rates of quantum computing hardware platforms. The protocol estimates the average error rates by implementing long sequences of randomly sampled quantum gate operations. [1] Randomized benchmarking is the industry-standard protocol used by quantum hardware developers such as IBM [2] and Google [3] to test the performance of their quantum operations.
The original theory of randomized benchmarking, proposed by Joseph Emerson and collaborators, [1] considered the implementation of sequences of Haar-random operations, but this had several practical limitations. The now-standard protocol for randomized benchmarking (RB) relies on uniformly random Clifford operations, as proposed in 2006 by Dankert et al. [4] as an application of the theory of unitary t-designs. In current usage, randomized benchmarking sometimes refers to the broader family of generalizations of the original 2005 protocol involving different random gate sets [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] that can identify various features of the strength and type of errors affecting the elementary quantum gate operations. Randomized benchmarking protocols are an important means of verifying and validating quantum operations and are also routinely used for the optimization of quantum control procedures. [15]
Randomized benchmarking offers several key advantages over alternative approaches to error characterization. For example, the number of experimental procedures required for full characterization of errors (called tomography) grows exponentially with the number of quantum bits (called qubits). This makes tomographic methods impractical for even small systems of just 3 or 4 qubits. In contrast, randomized benchmarking protocols are the only known approaches to error characterization that scale efficiently as the number of qubits in the system increases. [4] Thus RB can be applied in practice to characterize errors in arbitrarily large quantum processors. Additionally, in experimental quantum computing, procedures for state preparation and measurement (SPAM) are also error-prone, and thus quantum process tomography is unable to distinguish errors associated with gate operations from errors associated with SPAM. In contrast, RB protocols are robust to state preparation and measurement errors. [1] [7]
Randomized benchmarking protocols estimate key features of the errors that affect a set of quantum operations by examining how the observed fidelity of the final quantum state decreases as the length of the random sequence increases. If the set of operations satisfies certain mathematical properties, [1] [4] [7] [16] [10] [11] [12] such as comprising a sequence of twirls [5] [17] with unitary two-designs, [4] then the measured decay can be shown to be an invariant exponential with a rate fixed uniquely by features of the error model.
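Concretely, for gates drawn from a unitary two-design such as the Clifford group, the average survival probability after a sequence of m random gates follows the model F(m) = A·p^m + B, where A and B absorb SPAM errors and the average error rate is r = (d − 1)(1 − p)/d for Hilbert-space dimension d. The sketch below shows only the fitting step, on synthetic single-qubit data; the parameter values and the use of scipy.optimize.curve_fit are illustrative assumptions, not data or code from any published experiment.

```python
# A minimal sketch of the RB fitting step: generate synthetic single-qubit
# survival probabilities from F(m) = A * p**m + B, then recover the decay
# parameter p and the average error rate r = (d-1)(1-p)/d with d = 2.
# The values of A, B, p below are illustrative assumptions, not real data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def model(m, A, p, B):
    # RB decay model: A and B absorb SPAM errors, p sets the decay rate
    return A * p**m + B

lengths = np.arange(1, 201, 10)                   # random-sequence lengths m
true_A, true_p, true_B = 0.45, 0.98, 0.50         # assumed ground truth
data = model(lengths, true_A, true_p, true_B)
data = data + rng.normal(0, 0.005, lengths.size)  # simulated shot noise

(A, p, B), _ = curve_fit(model, lengths, data, p0=[0.5, 0.9, 0.5])
d = 2                                             # single-qubit Hilbert space
r = (d - 1) * (1 - p) / d                         # average error rate
print(f"fitted p = {p:.4f}, average error rate r = {r:.5f}")
```

Because A and B are fitted alongside p, SPAM errors shift only the fitted A and B while leaving the decay rate p untouched, which is the mechanism behind RB's robustness to SPAM.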
Randomized benchmarking was proposed in Scalable noise estimation with random unitary operators, [1] where it was shown that long sequences of quantum gates sampled uniformly at random from the Haar measure on the group SU(d) would lead to an exponential decay at a rate that was uniquely fixed by the error model. The authors also showed, under the assumption of gate-independent errors, that the measured decay rate is directly related to an important figure of merit, the average gate fidelity, and is independent of the choice of initial state, of any errors in the initial state, and of the specific random sequences of quantum gates. This protocol applies in arbitrary dimension d and for an arbitrary number n of qubits, where d = 2^n. The SU(d) RB protocol had two important limitations that were overcome in a modified protocol proposed by Dankert et al., [4] who proposed sampling the gate operations uniformly at random from any unitary two-design, such as the Clifford group. They proved that this would produce the same exponential decay rate as the random SU(d) version of the protocol proposed in Emerson et al. [1] This follows from the observation that a random sequence of gates is equivalent to an independent sequence of twirls under that group, as conjectured in [1] and later proven in [5]. This Clifford-group approach to randomized benchmarking [1] [4] is now the standard method for assessing error rates in quantum computers. A variation of this protocol was proposed by NIST in 2008 [6] for the first experimental implementation of an RB-type protocol for single-qubit gates. However, the sampling of random gates in the NIST protocol was later proven not to reproduce any unitary two-design. [12] The NIST RB protocol was nevertheless later shown to also produce an exponential fidelity decay, albeit with a rate that depends on non-invariant features of the error model. [12]
In recent years, a rigorous theoretical framework has been developed for Clifford-group RB protocols showing that they work reliably under very broad experimental conditions. In 2011 and 2012, Magesan et al. [7] [8] proved that the exponential decay rate is fully robust to arbitrary state preparation and measurement errors (SPAM). They also proved a connection between the average gate fidelity and the diamond norm metric of error that is relevant to the fault-tolerant threshold, and provided evidence that the observed decay is exponential and related to the average gate fidelity even if the error model varies across the gate operations, so-called gate-dependent errors, which is the experimentally realistic situation. In 2018, Wallman [16] and Dugas et al. [11] showed that, despite concerns raised in [18], even under very strong gate-dependent errors the standard RB protocols produce an exponential decay at a rate that precisely measures the average gate fidelity of the experimentally relevant errors. The results of Wallman [16] in particular proved that the RB error rate is so robust to gate-dependent error models that it provides an extremely sensitive tool for detecting non-Markovian errors: under a standard RB experiment, only non-Markovian errors (including time-dependent Markovian errors) can produce a statistically significant deviation from an exponential decay. [16]
The standard RB protocol was first implemented for single-qubit gate operations in 2012 at Yale on a superconducting qubit. [19] An earlier variation of this protocol, defined only for single-qubit operations, was implemented by NIST in 2008 [6] on a trapped ion. The first implementation of the standard RB protocol for two-qubit gates was performed in 2012 at NIST on a system of two trapped ions. [20]
Quantum teleportation is a technique for transferring quantum information from a sender at one location to a receiver some distance away. While teleportation is commonly portrayed in science fiction as a means to transfer physical objects from one location to the next, quantum teleportation only transfers quantum information. The sender does not have to know the particular quantum state being transferred. Moreover, the location of the recipient can be unknown, but to complete the quantum teleportation, classical information needs to be sent from sender to receiver. Because classical information needs to be sent, quantum teleportation cannot occur faster than the speed of light.
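As a concrete illustration, the following NumPy sketch simulates the textbook three-qubit teleportation circuit (the variable names and the state-vector bookkeeping are this sketch's own, not a prescribed implementation): qubit 0 carries the unknown state, qubits 1 and 2 share a Bell pair, and only two classical measurement bits travel from sender to receiver.

```python
# A minimal sketch of quantum teleportation simulated with 3-qubit state
# vectors in NumPy. Qubit ordering: qubit 0 is the most significant bit.
import numpy as np

rng = np.random.default_rng(7)

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def op(gate, wire):
    """Embed a single-qubit gate on one wire of the 3-qubit register."""
    mats = [I, I, I]
    mats[wire] = gate
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def cnot(control, target):
    """CNOT on the 3-qubit register, built from control projectors."""
    P0 = np.diag([1, 0]).astype(complex)
    P1 = np.diag([0, 1]).astype(complex)
    a, b = [I, I, I], [I, I, I]
    a[control] = P0
    b[control], b[target] = P1, X
    return (np.kron(np.kron(a[0], a[1]), a[2]) +
            np.kron(np.kron(b[0], b[1]), b[2]))

# Qubit 0 holds the unknown state alpha|0> + beta|1>; qubits 1, 2 start in |00>
alpha, beta = 0.6, 0.8j
psi = np.kron(np.array([alpha, beta]), np.array([1, 0, 0, 0], dtype=complex))

psi = cnot(1, 2) @ op(H, 1) @ psi   # share a Bell pair between qubits 1 and 2
psi = op(H, 0) @ cnot(0, 1) @ psi   # sender's Bell-basis rotation

# Measure qubits 0 and 1 -- these two classical bits are all that gets sent
outcome = rng.choice(8, p=np.abs(psi) ** 2)
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
keep = [((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1 for i in range(8)]
psi = np.where(keep, psi, 0)
psi /= np.linalg.norm(psi)

# Receiver's corrections, conditioned on the two classical bits
if m1:
    psi = op(X, 2) @ psi
if m0:
    psi = op(Z, 2) @ psi

base = (m0 << 2) | (m1 << 1)
print(psi[base], psi[base + 1])  # ~ (0.6, 0.8j): qubit 2 now holds the state
```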
In quantum computing, a quantum algorithm is an algorithm which runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer, the term quantum algorithm is usually used for those algorithms which seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.
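A minimal worked example of an inherently quantum algorithm is Deutsch's algorithm, which decides with a single query whether a one-bit function is constant or balanced, a task that classically requires two evaluations. The NumPy sketch below is an illustrative state-vector simulation, not a prescribed implementation.

```python
# A minimal sketch of Deutsch's algorithm: one quantum query to f decides
# whether f: {0,1} -> {0,1} is constant or balanced.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

def oracle(f):
    """Unitary U_f|x>|y> = |x>|y XOR f(x)> as a 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1])               # prepare |0>|1>
    state = np.kron(H, H) @ state                 # superposition + phase ancilla
    state = oracle(f) @ state                     # a single query to f
    state = np.kron(H, I) @ state                 # interfere on the query qubit
    p_one = abs(state[2])**2 + abs(state[3])**2   # P(first qubit = 1)
    return "balanced" if p_one > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant zero  -> "constant"
print(deutsch(lambda x: x))      # identity       -> "balanced"
print(deutsch(lambda x: 1 - x))  # negation       -> "balanced"
```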
Quantum error correction (QEC) is used in quantum computing to protect quantum information from errors due to decoherence and other quantum noise. Quantum error correction is theorised as essential to achieve fault-tolerant quantum computing that can reduce the effects of noise on stored quantum information, faulty quantum gates, faulty quantum state preparation, and faulty measurements. This would allow algorithms of greater circuit depth.
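The simplest illustration of the idea is the three-qubit bit-flip repetition code, which encodes one logical qubit as α|000⟩ + β|111⟩ and uses two parity checks to locate and undo a single X error. The NumPy sketch below (the variable names and direct state-vector bookkeeping are this sketch's own simplifications) walks through encoding, error injection, syndrome extraction, and correction.

```python
# A minimal sketch of the three-qubit bit-flip code: two Z Z parity checks
# identify a single X error without measuring the encoded amplitudes.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def on(gate, wire):
    """Apply a single-qubit gate to one wire of a 3-qubit register."""
    mats = [I, I, I]
    mats[wire] = gate
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

# Encode one logical qubit: alpha|000> + beta|111>
alpha, beta = 0.6, 0.8
logical = np.zeros(8, dtype=complex)
logical[0b000], logical[0b111] = alpha, beta

noisy = on(X, 1) @ logical    # inject a bit-flip error on qubit 1

def parity(state, a, b):
    """Z_a Z_b syndrome bit; identical on every supported basis state."""
    idx = int(np.argmax(np.abs(state) > 1e-12))
    return ((idx >> (2 - a)) ^ (idx >> (2 - b))) & 1

s01, s12 = parity(noisy, 0, 1), parity(noisy, 1, 2)

# The syndrome pair identifies which qubit (if any) was flipped
flipped = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s01, s12)]
corrected = noisy if flipped is None else on(X, flipped) @ noisy

print(np.allclose(corrected, logical))   # True: logical state recovered
```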
A qutrit is a unit of quantum information that is realized by a 3-level quantum system, which may be in a superposition of three mutually orthogonal quantum states.
A trapped-ion quantum computer is one proposed approach to a large-scale quantum computer. Ions, or charged atomic particles, can be confined and suspended in free space using electromagnetic fields. Qubits are stored in stable electronic states of each ion, and quantum information can be transferred through the collective quantized motion of the ions in a shared trap. Lasers are applied to induce coupling between the qubit states or coupling between the internal qubit states and the external motional states.
Quantum cloning is a process that takes an arbitrary, unknown quantum state and makes an exact copy without altering the original state in any way. Quantum cloning is forbidden by the laws of quantum mechanics, as shown by the no-cloning theorem, which states that there is no operation for cloning any arbitrary state perfectly. In Dirac notation, a hypothetical cloning operation U acting on the state |ψ⟩ to be cloned and a blank state |e⟩ would have to satisfy U|ψ⟩|e⟩ = |ψ⟩|ψ⟩ for every |ψ⟩.
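The impossibility follows from the linearity and unitarity of quantum evolution; a minimal sketch of the standard argument, starting from the cloning condition above:

```latex
% Assume a single unitary U clones two states |psi> and |phi>:
%   U|psi>|e> = |psi>|psi>,   U|phi>|e> = |phi>|phi>.
% Take the inner product of the two equations and use U^dagger U = I:
\begin{align*}
  \langle\psi|\phi\rangle
    = \big(\langle\psi|\langle e|\big)\, U^\dagger U \,\big(|\phi\rangle|e\rangle\big)
    = \big(\langle\psi|\langle\psi|\big)\big(|\phi\rangle|\phi\rangle\big)
    = \langle\psi|\phi\rangle^{2} .
\end{align*}
% Hence <psi|phi> must equal 0 or 1: only identical or mutually orthogonal
% states can share a cloner, so no U clones an arbitrary unknown state.
```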
In quantum information and quantum computing, a cluster state is a type of highly entangled state of multiple qubits. Cluster states are generated in lattices of qubits with Ising-type interactions. A cluster C is a connected subset of a d-dimensional lattice, and a cluster state is a pure state of the qubits located on C. Cluster states differ from other types of entangled states, such as GHZ states or W states, in that it is more difficult to eliminate their quantum entanglement. Another way of thinking of cluster states is as a particular instance of graph states, where the underlying graph is a connected subset of a d-dimensional lattice. Cluster states are especially useful in the context of the one-way quantum computer.
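The lattice construction can be made concrete in a few lines: prepare every qubit in |+⟩ and apply a controlled-Z gate on each pair of lattice neighbours, the circuit-model counterpart of the Ising-type interaction. The NumPy sketch below builds a 4-qubit linear cluster state this way and checks one of its defining stabilizers; the construction is standard, but the specific check is this sketch's own choice.

```python
# A minimal sketch: build a 4-qubit linear cluster state from |+>^4 by
# applying CZ between nearest neighbours, then verify a stabilizer.
import numpy as np

n = 4
plus = np.ones(2) / np.sqrt(2)           # |+> = (|0> + |1>)/sqrt(2)
state = plus
for _ in range(n - 1):
    state = np.kron(state, plus)         # start from |+>^n

def cz(n, a, b):
    """Controlled-Z between qubits a and b on an n-qubit register."""
    diag = np.ones(2 ** n)
    for i in range(2 ** n):
        if (i >> (n - 1 - a)) & 1 and (i >> (n - 1 - b)) & 1:
            diag[i] = -1                 # phase -1 when both qubits are 1
    return np.diag(diag)

for k in range(n - 1):                   # Ising-type coupling on each edge
    state = cz(n, k, k + 1) @ state

# Verify a defining stabilizer: K_1 = Z_0 X_1 Z_2 fixes the cluster state
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
K1 = np.kron(np.kron(np.kron(Z, X), Z), I)
print(np.allclose(K1 @ state, state))    # True
```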
Quantum cryptography is the science of exploiting quantum mechanical properties to perform cryptographic tasks. The best known example of quantum cryptography is quantum key distribution, which offers an information-theoretically secure solution to the key exchange problem. The advantage of quantum cryptography lies in the fact that it allows the completion of various cryptographic tasks that are proven or conjectured to be impossible using only classical communication. For example, it is impossible to copy data encoded in a quantum state. If one attempts to read the encoded data, the quantum state will be changed due to wave function collapse. This could be used to detect eavesdropping in quantum key distribution (QKD).
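The eavesdropping-detection idea is easiest to see in the BB84 key-distribution protocol: random bits are encoded in randomly chosen conjugate bases, and measuring in the wrong basis disturbs the state. The NumPy sketch below simulates only the noiseless, eavesdropper-free sifting step; the sampling choices are illustrative assumptions.

```python
# A minimal sketch of BB84 basis sifting: Alice encodes random bits in random
# bases (Z or X), Bob measures in random bases, and they keep matching rounds.
import numpy as np

rng = np.random.default_rng(1)
n = 20

alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)  # 0 = Z basis {|0>,|1>}, 1 = X basis {|+>,|->}
bob_bases = rng.integers(0, 2, n)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

bob_bits = np.empty(n, dtype=int)
for i in range(n):
    psi = np.array([1.0, 0.0]) if alice_bits[i] == 0 else np.array([0.0, 1.0])
    if alice_bases[i] == 1:
        psi = H @ psi                # encode in the X basis instead of Z
    if bob_bases[i] == 1:
        psi = H @ psi                # rotate Bob's basis choice back to Z
    bob_bits[i] = rng.random() < abs(psi[1]) ** 2   # Born-rule measurement

keep = alice_bases == bob_bases      # sifting over a public classical channel
print(alice_bits[keep])
print(bob_bits[keep])                # equal to Alice's sifted bits: shared key
```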
In quantum computing, and more specifically in superconducting quantum computing, a transmon is a type of superconducting charge qubit designed to have reduced sensitivity to charge noise. The transmon was developed by Robert J. Schoelkopf, Michel Devoret, Steven M. Girvin, and their colleagues at Yale University in 2007. Its name is an abbreviation of the term transmission line shunted plasma oscillation qubit; one which consists of a Cooper-pair box "where the two superconductors are also [capacitively] shunted in order to decrease the sensitivity to charge noise, while maintaining a sufficient anharmonicity for selective qubit control".
Dynamical decoupling (DD) is an open-loop quantum control technique employed in quantum computing to suppress decoherence by taking advantage of rapid, time-dependent control modulation. In its simplest form, DD is implemented by periodic sequences of instantaneous control pulses, whose net effect is to approximately average the unwanted system-environment coupling to zero. Different schemes exist for designing DD protocols that use realistic bounded-strength control pulses, as well as for achieving high-order error suppression, and for making DD compatible with quantum gates. In spin systems in particular, commonly used protocols for dynamical decoupling include the Carr-Purcell and the Carr-Purcell-Meiboom-Gill schemes. They are based on the Hahn spin echo technique of applying periodic pulses to enable refocusing and hence extend the coherence times of qubits.
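The refocusing mechanism behind these sequences can be seen in a toy ensemble simulation: qubits with random static frequency offsets dephase under free evolution, while an ideal instantaneous π pulse at the midpoint inverts the accumulated phase so that static offsets cancel exactly. The NumPy sketch below (the noise model and parameters are illustrative assumptions) compares the two cases.

```python
# A minimal sketch of the Hahn spin-echo effect behind CP/CPMG sequences.
import numpy as np

rng = np.random.default_rng(0)
deltas = rng.normal(0.0, 1.0, 10_000)   # random static detuning per member
t = np.array([0.5, 1.0, 2.0, 4.0])      # total evolution times

# Free evolution: member phases delta*t spread out, so the ensemble coherence
# <cos(delta * t)> decays (inhomogeneous dephasing, the T2* decay)
fid = np.cos(np.outer(t, deltas)).mean(axis=1)

# Hahn echo: a pi pulse at t/2 negates the phase accumulated in the first
# half, so the net phase is delta*t/2 - delta*t/2 = 0 for every member
net_phase = np.outer(t / 2, deltas) - np.outer(t / 2, deltas)
echo = np.cos(net_phase).mean(axis=1)

for ti, f, e in zip(t, fid, echo):
    print(f"t = {ti:.1f}   free decay: {f:+.3f}   with echo: {e:+.3f}")
# Without the pulse the coherence falls off as exp(-t**2 / 2); with the echo
# it stays at 1 because static offsets are perfectly refocused.
```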
Linear optical quantum computing or linear optics quantum computation (LOQC), also photonic quantum computing (PQC), is a paradigm of quantum computation allowing, under certain conditions, universal quantum computation. LOQC uses photons as information carriers, mainly linear optical elements or optical instruments (including reciprocal mirrors and waveplates) to process quantum information, and photon detectors and quantum memories to detect and store quantum information.
Quantum machine learning is the integration of quantum algorithms within machine learning programs.
The six-state protocol (SSP) is a quantum cryptography protocol that is a version of BB84, using a six-state polarization scheme on three orthogonal bases.
The KLM scheme or KLM protocol is an implementation of linear optical quantum computing (LOQC), developed in 2000 by Emanuel Knill, Raymond Laflamme, and Gerard J. Milburn. This protocol allows for the creation of universal quantum computers using solely linear optical tools. The KLM protocol uses linear optical elements, single-photon sources, and photon detectors as resources to construct a quantum computation scheme involving only ancilla resources, quantum teleportations, and error corrections.
Continuous-variable (CV) quantum information is the area of quantum information science that makes use of physical observables, like the strength of an electromagnetic field, whose numerical values belong to continuous intervals. One primary application is quantum computing. In a sense, continuous-variable quantum computation is "analog", while quantum computation using qubits is "digital." In more technical terms, the former makes use of Hilbert spaces that are infinite-dimensional, while the Hilbert spaces for systems comprising collections of qubits are finite-dimensional. One motivation for studying continuous-variable quantum computation is to understand what resources are necessary to make quantum computers more powerful than classical ones.
In quantum computing, a qubit is a unit of information analogous to a bit in classical computing, but it is affected by quantum mechanical properties such as superposition and entanglement, which make qubits more powerful than classical bits for some tasks. Qubits are used in quantum circuits and quantum algorithms composed of quantum logic gates to solve computational problems, where they serve for input/output and intermediate computations.
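As a minimal illustration, a qubit's state can be represented as a normalized two-component complex vector; the NumPy sketch below puts |0⟩ into an equal superposition with a Hadamard gate and samples measurement outcomes according to the Born rule (the sampling scheme is this sketch's own simplification).

```python
# A minimal sketch of a qubit as a 2-component complex state vector.
import numpy as np

rng = np.random.default_rng(42)

ket0 = np.array([1, 0], dtype=complex)              # the basis state |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0                                      # (|0> + |1>)/sqrt(2)
p1 = abs(psi[1]) ** 2                               # Born rule: P(outcome 1)

samples = rng.random(10_000) < p1                   # 10,000 simulated shots
print(psi)                                          # [0.707+0j, 0.707+0j]
print(samples.mean())                               # ~ 0.5
```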
Applying classical methods of machine learning to the study of quantum systems is the focus of an emergent area of physics research. A basic example of this is quantum state tomography, where a quantum state is learned from measurement. Other examples include learning Hamiltonians, learning quantum phase transitions, and automatically generating new quantum experiments. Classical machine learning is effective at processing large amounts of experimental or calculated data in order to characterize an unknown quantum system, making its application useful in contexts including quantum information theory, quantum technology development, and computational materials design. In this context it can be used, for example, as a tool to interpolate pre-calculated interatomic potentials or to solve the Schrödinger equation directly with a variational method.
Magic state distillation is a method for creating more accurate quantum states from multiple noisy ones, which is important for building fault-tolerant quantum computers. It has also been linked to quantum contextuality, a concept thought to contribute to quantum computers' power.
The Eastin–Knill theorem is a no-go theorem that states: "No quantum error correcting code can have a continuous symmetry which acts transversally on physical qubits". In other words, no quantum error correcting code can transversally implement a universal gate set, where a transversal logical gate is one that can be implemented on a logical qubit by the independent action of separate physical gates on corresponding physical qubits.
This glossary of quantum computing is a list of definitions of terms and concepts used in quantum computing, its sub-disciplines, and related fields.