The surface code is a topological quantum error correcting code, and an example of a stabilizer code, defined on a two-dimensional spin lattice. [1] The first type of surface code introduced by Alexei Kitaev in 1997 was the toric code, which gets its name from its periodic boundary conditions, giving it the shape of a torus. These conditions give the model translational invariance, which is useful for analytic study. The toric code is the simplest and most well studied of the quantum double models. [2] It is also the simplest example of topological order—Z2 topological order (first studied in the context of Z2 spin liquid in 1991). [3] [4] The toric code can also be considered to be a Z2 lattice gauge theory in a particular limit. [5]
However, on many quantum computation platforms, experimental realization of a surface code is much easier if the code can be embedded on a 2D plane. This motivated the design of another type of surface code with open boundary conditions, the planar code. [6]
The surface code is defined on a two-dimensional lattice, usually chosen to be the square lattice. Below, we will first illustrate the basic concept with the toric code, where the lattice has periodic boundary conditions, i.e., the top boundary is connected to the bottom and the left boundary to the right. Topologically, this is equivalent to defining the lattice on a torus.
A qubit is located on each edge of the lattice. For a $d \times d$ lattice, there are $d^2$ horizontal edges and $d^2$ vertical edges, thus $2d^2$ qubits in total. Stabilizer operators are defined on the qubits around each vertex $v$ and plaquette (face) $p$ of the lattice as follows:

$A_v = \prod_{i \in v} X_i, \qquad B_p = \prod_{i \in \partial p} Z_i.$

Here $i \in v$ denotes the edges touching the vertex $v$, and $i \in \partial p$ denotes the edges surrounding the plaquette $p$. The code space of the toric code is the subspace for which all stabilizers act trivially, hence for any state $|\psi\rangle$ in this space it holds that

$A_v |\psi\rangle = |\psi\rangle, \qquad B_p |\psi\rangle = |\psi\rangle \quad \text{for all } v,\, p.$
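To make this concrete, the following minimal Python sketch lists the four edges in the support of each vertex and plaquette stabilizer on a $d \times d$ periodic lattice. The particular edge-numbering convention is an arbitrary choice made for this illustration, not something fixed by the definition above.

```python
import itertools

d = 3  # linear size of the periodic (toric) lattice; any d >= 2 works

# Assumed edge-numbering convention for this sketch:
#   horizontal edge H(r, c) joins vertex (r, c) to (r, (c+1) % d)  -> index r*d + c
#   vertical   edge V(r, c) joins vertex (r, c) to ((r+1) % d, c)  -> index d*d + r*d + c
def H(r, c):
    return (r % d) * d + (c % d)

def V(r, c):
    return d * d + (r % d) * d + (c % d)

def vertex_support(r, c):
    """Edges on which A_v acts with Pauli X: the four edges touching vertex (r, c)."""
    return {H(r, c), H(r, c - 1), V(r, c), V(r - 1, c)}

def plaquette_support(r, c):
    """Edges on which B_p acts with Pauli Z: the four edges bounding the plaquette
    whose top-left corner is the vertex (r, c)."""
    return {H(r, c), H(r + 1, c), V(r, c), V(r, c + 1)}

print(f"{2 * d * d} qubits on the {d} x {d} torus")
for r, c in itertools.product(range(d), repeat=2):
    print(f"A_({r},{c}): X on edges {sorted(vertex_support(r, c))}   "
          f"B_({r},{c}): Z on edges {sorted(plaquette_support(r, c))}")
```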
For the toric code, this space is four-dimensional, and so can be used to store two qubits of quantum information. This can be proven by considering the number of independent stabilizer operators: for a $d \times d$ lattice, there are $d^2$ vertex stabilizers and $d^2$ plaquette stabilizers, but the product of all vertex stabilizers is the identity, and so is the product of all plaquette stabilizers. Therefore there are $2d^2 - 2$ independent stabilizers, leaving two qubits' worth of degrees of freedom.
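This counting can be checked numerically. The sketch below (reusing the edge-numbering convention assumed above) builds the binary support vectors of all generators and computes their rank over GF(2), recovering two logical qubits:

```python
import numpy as np

d = 3
n = 2 * d * d  # number of qubits (edges)

def H(r, c): return (r % d) * d + (c % d)          # horizontal edge index
def V(r, c): return d * d + (r % d) * d + (c % d)  # vertical edge index

def gf2_rank(rows):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    m = np.array(rows, dtype=np.uint8) % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]
        rank += 1
    return rank

# Binary support vectors of the X-type (vertex) and Z-type (plaquette) generators
x_rows, z_rows = [], []
for r in range(d):
    for c in range(d):
        xv = np.zeros(n, dtype=np.uint8)
        xv[[H(r, c), H(r, c - 1), V(r, c), V(r - 1, c)]] = 1
        x_rows.append(xv)
        zv = np.zeros(n, dtype=np.uint8)
        zv[[H(r, c), H(r + 1, c), V(r, c), V(r, c + 1)]] = 1
        z_rows.append(zv)

independent = gf2_rank(x_rows) + gf2_rank(z_rows)  # X and Z sectors are independent
print("qubits:", n, " generators:", 2 * d * d, " independent:", independent)
print("logical qubits:", n - independent)          # expected: 2
```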
The occurrence of errors will usually move the state out of the stabilizer space, resulting in vertices and plaquettes for which the above condition does not hold. Specifically, a Pauli $Z$ error on qubit $i$ flips the two vertex stabilizers $A_v$ with $i \in v$ (the endpoints of the edge $i$), and a Pauli $X$ error on qubit $i$ flips the two plaquette stabilizers $B_p$ with $i \in \partial p$ (the plaquettes on either side of the edge $i$). The positions of these violations constitute the syndrome of the code, which can be used for error correction.
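A hedged sketch of syndrome extraction under these rules: given the sets of edges carrying $Z$ and $X$ errors, it reports which vertex and plaquette checks are flipped. The edge labels are again an arbitrary convention chosen for the example.

```python
d = 3

def H(r, c): return ("H", r % d, c % d)  # horizontal edge label
def V(r, c): return ("V", r % d, c % d)  # vertical edge label

def vertex_support(r, c):
    return {H(r, c), H(r, c - 1), V(r, c), V(r - 1, c)}

def plaquette_support(r, c):
    return {H(r, c), H(r + 1, c), V(r, c), V(r, c + 1)}

def syndrome(z_errors, x_errors):
    """Flipped checks: a stabilizer is violated iff it shares an odd number of edges
    with the error (Z errors flip the X-type A_v, X errors flip the Z-type B_p)."""
    flipped_vertices = [(r, c) for r in range(d) for c in range(d)
                        if len(vertex_support(r, c) & z_errors) % 2 == 1]
    flipped_plaquettes = [(r, c) for r in range(d) for c in range(d)
                          if len(plaquette_support(r, c) & x_errors) % 2 == 1]
    return flipped_vertices, flipped_plaquettes

# A single Z error lights up the two vertices at the ends of its edge;
# a two-edge X string lights up the two plaquettes at the ends of its dual path.
print(syndrome(z_errors={H(0, 0)}, x_errors={V(1, 1), V(1, 2)}))
```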
The unique nature of topological codes such as the surface code is that stabilizer violations can be interpreted as quasiparticles. Specifically, if the code is in a state $|\phi\rangle$ such that

$A_v |\phi\rangle = -|\phi\rangle,$

a quasiparticle known as an $e$ anyon can be said to exist on the vertex $v$. Similarly, violations of the $B_p$ are associated with so-called $m$ anyons on the plaquettes. The stabilizer space therefore corresponds to the anyonic vacuum. Single-qubit errors cause pairs of anyons to be created and transported around the lattice.
When errors create an anyon pair and move the anyons, one can imagine a path connecting the two composed of all links acted upon. If the anyons then meet and are annihilated, this path describes a loop. If the loop is topologically trivial, it has no effect on the stored information. The annihilation of the anyons, in this case, corrects all of the errors involved in their creation and transport. However, if the loop is topologically non-trivial, though re-annihilation of the anyons returns the state to the stabilizer space, it also implements a logical operation on the stored information. The errors, in this case, are therefore not corrected but consolidated.
Consider the noise model for which bit and phase errors occur independently on each qubit, both with probability p. When p is low, this will create sparsely distributed pairs of anyons which have not moved far from their point of creation. Correction can be achieved by identifying the pairs that the anyons were created in (up to an equivalence class), and then re-annihilating them to remove the errors. As p increases, however, it becomes increasingly ambiguous how the anyons should be paired without risking the formation of topologically non-trivial loops. This gives a threshold probability, below which the error correction will almost certainly succeed. Through a mapping to the random-bond Ising model, this critical probability has been found to be around 11%. [7]
Other error models may also be considered, and thresholds found. In all cases studied so far, the code has been found to saturate the Hashing bound. For some error models, such as biased errors where bit errors occur more often than phase errors or vice versa, lattices other than the square lattice must be used to achieve the optimal thresholds. [8] [9]
These thresholds are upper limits, and are of little use unless efficient decoding algorithms can be found that approach them. The most widely used algorithm is minimum weight perfect matching. [10] When applied to the noise model with independent bit and phase errors, a threshold of around 10.5% is achieved. This falls only a little short of the 11% maximum. However, matching does not work so well when there are correlations between the bit and phase errors, such as with depolarizing noise.
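As an illustration of the matching step only (not a full decoder), the sketch below pairs up syndrome defects with minimum total distance using the third-party networkx package, which provides maximum-weight matching; negating the weights and forcing maximum cardinality yields a minimum-weight perfect matching. Mapping the matched pairs back onto correction paths on the lattice is omitted here.

```python
import itertools
import networkx as nx  # assumed available; provides maximum-weight matching

def torus_distance(a, b, d):
    """Manhattan distance between two defects on a d x d periodic lattice."""
    dr, dc = abs(a[0] - b[0]), abs(a[1] - b[1])
    return min(dr, d - dr) + min(dc, d - dc)

def mwpm_pairs(defects, d):
    """Pair up an even number of syndrome defects with minimal total distance.

    networkx offers *maximum*-weight matching, so weights are negated;
    maxcardinality=True forces every defect to be matched."""
    g = nx.Graph()
    for u, v in itertools.combinations(defects, 2):
        g.add_edge(u, v, weight=-torus_distance(u, v, d))
    return nx.max_weight_matching(g, maxcardinality=True)

# Four vertex defects produced by two short, well-separated error strings:
defects = [(0, 0), (0, 1), (3, 3), (3, 4)]
print(mwpm_pairs(defects, d=6))  # pairs the nearby defects: (0,0)-(0,1) and (3,3)-(3,4)
```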
When adapting the surface code to open boundary conditions, special boundary behaviors arise. As a motivating example, consider defining a surface code in the same way as above, but on an n × n square grid graph. Some vertices on the boundary will have degree 3 instead of 4 (and the corner vertices will have degree 2), so there will be some weight-3 (and weight-2) X stabilizers.
The most important characteristic of such an open boundary is that a Pauli X error no longer necessarily flips two plaquette stabilizers. An edge on the boundary only has one adjacent plaquette, and thus an X error on the corresponding qubit will only flip a single $B_p$, or in the language of anyons, only create or annihilate a single m anyon. One could say that this type of code boundary (known as a smooth boundary) is a source and sink for m anyons.
In a surface code with open boundaries, in addition to true loops, one needs to consider paths that start and end at a boundary, which are usually specific to anyon types. In our n × n grid graph example, an m anyon can be created at any location on the boundary, move across the grid, and be annihilated at any other location on the boundary. However, since there is only one type of boundary, all these paths are topologically trivial. For example, if an m anyon is created somewhere in the middle of the top boundary, moves one step horizontally, then is annihilated again by the top boundary, then its path corresponds to one of the weight-3 X stabilizers mentioned above. Meanwhile, e anyons can only move within the boundary, so all e anyon loops are topologically trivial too. This indicates that this code does not encode any logical qubit, which can be verified by counting qubits and stabilizers: There are $2n(n-1)$ qubits (lattice edges), $n^2 - 1$ independent vertex stabilizers, and $(n-1)^2$ independent plaquette stabilizers (due to the boundary, the product of all plaquette stabilizers is no longer the identity and all of them are independent), and $2n(n-1) - (n^2 - 1) - (n-1)^2 = 0$.
To design an open-boundary surface code with a non-trivial codespace, it is necessary to use another type of boundary, the rough boundary which acts as the dual of the smooth boundary. To create a rough boundary, we start from the smooth boundary and remove the edges (qubits) on the boundary that only neighbor one plaquette, but keep those plaquettes as weight-3 Z stabilizers. The vertex stabilizers on the original boundary are now weight-1 and no longer properly commute with the modified plaquette stabilizers, so those vertex stabilizers are removed too, leaving some "dangling" edges on the lattice (thus the name "rough"). The result is a boundary that acts as a source and sink for e anyons.
Now consider a lattice with smooth top and bottom boundaries, and rough left and right boundaries. An m anyon moving from the top boundary to the top boundary is still a topologically trivial path, but one moving from the top boundary to the bottom boundary is no longer topologically trivial, because the anyon could not have exited at the left or right boundary anymore. Similarly, the topologically non-trivial path for e anyons is one moving from the left boundary to the right boundary. If the original grid has d rows and d + 1 columns of vertices (before removing the vertex stabilizers on the left and right boundaries), then both types of topologically non-trivial paths have minimum length d, indicating that the code encodes a single logical qubit with code distance d. The number of logical qubits can again be checked by counting stabilizers: $\left(d^2 + (d-1)^2\right) - d(d-1) - d(d-1) = 1$ (now the vertex stabilizers are all independent too, due to the "dangling edges" that are only part of one vertex stabilizer).
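Both counting arguments are easy to reproduce; the following few lines evaluate them for a range of sizes (the function names are, of course, just labels chosen for this sketch):

```python
def logical_qubits_all_smooth(n):
    """n x n grid graph with smooth boundaries everywhere."""
    qubits = 2 * n * (n - 1)          # edges of the grid graph
    vertex_stabs = n * n - 1          # one relation: the product of all A_v is the identity
    plaquette_stabs = (n - 1) ** 2    # all independent with open boundaries
    return qubits - vertex_stabs - plaquette_stabs

def logical_qubits_planar(d):
    """Grid with d rows and d+1 columns of vertices, smooth top/bottom, rough left/right."""
    qubits = d * d + (d - 1) ** 2     # remaining edges, including the "dangling" ones
    vertex_stabs = d * (d - 1)        # all independent once the boundary ones are removed
    plaquette_stabs = d * (d - 1)
    return qubits - vertex_stabs - plaquette_stabs

print([logical_qubits_all_smooth(n) for n in range(2, 7)])  # [0, 0, 0, 0, 0]
print([logical_qubits_planar(d) for d in range(2, 7)])      # [1, 1, 1, 1, 1]
```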
The means to perform quantum computation on logical information stored within the surface code has been considered, with the properties of the code providing fault-tolerance. It has been shown that extending the stabilizer space using 'holes', vertices or plaquettes on which stabilizers are not enforced, allows many qubits to be encoded into the code. However, a universal set of unitary gates cannot be fault-tolerantly implemented by unitary operations, and so additional techniques are required to achieve universal quantum computing. For example, universal quantum computing can be achieved by preparing magic states, which are used to teleport the required additional gates into the code. Furthermore, the preparation of magic states must itself be fault tolerant, which can be achieved by magic state distillation on noisy magic states. A measurement-based scheme for quantum computation based upon this principle has been found, whose error threshold is the highest known for a two-dimensional architecture. [11] [12]
Since the stabilizer operators of the surface code are quasilocal, acting only on spins located near each other on a two-dimensional lattice, it is not unrealistic to define the following Hamiltonian,

$H = -J \left( \sum_v A_v + \sum_p B_p \right), \qquad J > 0.$
The ground state space of this Hamiltonian is the stabilizer space of the code. Excited states correspond to configurations of anyons, with the energy proportional to their number. Local errors are therefore energetically suppressed by the gap, which has been shown to be stable against local perturbations. [13] However, the dynamic effects of such perturbations can still cause problems for the code. [14] [15]
The gap also gives the code a certain resilience against thermal errors, allowing it to be correctable almost surely for a certain critical time. This time increases with $J$, but since arbitrary increases of this coupling are unrealistic, the protection given by the Hamiltonian still has its limits.
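For a concrete, if small-scale, illustration of this spectrum, the sketch below builds the Hamiltonian explicitly for the smallest torus (d = 2, eight qubits) and diagonalizes it. One expects a four-fold degenerate ground level at energy $-8J$, separated by a gap of $4J$, the cost of creating a pair of anyons. The edge indexing is the same assumed convention as above.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

d, J = 2, 1.0
n = 2 * d * d  # 8 qubits on the smallest torus

def H_edge(r, c): return (r % d) * d + (c % d)          # horizontal edge index
def V_edge(r, c): return d * d + (r % d) * d + (c % d)  # vertical edge index

def pauli_on(op, qubits):
    """Tensor product acting with `op` on the listed qubits and identity elsewhere."""
    return reduce(np.kron, [op if q in qubits else I2 for q in range(n)])

H_tc = np.zeros((2 ** n, 2 ** n))
for r in range(d):
    for c in range(d):
        A_v = pauli_on(X, {H_edge(r, c), H_edge(r, c - 1), V_edge(r, c), V_edge(r - 1, c)})
        B_p = pauli_on(Z, {H_edge(r, c), H_edge(r + 1, c), V_edge(r, c), V_edge(r, c + 1)})
        H_tc -= J * (A_v + B_p)

energies = np.linalg.eigvalsh(H_tc)
print("lowest six energies:", np.round(energies[:6], 6))
# expected: four states at -8J (the degenerate ground space), then a gap of 4J
# (the cost of the cheapest excitation, a pair of anyons)
```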
The means to make a surface code into a fully self-correcting quantum memory is often considered. Self-correction means that the Hamiltonian will naturally suppress errors indefinitely, leading to a lifetime that diverges in the thermodynamic limit. It has been found that this is possible in the toric code only if long-range interactions are present between the anyons. [16] [17] Proposals have been made for realization of these in the lab. [18] Another approach is the generalization of the model to higher dimensions, with self-correction possible in 4D with only quasi-local interactions. [19]
As mentioned above, so-called $e$ and $m$ quasiparticles are associated with the vertices and plaquettes of the model, respectively. These quasiparticles can be described as anyons, due to the non-trivial effect of their braiding. Specifically, though both species of anyons are bosonic with respect to themselves, the braiding of two $e$'s or two $m$'s having no effect, a full monodromy of an $e$ and an $m$ will yield a phase of $-1$. Such a result is not consistent with either bosonic or fermionic statistics, and hence is anyonic.
The anyonic mutual statistics of the quasiparticles demonstrate the logical operations performed by topologically non-trivial loops. Consider the creation of a pair of $e$ anyons followed by the transport of one around a topologically non-trivial loop, such as that shown on the torus in blue on the figure above, before the pair are reannihilated. The state is returned to the stabilizer space, but the loop implements a logical $Z$ operation on one of the stored qubits. If $m$ anyons are similarly moved through the red loop above, a logical $X$ operation will also result. The phase of $-1$ resulting when braiding the anyons shows that these operations do not commute, but rather anticommute. They may therefore be interpreted as logical $Z$ and $X$ Pauli operators on one of the stored qubits. The corresponding logical Paulis on the other qubit correspond to an $m$ anyon following the blue loop and an $e$ anyon following the red. No braiding occurs when $e$ and $m$ pass through parallel paths; the phase of $-1$ therefore does not arise, and the corresponding logical operations commute. This is as should be expected, since these form operations acting on different qubits.
Due to the fact that both $e$ and $m$ anyons can be created in pairs, it is clear to see that both these quasiparticles are their own antiparticles. A composite particle composed of two $e$ anyons is therefore equivalent to the vacuum, since the vacuum can yield such a pair and such a pair will annihilate to the vacuum. Accordingly, these composites have bosonic statistics, since their braiding is always completely trivial. A composite of two $m$ anyons is similarly equivalent to the vacuum. The creation of such composites is known as the fusion of anyons, and the results can be written in terms of fusion rules. In this case, these take the form

$e \times e = 1, \qquad m \times m = 1,$

where $1$ denotes the vacuum. A composite of an $e$ and an $m$ is not trivial. This therefore constitutes another quasiparticle in the model, sometimes denoted $\epsilon$, with fusion rule

$e \times m = \epsilon.$
From the braiding statistics of the anyons we see that, since any single exchange of two $\epsilon$'s will involve a full monodromy of a constituent $e$ and $m$, a phase of $-1$ will result. This implies fermionic self-statistics for the $\epsilon$'s.
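Because all of these anyons are Abelian, their fusion and statistics can be encoded by pairs of $\mathbb{Z}_2$ labels (electric charge, magnetic flux). The short sketch below is one such illustrative encoding, reproducing the fusion table, the mutual $-1$ monodromy, and the fermionic exchange phase of $\epsilon$:

```python
# Each Abelian anyon is labelled by a pair of Z_2 charges: (electric, magnetic).
ANYONS = {"1": (0, 0), "e": (1, 0), "m": (0, 1), "eps": (1, 1)}
NAMES = {label: name for name, label in ANYONS.items()}

def fuse(a, b):
    """Fusion is component-wise addition mod 2, so every particle is its own antiparticle."""
    ea, ma = ANYONS[a]; eb, mb = ANYONS[b]
    return NAMES[((ea + eb) % 2, (ma + mb) % 2)]

def monodromy_phase(a, b):
    """Phase from a full braid (monodromy) of a around b: (-1)**(e_a*m_b + m_a*e_b)."""
    (ea, ma), (eb, mb) = ANYONS[a], ANYONS[b]
    return (-1) ** (ea * mb + ma * eb)

def exchange_phase(a):
    """Self-statistics: phase from exchanging two identical copies of a, (-1)**(e_a*m_a)."""
    ea, ma = ANYONS[a]
    return (-1) ** (ea * ma)

print("e x e =", fuse("e", "e"), "  m x m =", fuse("m", "m"), "  e x m =", fuse("e", "m"))
print("full braid of e around m:", monodromy_phase("e", "m"))   # -1
print({name: exchange_phase(name) for name in ANYONS})          # only eps is a fermion
```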
Since the Hamiltonian is a sum of commuting operators with eigenvalues $\pm 1$, any ground state is a simultaneous +1 eigenstate of every single star and plaquette operator:

$A_s |\mathrm{GS}\rangle = +|\mathrm{GS}\rangle, \qquad B_p |\mathrm{GS}\rangle = +|\mathrm{GS}\rangle \quad \text{for all } s,\, p.$
This is the "frustration-free" ground state, where all local constraints are satisfied.
Excitations above the ground state correspond to violations of these conditions.
These excitations are created in pairs at the ends of "string" operators. Applying a string of $Z$ operators along a path of the lattice creates $e$-particles at its endpoints. Applying a string of $X$ operators along a path on the dual lattice creates $m$-particles at its endpoints. The energy of an excited state is proportional to the number of such violations, leading to a gapped energy spectrum. These particles are examples of anyons due to their non-trivial braiding statistics.
A defining feature of the surface code, and of topological order in general, is that its quasiparticle excitations exhibit non-trivial braiding statistics. While the electric charges (e) and magnetic fluxes (m) are both individually bosons with respect to themselves, they exhibit non-trivial mutual statistics with respect to each other. Specifically, they are mutual semions: adiabatically moving an e particle in a full counterclockwise cycle around an m particle imparts a phase of -1 to the system's wavefunction. This is a topological analogue of the Aharonov–Bohm effect, where the role of the electromagnetic vector potential is played by the non-local presence of the other quasiparticle.
This property is a direct consequence of the algebraic structure of the stabilizer operators and the string operators that create the particles. The argument can be understood as follows:
1. Creating the Quasiparticles: We begin in the ground state $|\Psi_0\rangle$, which is stabilized by all stabilizer operators ($A_s|\Psi_0\rangle = |\Psi_0\rangle$ and $B_p|\Psi_0\rangle = |\Psi_0\rangle$ for all $s$, $p$). A stationary $m$ particle is then placed on a plaquette $p$ by applying a string of $X$ operators, $S_X(\gamma) = \prod_{i \in \gamma} X_i$, along a path $\gamma$ on the dual lattice ending at $p$ (with its partner taken to be far away), giving the state $S_X(\gamma)\,|\Psi_0\rangle$.
2. The Braiding Process: Consider the process of moving an $e$ charge in a closed loop $C$ that encloses the plaquette $p$ where the stationary $m$ particle resides. The net effect of creating an $e$ pair, transporting one member around $C$, and reannihilating the pair is the closed-loop operator $W_Z(C) = \prod_{i \in C} Z_i$. The final state of the system is given by applying the loop operator to the state containing the $m$ particle:

$|\Psi_{\text{final}}\rangle = W_Z(C)\, S_X(\gamma)\, |\Psi_0\rangle.$
3. The Algebraic Origin of the Phase: The key insight comes from the commutation relation between the loop operator $W_Z(C)$ and the string operator $S_X(\gamma)$. Because the plaquette $p$ lies inside the loop $C$ while the partner $m$ particle lies outside, the path $\gamma$ crosses $C$ exactly once, so the two operators share a single edge $i$, on which $Z_i X_i = -X_i Z_i$.
Because of this single anti-commutation, the operators as a whole anti-commute:

$W_Z(C)\, S_X(\gamma) = -\, S_X(\gamma)\, W_Z(C).$
4. Deriving the Phase Factor: We can now substitute this back into the expression for the final state:

$|\Psi_{\text{final}}\rangle = W_Z(C)\, S_X(\gamma)\, |\Psi_0\rangle = -\, S_X(\gamma)\, W_Z(C)\, |\Psi_0\rangle.$
The operator $W_Z(C)$ is a closed, contractible loop of $Z$ operators. Any such operator can be written as a product of the plaquette stabilizers $B_{p'}$ for all plaquettes $p'$ enclosed by the loop. [20] Since the ground state $|\Psi_0\rangle$ is an eigenstate of every $B_{p'}$ with eigenvalue +1, the loop operator leaves the ground state unchanged: $W_Z(C)\,|\Psi_0\rangle = |\Psi_0\rangle$. Therefore, the final state is:

$|\Psi_{\text{final}}\rangle = -\, S_X(\gamma)\, |\Psi_0\rangle.$
The system returns to its initial state (a single m particle at plaquette p), but its wavefunction has acquired a phase factor of -1. This result is topological because it is independent of the exact shape of the loop C, so long as it encloses the m particle. This non-trivial mutual statistics is a fundamental signature of the topological order present in the toric code and is the basis for proposals to use such systems for fault-tolerant quantum information processing. [21]
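This sign can also be checked directly in the binary (symplectic) representation of Pauli strings, in which commutation reduces to a parity count of overlapping supports. The sketch below picks an arbitrary small loop $C$ enclosing two plaquettes, verifies that $W_Z(C)$ equals the product of the enclosed plaquette stabilizers, and that it anticommutes with an $X$ string ending inside the loop. The lattice size, loop, and string are illustrative choices, as is the edge indexing.

```python
import numpy as np

d = 4
n = 2 * d * d

def H(r, c): return (r % d) * d + (c % d)          # horizontal edge index
def V(r, c): return d * d + (r % d) * d + (c % d)  # vertical edge index

def pauli(x_edges=(), z_edges=()):
    """Binary symplectic representation of a Pauli string: (X support, Z support)."""
    x = np.zeros(n, dtype=np.uint8); z = np.zeros(n, dtype=np.uint8)
    x[list(x_edges)] = 1; z[list(z_edges)] = 1
    return x, z

def commute(p, q):
    """Two Pauli strings commute iff the symplectic form of their supports is even."""
    return (int(p[0] @ q[1]) + int(p[1] @ q[0])) % 2 == 0

def multiply(p, q):
    """Product of two Pauli strings, up to an overall phase (supports add mod 2)."""
    return (p[0] ^ q[0], p[1] ^ q[1])

def B(r, c):  # Z-type plaquette stabilizer
    return pauli(z_edges=[H(r, c), H(r + 1, c), V(r, c), V(r, c + 1)])

# W_Z(C): a Z loop around the 2x1 block of plaquettes (0,0) and (0,1)
loop_C = pauli(z_edges=[H(0, 0), H(0, 1), H(1, 0), H(1, 1),  # top and bottom edges
                        V(0, 0), V(0, 2)])                    # left and right edges
prod = multiply(B(0, 0), B(0, 1))
print("W_Z(C) = B_(0,0) * B_(0,1):",
      np.array_equal(prod[0], loop_C[0]) and np.array_equal(prod[1], loop_C[1]))

# S_X: an X string on the dual lattice creating m particles at plaquette (0,1),
# inside C, and plaquette (2,1), outside C; it crosses the loop exactly once.
S_X = pauli(x_edges=[H(1, 1), H(2, 1)])
print("W_Z(C) anticommutes with S_X:", not commute(loop_C, S_X))
```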
On a torus, the local constraints $A_s|\psi\rangle = |\psi\rangle$ and $B_p|\psi\rangle = |\psi\rangle$ are not sufficient to uniquely define the ground state. The non-trivial topology allows for the existence of non-local operators that commute with the Hamiltonian but act non-trivially within the ground state subspace. These are the logical operators or Wilson loops.
A torus has two independent non-contractible loops (or cycles), often denoted $C_1$ (e.g., "horizontal") and $C_2$ (e.g., "vertical"). We can define four logical operators corresponding to strings of Pauli operators wrapping around these loops.
1. Electric Wilson Loops ($W_1$, $W_2$): These are products of $Z$ operators along the non-contractible loops.
* $W_1 = \prod_{i \in C_1} Z_i$
* $W_2 = \prod_{i \in C_2} Z_i$
2. Magnetic 't Hooft Loops ($V_1$, $V_2$): These are products of $X$ operators along non-contractible loops on the dual lattice, denoted $\tilde{C}_1$ and $\tilde{C}_2$, which intersect $C_1$ and $C_2$ respectively.
* $V_1 = \prod_{i \in \tilde{C}_1} X_i$
* $V_2 = \prod_{i \in \tilde{C}_2} X_i$
These loop operators all commute with the Hamiltonian $H$. For example, each $Z$ operator in a Wilson loop anticommutes with the two star operators at the ends of its link. Since the path is a closed loop, every neighbouring star operator shares exactly two links with the loop, so the two sign changes cancel, resulting in commutation. A similar argument holds for the magnetic loops and the plaquette operators.
The key to the ground state degeneracy lies in the commutation relations between these logical operators.
Let's examine the commutation of $W_1$ and $V_1$.
The two strings of operators commute on every qubit except for the single qubit where the loops $C_1$ and $\tilde{C}_1$ intersect. At that intersection point, we have $Z_i X_i = -X_i Z_i$. Because there is only one such anticommutation, the operators as a whole anticommute:

$W_1 V_1 = -\, V_1 W_1.$

Similarly, for the second cycle:

$W_2 V_2 = -\, V_2 W_2.$
We have two pairs of operators, $(W_1, V_1)$ and $(W_2, V_2)$, that each obey the algebra of a logical qubit's Pauli operators (e.g., $\bar{Z}_1 = W_1$ and $\bar{X}_1 = V_1$, with $\bar{Z}_1 \bar{X}_1 = -\bar{X}_1 \bar{Z}_1$ and $\bar{Z}_1^2 = \bar{X}_1^2 = I$). Since these two pairs act independently (commute with each other), they describe two independent logical qubits.
A system of two independent qubits has a $2^2 = 4$-dimensional state space. This implies that the ground state subspace of the toric code on a torus is four-fold degenerate.
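The same symplectic bookkeeping used above verifies this logical algebra: the sketch below constructs $W_1$, $W_2$, $V_1$, $V_2$ on a small torus (with an arbitrary choice of representative cycles and edge indexing) and confirms that the only anticommuting pairs are $(W_1, V_1)$ and $(W_2, V_2)$.

```python
import numpy as np

d = 4
n = 2 * d * d

def H(r, c): return (r % d) * d + (c % d)          # horizontal edge: (r,c) -> (r,c+1)
def V(r, c): return d * d + (r % d) * d + (c % d)  # vertical edge:   (r,c) -> (r+1,c)

def pauli(x_edges=(), z_edges=()):
    x = np.zeros(n, dtype=np.uint8); z = np.zeros(n, dtype=np.uint8)
    x[list(x_edges)] = 1; z[list(z_edges)] = 1
    return x, z

def anticommute(p, q):
    return (int(p[0] @ q[1]) + int(p[1] @ q[0])) % 2 == 1

# Electric Wilson loops: Z strings wrapping the two cycles of the direct lattice
W1 = pauli(z_edges=[H(0, c) for c in range(d)])  # wraps horizontally along row 0
W2 = pauli(z_edges=[V(r, 0) for r in range(d)])  # wraps vertically along column 0
# Magnetic 't Hooft loops: X strings on dual cycles chosen to cross W1 and W2 once
V1 = pauli(x_edges=[H(r, 0) for r in range(d)])  # vertical dual loop, shares H(0,0) with W1
V2 = pauli(x_edges=[V(0, c) for c in range(d)])  # horizontal dual loop, shares V(0,0) with W2

ops = {"W1": W1, "V1": V1, "W2": W2, "V2": V2}
for a in ops:
    print(a, {b: anticommute(ops[a], ops[b]) for b in ops})
# Only (W1, V1) and (W2, V2) anticommute: the algebra of two independent logical qubits.
```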
We can explicitly construct these four states.
1. Start with one ground state, $|\psi_{00}\rangle$, which satisfies $A_s|\psi_{00}\rangle = |\psi_{00}\rangle$ and $B_p|\psi_{00}\rangle = |\psi_{00}\rangle$ for all $s$, $p$. Let's define it as a +1 eigenstate of the electric operators:

$W_1 |\psi_{00}\rangle = W_2 |\psi_{00}\rangle = +|\psi_{00}\rangle.$
2. We can now generate the other three orthogonal ground states by acting with the magnetic 't Hooft loop operators. Since $V_1$ anticommutes with $W_1$, acting with it flips the eigenvalue of $W_1$ from +1 to -1.
* $|\psi_{10}\rangle = V_1 |\psi_{00}\rangle$. This state has eigenvalues $(-1, +1)$ for $(W_1, W_2)$.
* $|\psi_{01}\rangle = V_2 |\psi_{00}\rangle$. This state has eigenvalues $(+1, -1)$ for $(W_1, W_2)$.
* $|\psi_{11}\rangle = V_1 V_2 |\psi_{00}\rangle$. This state has eigenvalues $(-1, -1)$ for $(W_1, W_2)$.
These four states are all degenerate in energy (they are all +1 eigenstates of all local $A_s$ and $B_p$ operators), and they are mutually orthogonal. They form a basis for the 4-dimensional ground state subspace.
It is possible to define similar codes using higher-dimensional spins. These are the quantum double models [22] and string-net models, [23] which allow a greater richness in the behaviour of anyons, and so may be used for more advanced quantum computation and error correction proposals. [24] These not only include models with Abelian anyons, but also those with non-Abelian statistics. [25] [26] [27]
The most explicit demonstration of the properties of the toric code has been in state based approaches. Rather than attempting to realize the Hamiltonian, these simply prepare the code in the stabilizer space. Using this technique, experiments have been able to demonstrate the creation, transport and statistics of the anyons [28] [29] [30] and measurement of the topological entanglement entropy. [30] More recent experiments have also been able to demonstrate the error correction properties of the code. [31] [30]
For realizations of the toric code and its generalizations with a Hamiltonian, much progress has been made using Josephson junctions. The theory of how the Hamiltonians may be implemented has been developed for a wide class of topological codes. [32] An experiment has also been performed, realizing the toric code Hamiltonian for a small lattice, and demonstrating the quantum memory provided by its degenerate ground state. [33]
Other theoretical and experimental works towards realizations are based on cold atoms. A toolkit of methods that may be used to realize topological codes with optical lattices has been explored, [34] as have experiments concerning minimal instances of topological order. [35] Such minimal instances of the toric code have been realized experimentally within isolated square plaquettes. [36] Progress is also being made into simulations of the toric code model with Rydberg atoms, in which the Hamiltonian and the effects of dissipative noise can be demonstrated. [37] [38] Experiments in Rydberg atom arrays have also successfully realized the toric code with periodic boundary conditions in two dimensions by coherently transporting arrays of entangled atoms. [39]