Optical cluster states are a proposed tool to achieve quantum computational universality in linear optical quantum computing (LOQC). [1] As direct entangling operations with photons often require nonlinear effects, probabilistic generation of entangled resource states has been proposed as an alternative path to the direct approach.
On a silicon photonic chip, one of the most common platforms for implementing LOQC, there are two typical choices for encoding quantum information, though many more options exist. [2] Photons have useful degrees of freedom in the spatial modes of the possible photon paths or in the polarization of the photons themselves. The way in which a cluster state is generated varies with which encoding has been chosen for implementation.
Storing information in the spatial modes of the photon paths is often referred to as dual rail encoding. In a simple case, one might consider the situation where a photon has two possible paths, a horizontal path with creation operator $a^\dagger$ and a vertical path with creation operator $b^\dagger$, where the logical zero and one states are then represented by

$|0\rangle_L = a^\dagger|\text{vac}\rangle$

and

$|1\rangle_L = b^\dagger|\text{vac}\rangle.$

Single qubit operations are then performed by beam splitters, which allow manipulation of the relative superposition weights of the modes, and phase shifters, which allow manipulation of the relative phases of the two modes. This type of encoding lends itself to the Nielsen protocol for generating cluster states. In encoding with photon polarization, logical zero and one can be encoded via the horizontal and vertical polarization states of a photon, e.g.

$|0\rangle_L = |H\rangle$

and

$|1\rangle_L = |V\rangle.$

Given this encoding, single qubit operations can be performed using waveplates. This encoding can be used with the Browne-Rudolph protocol.
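As a concrete, purely illustrative sketch of how these linear optical elements act on an encoded qubit, the following snippet represents a dual-rail qubit by its two path amplitudes and applies the standard 2×2 beam-splitter and phase-shifter matrices; the variable names and angle conventions are assumptions made for this example, not part of the protocols above.

```python
import numpy as np

# Dual-rail qubit: the two amplitudes of the photon being in the horizontal
# path (|0>_L) or the vertical path (|1>_L).
ket0 = np.array([1.0, 0.0], dtype=complex)

def beam_splitter(theta):
    """Mixes the two path amplitudes; theta = pi/4 gives a 50:50 splitter."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]], dtype=complex)

def phase_shifter(phi):
    """Applies a relative phase phi between the two paths."""
    return np.array([[1.0, 0.0],
                     [0.0, np.exp(1j * phi)]], dtype=complex)

# A 50:50 beam splitter followed by a phase shifter turns |0>_L into an equal
# superposition of the two paths with an adjustable relative phase; composing
# such elements gives arbitrary single-qubit rotations on the dual-rail qubit.
psi = phase_shifter(np.pi / 2) @ beam_splitter(np.pi / 4) @ ket0
print(np.round(psi, 3))  # [0.707+0.j, 0.+0.707j]
```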
In 2004, Nielsen proposed a protocol to create cluster states, [3] borrowing techniques from the Knill-Laflamme-Milburn protocol (KLM protocol) to probabilistically create controlled-Z connections between qubits, which, when performed on a pair of $|0\rangle + |1\rangle$ states (normalization being ignored), form the basis for cluster states. While the KLM protocol requires error correction and a fairly large number of modes in order to achieve a very high success probability for a two-qubit gate, Nielsen's protocol only requires a success probability per gate of greater than one half. Given that the success probability for a connection using $n$ ancilla photons is $n^2/(n+1)^2$, relaxing the success probability from nearly one to anything over one half presents a major advantage in resources, as well as simply reducing the number of required elements in the photonic circuit.
To see how Nielsen brought about this improvement, consider the photons generated for qubits as vertices on a two-dimensional grid, with the controlled-Z operations as edges probabilistically added between nearest neighbors. Using results from percolation theory, it can be shown that as long as the probability of adding edges is above a certain threshold, a complete grid exists as a subgraph with near-unit probability. Because of this, Nielsen's protocol does not rely on every individual connection being successful, only on enough of them succeeding that the connections between photons contain a grid.
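The percolation argument can be illustrated numerically. The sketch below is a simplified stand-in for the full argument: it performs bond percolation on a square grid, with each bond representing a successful controlled-Z link, and estimates the probability that a connected path of links crosses the lattice. The lattice size, trial counts, and crossing criterion are choices made for this example.

```python
import random
from collections import deque

def spans(n, p, rng):
    """Bond percolation on an n x n grid of qubits: each nearest-neighbour
    edge (a successful controlled-Z link) is present with probability p.
    Returns True if a connected path of links crosses the grid left to right."""
    edges = {}

    def linked(u, v):
        key = frozenset((u, v))
        if key not in edges:
            edges[key] = rng.random() < p
        return edges[key]

    seen = {(0, r) for r in range(n)}   # start from the whole left column
    queue = deque(seen)
    while queue:
        c, r = queue.popleft()
        if c == n - 1:
            return True
        for v in ((c + 1, r), (c - 1, r), (c, r + 1), (c, r - 1)):
            if 0 <= v[0] < n and 0 <= v[1] < n and v not in seen and linked((c, r), v):
                seen.add(v)
                queue.append(v)
    return False

# Bond percolation on the square lattice has threshold 1/2: well above it,
# a crossing cluster appears with probability approaching one as n grows.
for p in (0.3, 0.5, 0.7):
    hits = sum(spans(40, p, random.Random(seed)) for seed in range(200))
    print(p, hits / 200)
```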
Among the first proposals of utilizing resource states for optical quantum computing was the Yoran-Reznik protocol in 2003. [4] While the proposed resource in this protocol was not exactly a cluster state, it brought many of the same key concepts to the attention of those considering the possibilities of optical quantum computing, and it still required connecting multiple separate one-dimensional chains of entangled photons via controlled-Z operations. This protocol is distinctive in that it utilizes the spatial mode degree of freedom alongside the polarization degree of freedom to help generate entanglement between qubits.
Given a horizontal path, denoted by $a$, and a vertical path, denoted by $b$, a 50:50 beam splitter connecting the paths followed by a $\pi$-phase shifter on path $a$, we can perform the transformations

$|H\rangle\otimes|a\rangle \rightarrow \frac{1}{\sqrt{2}}\left(|H\rangle\otimes|a\rangle + |H\rangle\otimes|b\rangle\right)$

$|V\rangle\otimes|a\rangle \rightarrow \frac{1}{\sqrt{2}}\left(|V\rangle\otimes|a\rangle - |V\rangle\otimes|b\rangle\right)$

where $|P\rangle\otimes|x\rangle$ denotes a photon with polarization $P$ on path $x$. In this way, we have the path of the photon entangled with its polarization. This is sometimes referred to as hyperentanglement, a situation in which the degrees of freedom of a single particle are entangled with each other. This, paired with the Hong-Ou-Mandel effect and projective measurements on the polarization state, can be used to create path entanglement between photons in a linear chain.
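A small numerical illustration of this polarization-path hyperentanglement, assuming the transformations written above: the single-photon state is expanded in the four basis states |H,a⟩, |H,b⟩, |V,a⟩, |V,b⟩, and the matrix U below is one unitary consistent with those two transformation rules (its remaining columns are a completion chosen only for this example).

```python
import numpy as np

# Single-photon basis ordering: |H,a>, |H,b>, |V,a>, |V,b>.
H_a, H_b, V_a, V_b = np.eye(4, dtype=complex)

# One unitary consistent with the transformations quoted above
# (the action on |H,b> and |V,b> is a completion chosen for the example).
U = np.zeros((4, 4), dtype=complex)
U[:, 0] = (H_a + H_b) / np.sqrt(2)   # |H,a> -> (|H,a> + |H,b>)/sqrt(2)
U[:, 1] = (H_a - H_b) / np.sqrt(2)
U[:, 2] = (V_a - V_b) / np.sqrt(2)   # |V,a> -> (|V,a> - |V,b>)/sqrt(2)
U[:, 3] = (V_a + V_b) / np.sqrt(2)
assert np.allclose(U.conj().T @ U, np.eye(4))

# A diagonally polarized photon entering on path a...
psi_in = (H_a + V_a) / np.sqrt(2)
psi_out = U @ psi_in
print(np.round(psi_out, 3))
# -> 0.5*(|H,a> + |H,b> + |V,a> - |V,b>), which does not factor into
#    (polarization state) x (path state): polarization and path are entangled.
```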
These one-dimensional chains of entangled photons still need to be connected via controlled-Z operations, similar to the KLM protocol. These controlled-Z connections between chains are still probabilistic, relying on measurement-dependent teleportation with special resource states. However, because this method does not include Fock measurements on the photons being used for computation, as the KLM protocol does, the probabilistic nature of implementing controlled-Z operations presents much less of a problem. In fact, as long as connections occur with probability greater than one half, the entanglement present between chains will, on average, be enough to perform useful quantum computation.
An alternative approach to building cluster states that focuses entirely on photon polarization is the Browne-Rudolph protocol. [5] This method rests on performing parity checks on a pair of photons to stitch together already entangled sets of photons, meaning that this protocol requires entangled photon sources. Browne and Rudolph proposed two ways of doing this, called type-I and type-II fusion.
In type-I fusion, photons with either vertical or horizontal polarization are injected into modes $a$ and $b$, connected by a polarizing beam splitter. Each of the photons sent into this system is part of a Bell pair that this method will try to entangle. Upon passing through the polarizing beam splitter, the two photons will go opposite ways if they have the same polarization or the same way if they have opposite polarization, e.g.

$|H\rangle_a|H\rangle_b \rightarrow |H\rangle_a|H\rangle_b$

or

$|H\rangle_a|V\rangle_b \rightarrow |H\rangle_a|V\rangle_a.$
Then on one of these modes, a projective measurement onto the basis $\frac{1}{\sqrt{2}}\left(|H\rangle \pm |V\rangle\right)$ is performed. If the measurement is successful, i.e. if it detects anything, then the detected photon is destroyed, but the remaining photons from the Bell pairs become entangled. Failure to detect anything results in an effective loss of the involved photons in a way that breaks any chain of entangled photons they were on. This can make attempts to connect already developed chains risky.
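A minimal sketch of the successful type-I fusion outcome, assuming the standard description of its success branch as the Kraus operator K = |H⟩⟨HH| + |V⟩⟨VV| acting on the two photons entering the polarizing beam splitter (the failure branch and the optical-mode details are omitted): fusing two Bell pairs this way yields a three-photon GHZ state, which is locally equivalent to a three-qubit linear cluster state.

```python
import numpy as np
from functools import reduce

H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)
kron = lambda *vs: reduce(np.kron, vs)

# Two polarization Bell pairs (|HH> + |VV>)/sqrt(2) on photons (1,2) and (3,4).
bell = (kron(H, H) + kron(V, V)) / np.sqrt(2)
state = np.kron(bell, bell)

# Success operation of type-I fusion on photons 2 and 3: project onto equal
# polarization and keep a single photon, K = |H><HH| + |V><VV|
# (the other successful outcome differs only by a correctable local sign).
K = np.outer(H, kron(H, H)) + np.outer(V, kron(V, V))
op = np.kron(np.eye(2), np.kron(K, np.eye(2)))   # acts on photons 2 and 3

fused = op @ state
fused /= np.linalg.norm(fused)

ghz = (kron(H, H, H) + kron(V, V, V)) / np.sqrt(2)
print(np.allclose(fused, ghz))  # True: a 3-photon GHZ state, locally
                                # equivalent to a 3-qubit linear cluster state
```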
Type-II fusion works similarly to type-I fusion, with the differences being that a diagonal polarizing beam splitter is used and the pair of photons is measured in the two-qubit Bell basis. A successful measurement here involves measuring the pair to be in a Bell state with no relative phase between the terms of the superposition (e.g. $\frac{1}{\sqrt{2}}\left(|HH\rangle + |VV\rangle\right)$ as opposed to $\frac{1}{\sqrt{2}}\left(|HH\rangle - |VV\rangle\right)$). This again entangles any two clusters already formed. A failure here performs local complementation on the local subgraph, making an existing chain shorter rather than cutting it in half. In this way, while it requires the use of more qubits in combining entangled resources, the potential losses from failed attempts to connect two chains are not as costly for type-II fusion as they are for type-I fusion.
Once a cluster state has been successfully generated, computation can be done with the resource state directly by applying measurements to the qubits on the lattice. This is the model of measurement-based quantum computation (MBQC), and it is equivalent to the circuit model.
Logical operations in MBQC come about from the byproduct operators that occur during quantum teleportation. For example, given a single qubit state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, one can connect this qubit to a plus state $|+\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)$ via a two-qubit controlled-Z operation. Then, upon measuring the first qubit (the original $|\psi\rangle$) in the Pauli-X basis, the original state of the first qubit is teleported to the second qubit with a measurement-outcome-dependent extra rotation, which one can see from the partial inner product of the measurement acting on the two-qubit state:

$\langle\pm|_1 \, CZ_{12} \, |\psi\rangle_1|+\rangle_2 = \frac{1}{\sqrt{2}} X^m H |\psi\rangle_2$

for $m \in \{0, 1\}$ denoting the measurement outcome as either the $+1$ eigenstate of Pauli-X for $m = 0$ or the $-1$ eigenstate for $m = 1$. A two-qubit state $|\psi\rangle_{12}$ connected by a pair of controlled-Z operations to the state $|+\rangle_3|+\rangle_4$ yields a two-qubit operation on the teleported state after measuring the original qubits:

$\langle\pm|_1\langle\pm|_2 \, CZ_{13} CZ_{24} \, |\psi\rangle_{12}|+\rangle_3|+\rangle_4 = \frac{1}{2}\left(X^{m_1} H \otimes X^{m_2} H\right)|\psi\rangle_{34}$

for measurement outcomes $m_1$ and $m_2$.
for measurement outcomes and . This basic concept extends to arbitrarily many qubits, and thus computation is performed by the byproduct operators of teleportation down a chain. Adjusting the desired single-qubit gates is simply a matter of adjusting the measurement basis on each qubit, and non-Pauli measurements are necessary for universal quantum computation.
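The single-qubit teleportation identity above can be checked directly with a few lines of linear algebra. The sketch below builds CZ(|ψ⟩⊗|+⟩), projects the first qubit onto the two Pauli-X eigenstates, and compares the result with X^m H|ψ⟩; the random test state and the comparison tolerance are simply choices made for the example.

```python
import numpy as np

# Check the single-qubit teleportation identity: measure qubit 1 of
# CZ(|psi> (x) |+>) in the X basis; qubit 2 is left in X^m H |psi>.
rng = np.random.default_rng(1)
amps = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = amps / np.linalg.norm(amps)

plus = np.array([1.0, 1.0]) / np.sqrt(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])
Hgate = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

state = CZ @ np.kron(psi, plus)

for m, sign in ((0, 1.0), (1, -1.0)):
    bra = np.array([1.0, sign]) / np.sqrt(2)     # <+| for m=0, <-| for m=1
    out = np.kron(bra, np.eye(2)) @ state        # partial inner product
    out /= np.linalg.norm(out)
    expected = np.linalg.matrix_power(X, m) @ Hgate @ psi
    print(m, np.isclose(abs(np.vdot(expected, out)), 1.0))  # equal up to phase
```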
Path-entangled two-qubit states have been generated in laboratory settings on silicon photonic chips in recent years, marking important steps toward generating optical cluster states. Among methods of doing this, it has been shown experimentally that spontaneous four-wave mixing can be used, with the appropriate use of microring resonators and other waveguides for filtering, to perform on-chip generation of two-photon Bell states, which are equivalent to two-qubit cluster states up to local unitary operations.
To do this, a short laser pulse is injected into an on-chip waveguide that splits into two paths. This forces the pulse into a superposition of the possible directions it could go. The two paths are coupled to microring resonators that allow circulation of the laser pulse until spontaneous four-wave mixing occurs, taking two photons from the laser pulse and converting them into a pair of photons, called the signal and idler, whose differing frequencies are fixed by energy conservation. In order to prevent the generation of multiple photon pairs at once, the procedure takes advantage of the conservation of energy and ensures that there is only enough energy in the laser pulse to create a single pair of photons. Because of this restriction, spontaneous four-wave mixing can only occur in one of the microring resonators at a time, meaning that the superposition of paths the laser pulse could take is converted into a superposition of paths the two photons could be on. Mathematically, if $|\text{pump}\rangle$ denotes the laser pulse and the paths are labeled $a$ and $b$, the process can be written as

$|\text{pump}\rangle \rightarrow \frac{1}{\sqrt{2}}\left(|s\rangle_a|i\rangle_a + |s\rangle_b|i\rangle_b\right)$

where $|s\rangle_x$ and $|i\rangle_x$ are the representations of having a signal or idler photon on path $x$. With the state of the two photons being in this kind of superposition, they are entangled, which can be verified by tests of Bell inequalities.
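One way to see the path entanglement of this post-selected state, assuming the ideal superposition written above: mapping path $a$ to $|0\rangle$ and path $b$ to $|1\rangle$ for each photon, the pair is a Bell state, and tracing out either photon leaves a maximally mixed reduced state. A short numerical check (the names and basis ordering are choices for the example):

```python
import numpy as np

# Treat "which path" of the signal and of the idler each as a qubit
# (path a -> |0>, path b -> |1>); the post-selected pair is (|00> + |11>)/sqrt(2).
psi = np.zeros(4, dtype=complex)
psi[0b00] = psi[0b11] = 1 / np.sqrt(2)

# Reduced state of the signal qubit: tracing out the idler leaves the
# maximally mixed state, i.e. one full bit of entanglement entropy.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices (s, i, s', i')
rho_signal = np.trace(rho, axis1=1, axis2=3)
probs = np.linalg.eigvalsh(rho_signal)
entropy = -sum(p * np.log2(p) for p in probs if p > 1e-12)
print(np.round(rho_signal, 3))  # [[0.5, 0], [0, 0.5]]
print(entropy)                  # 1.0
```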
Polarization entangled photon pairs have also been produced on-chip. [6] The setup involves a silicon wire waveguide that is split in half by a polarization rotator. This process, like the entanglement generation described for the dual rail encoding, makes use of the nonlinear process of spontaneous four-wave mixing, which can occur in the silicon wire on either side of the polarization rotator. However, the geometry of these wires is designed such that horizontal polarization is preferred in the conversion of laser pump photons to signal and idler photons. Thus when the photon pair is generated, both photons should have the same polarization, i.e.

$|H\rangle_s|H\rangle_i.$
The polarization rotator is then designed with the specific dimensions such that horizontal polarization is switched to vertical polarization. Thus any pairs of photons generated before the rotator exit the waveguide with vertical polarization and any pairs generated on the other end of the wire exit the waveguide still having horizontal polarization. Mathematically, the process is, up to overall normalization,

$|\text{pump}\rangle \rightarrow |V\rangle_s|V\rangle_i + e^{i\phi}|H\rangle_s|H\rangle_i.$
Assuming that equal space on each side of the rotator makes spontaneous four-wave mixing equally likely on each side, the output state of the photons is maximally entangled:

$|\Phi\rangle = \frac{1}{\sqrt{2}}\left(|H\rangle_s|H\rangle_i + e^{i\phi}|V\rangle_s|V\rangle_i\right).$
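As a quick check of the claim that this output state is maximally entangled (assuming the ideal state above with equal generation amplitudes), one can compute the concurrence of the pure two-qubit state, which equals 1 for maximally entangled states regardless of the relative phase φ:

```python
import numpy as np

# Output state assumed above: (|VV> + e^{i*phi}|HH>)/sqrt(2), with H -> |0>, V -> |1>.
phi = 0.7  # any relative phase
H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)
psi = (np.kron(V, V) + np.exp(1j * phi) * np.kron(H, H)) / np.sqrt(2)

# For a pure two-qubit state a|HH> + b|HV> + c|VH> + d|VV>, the concurrence
# is 2|ad - bc|; it equals 1 exactly for maximally entangled states.
a, b, c, d = psi
print(2 * abs(a * d - b * c))  # 1.0, independent of phi
```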
States generated this way could potentially be used to build a cluster state using the Browne-Rudolph protocol.
Quantum teleportation is a technique for transferring quantum information from a sender at one location to a receiver some distance away. While teleportation is commonly portrayed in science fiction as a means to transfer physical objects from one location to the next, quantum teleportation only transfers quantum information. The sender does not have to know the particular quantum state being transferred. Moreover, the location of the recipient can be unknown, but to complete the quantum teleportation, classical information needs to be sent from sender to receiver. Because classical information needs to be sent, quantum teleportation cannot occur faster than the speed of light.
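A linear-algebra sketch of the standard teleportation protocol (the Bell-state labels, the outcome-to-correction table, and the random input state are the usual textbook choices, written out here only for illustration): the sender's Bell-basis outcome determines which Pauli correction the receiver applies, after which the receiver's qubit matches the original state.

```python
import numpy as np

rng = np.random.default_rng(7)
amps = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = amps / np.linalg.norm(amps)               # unknown state to teleport

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # shared pair (qubits 2,3)
state = np.kron(psi, bell)                                   # qubits 1,2,3

# Bell basis on qubits 1,2 and the Pauli correction each outcome requires.
bells = {
    (0, 0): (np.array([1, 0, 0, 1]) / np.sqrt(2), I2),
    (0, 1): (np.array([0, 1, 1, 0]) / np.sqrt(2), X),
    (1, 0): (np.array([1, 0, 0, -1]) / np.sqrt(2), Z),
    (1, 1): (np.array([0, 1, -1, 0]) / np.sqrt(2), Z @ X),
}

for bits, (bell_vec, correction) in bells.items():
    proj = np.kron(bell_vec.conj(), np.eye(2))   # project qubits 1,2 onto this Bell state
    out = proj @ state
    out = correction @ (out / np.linalg.norm(out))
    # The receiver's qubit equals the original state up to a global phase.
    print(bits, np.isclose(abs(np.vdot(psi, out)), 1.0))
```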
In quantum computing, a qubit or quantum bit is a basic unit of quantum information—the quantum version of the classic binary bit physically realized with a two-state device. A qubit is a two-state quantum-mechanical system, one of the simplest quantum systems displaying the peculiarity of quantum mechanics. Examples include the spin of the electron in which the two levels can be taken as spin up and spin down; or the polarization of a single photon in which the two states can be taken to be the vertical polarization and the horizontal polarization. In a classical system, a bit would have to be in one state or the other. However, quantum mechanics allows the qubit to be in a coherent superposition of both states simultaneously, a property that is fundamental to quantum mechanics and quantum computing.
Quantum entanglement is the phenomenon that occurs when a group of particles are generated, interact, or share spatial proximity in a way such that the quantum state of each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance. The topic of quantum entanglement is at the heart of the disparity between classical and quantum physics: entanglement is a primary feature of quantum mechanics not present in classical mechanics.
In quantum mechanics, a density matrix is a matrix that describes the quantum state of a physical system. It allows for the calculation of the probabilities of the outcomes of any measurement performed upon this system, using the Born rule. It is a generalization of the more usual state vectors or wavefunctions: while those can only represent pure states, density matrices can also represent mixed states. Mixed states arise in quantum mechanics in two different situations: first, when the preparation of the system is not fully known, so that one must deal with a statistical ensemble of possible preparations; and second, when one wants to describe a physical system that is entangled with another, since its state cannot then be described by a pure state.
In physics, the CHSH inequality can be used in the proof of Bell's theorem, which states that certain consequences of entanglement in quantum mechanics cannot be reproduced by local hidden-variable theories. Experimental verification of the inequality being violated is seen as confirmation that nature cannot be described by such theories. CHSH stands for John Clauser, Michael Horne, Abner Shimony, and Richard Holt, who described it in a much-cited paper published in 1969. They derived the CHSH inequality, which, as with John Stewart Bell's original inequality, is a constraint on the statistical occurrence of "coincidences" in a Bell test which is necessarily true if there exist underlying local hidden variables, an assumption that is sometimes termed local realism. In practice, the inequality is routinely violated by modern experiments in quantum mechanics.
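A short numerical illustration of the quantum-mechanical side of the CHSH inequality, using the usual measurement angles for the Bell state (|00⟩ + |11⟩)/√2; the observables and angles are the standard textbook choices, shown here only to reproduce the 2√2 Tsirelson value exceeding the local hidden-variable bound of 2.

```python
import numpy as np

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

def obs(theta):
    """Spin-like observable measured in the X-Z plane at angle theta."""
    Z = np.diag([1.0, -1.0])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    return np.cos(theta) * Z + np.sin(theta) * X

def E(a, b, psi):
    """Correlation <A(a) x B(b)> in the state psi."""
    return np.real(psi.conj() @ np.kron(obs(a), obs(b)) @ psi)

a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4
S = (E(a0, b0, phi_plus) + E(a0, b1, phi_plus)
     + E(a1, b0, phi_plus) - E(a1, b1, phi_plus))
print(S, 2 * np.sqrt(2))  # both ~2.828, above the classical bound of 2
```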
In quantum computing and specifically the quantum circuit model of computation, a quantum logic gate is a basic quantum circuit operating on a small number of qubits. They are the building blocks of quantum circuits, like classical logic gates are for conventional digital circuits.
Quantum error correction (QEC) is used in quantum computing to protect quantum information from errors due to decoherence and other quantum noise. Quantum error correction is theorised as essential to achieve fault tolerant quantum computing that can reduce the effects of noise on stored quantum information, faulty quantum gates, faulty quantum preparation, and faulty measurements. This would allow algorithms of greater circuit depth.
The Bell states or EPR pairs are specific quantum states of two qubits that represent the simplest examples of quantum entanglement; conceptually, they fall under the study of quantum information science. The Bell states are a form of entangled and normalized basis vectors. This normalization implies that the overall probability of the particle being in one of the mentioned states is 1: $\langle\Phi|\Phi\rangle = 1$. Entanglement is a basis-independent result of superposition. Due to this superposition, measurement of the qubit will "collapse" it into one of its basis states with a given probability. Because of the entanglement, measurement of one qubit will "collapse" the other qubit to a state whose measurement will yield one of two possible values, where the value depends on which Bell state the two qubits are in initially. Bell states can be generalized to certain quantum states of multi-qubit systems, such as the GHZ state for three or more subsystems.
In quantum information theory, superdense coding is a quantum communication protocol to communicate a number of classical bits of information by only transmitting a smaller number of qubits, under the assumption of sender and receiver pre-sharing an entangled resource. In its simplest form, the protocol involves two parties, often referred to as Alice and Bob in this context, which share a pair of maximally entangled qubits, and allows Alice to transmit two bits to Bob by sending only one qubit. This protocol was first proposed by Charles H. Bennett and Stephen Wiesner in 1970 and experimentally actualized in 1996 by Klaus Mattle, Harald Weinfurter, Paul G. Kwiat and Anton Zeilinger using entangled photon pairs. Superdense coding can be thought of as the opposite of quantum teleportation, in which one transfers one qubit from Alice to Bob by communicating two classical bits, as long as Alice and Bob have a pre-shared Bell pair.
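A compact sketch of why the protocol works (illustrative code, not tied to any particular physical implementation): the four encodings Alice can apply to her half of the shared pair produce four mutually orthogonal Bell states, which Bob's joint measurement can distinguish perfectly, yielding two classical bits per transmitted qubit.

```python
import numpy as np

# Superdense coding sketch: Alice and Bob pre-share (|00> + |11>)/sqrt(2).
# Alice encodes two classical bits by applying I, X, Z or XZ to her qubit
# only, then sends it; Bob's Bell-basis measurement recovers both bits.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
encodings = {(0, 0): I2, (0, 1): X, (1, 0): Z, (1, 1): X @ Z}

# The four encoded states are mutually orthogonal, so a joint (Bell-basis)
# measurement by Bob distinguishes them perfectly.
states = {bits: np.kron(U, I2) @ bell for bits, U in encodings.items()}
for bits_a, sa in states.items():
    for bits_b, sb in states.items():
        overlap = abs(np.vdot(sa, sb))
        assert np.isclose(overlap, 1.0 if bits_a == bits_b else 0.0)
print("four orthogonal Bell states -> 2 classical bits per transmitted qubit")
```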
LOCC, or local operations and classical communication, is a method in quantum information theory where a local (product) operation is performed on part of the system, and where the result of that operation is "communicated" classically to another part where usually another local operation is performed conditioned on the information received.
In physics, in the area of quantum information theory, a Greenberger–Horne–Zeilinger state is a certain type of entangled quantum state that involves at least three subsystems. It was first studied by Daniel Greenberger, Michael Horne and Anton Zeilinger in 1989. Extremely non-classical properties of the state have been observed.
The W state is an entangled quantum state of three qubits which in the bra-ket notation has the following shape:

$|W\rangle = \frac{1}{\sqrt{3}}\left(|001\rangle + |010\rangle + |100\rangle\right).$
In quantum computing, a graph state is a special type of multi-qubit state that can be represented by a graph. Each qubit is represented by a vertex of the graph, and there is an edge between every interacting pair of qubits. In particular, they are a convenient way of representing certain types of entangled states.
Time-bin encoding is a technique used in quantum information science to encode a qubit of information on a photon. Quantum information science makes use of qubits as a basic resource similar to bits in classical computing. Qubits are any two-level quantum mechanical system; there are many different physical implementations of qubits, one of which is time-bin encoding.
The one-way or measurement-based quantum computer (MBQC) is a method of quantum computing that first prepares an entangled resource state, usually a cluster state or graph state, then performs single qubit measurements on it. It is "one-way" because the resource state is destroyed by the measurements.
In quantum information and quantum computing, a cluster state is a type of highly entangled state of multiple qubits. Cluster states are generated in lattices of qubits with Ising-type interactions. A cluster C is a connected subset of a d-dimensional lattice, and a cluster state is a pure state of the qubits located on C. They are different from other types of entangled states such as GHZ states or W states in that it is more difficult to eliminate quantum entanglement in the case of cluster states. Another way of thinking of cluster states is as a particular instance of graph states, where the underlying graph is a connected subset of a d-dimensional lattice. Cluster states are especially useful in the context of the one-way quantum computer.
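A small sketch of the standard construction of a linear cluster state (every qubit prepared in |+⟩, controlled-Z applied between lattice neighbours) together with a check of its defining stabilizers; the three-qubit chain and the helper names are choices made for this example.

```python
import numpy as np
from functools import reduce

# Build a 3-qubit linear cluster state: |+> on every site, CZ between
# neighbours; then verify the stabilizers K_i = X_i prod_{j ~ i} Z_j.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
kron = lambda *ops: reduce(np.kron, ops)

def cz(n, i, j):
    """Controlled-Z between qubits i and j of an n-qubit register."""
    dim = 2 ** n
    U = np.eye(dim)
    for idx in range(dim):
        if (idx >> (n - 1 - i)) & 1 and (idx >> (n - 1 - j)) & 1:
            U[idx, idx] = -1
    return U

n = 3
cluster = cz(n, 1, 2) @ cz(n, 0, 1) @ kron(plus, plus, plus)

stabilizers = [kron(X, Z, I2),   # K_1 = X_1 Z_2
               kron(Z, X, Z),    # K_2 = Z_1 X_2 Z_3
               kron(I2, Z, X)]   # K_3 = Z_2 X_3
print([np.allclose(K @ cluster, cluster) for K in stabilizers])  # [True, True, True]
```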
Entanglement distillation is the transformation of N copies of an arbitrary entangled state into some number of approximately pure Bell pairs, using only local operations and classical communication.
Quantum complex networks are complex networks whose nodes are quantum computing devices. Quantum mechanics has been used to create secure quantum communications channels that are protected from hacking. Quantum communications offer the potential for secure enterprise-scale solutions.
The KLM scheme or KLM protocol is an implementation of linear optical quantum computing (LOQC), developed in 2000 by Emanuel Knill, Raymond Laflamme and Gerard J. Milburn. This protocol makes it possible to create universal quantum computers solely with linear optical tools. The KLM protocol uses linear optical elements, single-photon sources and photon detectors as resources to construct a quantum computation scheme involving only ancilla resources, quantum teleportations and error corrections.
Consider two remote players, connected by a channel, that don't trust each other. The problem of them agreeing on a random bit by exchanging messages over this channel, without relying on any trusted third party, is called the coin flipping problem in cryptography. Quantum coin flipping uses the principles of quantum mechanics to address this problem. It is a cryptographic primitive which can be used to construct more complex and useful cryptographic protocols, e.g. quantum Byzantine agreement.