PostBQP

In computational complexity theory, PostBQP is a complexity class consisting of all of the computational problems solvable in polynomial time on a quantum Turing machine with postselection and bounded error (in the sense that the algorithm is correct at least 2/3 of the time on all inputs).

Postselection is not considered to be a feature that a realistic computer (even a quantum one) would possess, but nevertheless postselecting machines are interesting from a theoretical perspective.

Removing either one of the two main features (quantumness, postselection) from PostBQP gives the following two complexity classes, both of which are subsets of PostBQP:

  - BQP is the same as PostBQP except without postselection
  - BPP_path is the same as PostBQP except that the algorithm is a classical randomized algorithm (with postselection) instead of a quantum one [1]

The addition of postselection seems to make quantum Turing machines much more powerful: Scott Aaronson proved [2] [3] that PostBQP is equal to PP, a class which is believed to be relatively powerful, whereas BQP is not known even to contain the seemingly smaller class NP. Using similar techniques, Aaronson also proved that small changes to the laws of quantum computing would have significant effects. As specific examples, under either of the two following changes, the "new" version of BQP would equal PP:

  - broadening the class of allowed gates from unitary operations to all invertible linear operations, or
  - changing the measurement rule so that the probability of observing a basis state is proportional to the p-th power of the absolute value of its amplitude, for some p ≠ 2.

Basic properties

In order to describe some of the properties of PostBQP, we fix a formal way of describing quantum postselection. Define a quantum algorithm to be a family of quantum circuits (specifically, a uniform circuit family). We designate one qubit as the postselection qubit P and another as the output qubit Q. Then PostBQP is defined by postselecting upon the event that the postselection qubit measures to 1. Explicitly, a language L is in PostBQP if there is a quantum algorithm A so that after running A on input x and measuring the two qubits P and Q,

  - Pr[P = 1] > 0,
  - if x is in L then Pr[Q = 1 | P = 1] ≥ 2/3, and
  - if x is not in L then Pr[Q = 0 | P = 1] ≥ 2/3.
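
As a small illustration of the acceptance rule, the following Python sketch computes Pr[Q = 1 | P = 1] for a toy two-qubit state; the state and its amplitudes are invented for this example, not taken from the definition above.

```python
import math

def conditional_accept_prob(amplitudes):
    """Return Pr[Q = 1 | P = 1] for a state given as {(p, q): real amplitude}."""
    pr_p1 = sum(a**2 for (p, q), a in amplitudes.items() if p == 1)
    if pr_p1 == 0:
        raise ValueError("postselection event P = 1 never occurs")
    pr_q1_p1 = sum(a**2 for (p, q), a in amplitudes.items() if (p, q) == (1, 1))
    return pr_q1_p1 / pr_p1

# A made-up normalized state over the (P, Q) outcomes.
state = {(0, 0): math.sqrt(0.60), (1, 0): math.sqrt(0.10), (1, 1): math.sqrt(0.30)}
print(conditional_accept_prob(state))  # 0.30 / 0.40 = 0.75 >= 2/3
```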

One can show that allowing a single postselection step at the end of the algorithm (as described above) and allowing intermediate postselection steps during the algorithm are equivalent. [2] [4]

Here are three basic properties of PostBQP (which also hold for BQP via similar proofs):

  1. PostBQP is closed under complement. Given a language L in PostBQP and a corresponding deciding circuit family, create a new circuit family by negating the output qubit Q (applying a NOT gate to it before the final measurement); the new circuit family shows that the complement of L is in PostBQP.
  2. One can do probability amplification in PostBQP. The definition of PostBQP is not changed if we replace the 2/3 value in its definition by any other constant strictly between 1/2 and 1. As an example, given a PostBQP algorithm A with success probability 2/3, we can construct another algorithm which runs three independent copies of A, outputs a postselection bit equal to the conjunction of the three "inner" ones, and outputs an output bit equal to the majority of the three "inner" ones; the new algorithm is correct with conditional probability (2/3)³ + 3·(2/3)²·(1/3) = 20/27, greater than the original 2/3 (see the numerical check after this list).
  3. PostBQP is closed under intersection. Suppose we have PostBQP circuit families for two languages L₁ and L₂, with respective postselection qubits and output qubits P₁, Q₁ and P₂, Q₂. We may assume by probability amplification that both circuit families have success probability at least 5/6. Then we create a composite algorithm where the circuits for L₁ and L₂ are run independently, and we set P to the conjunction of P₁ and P₂, and Q to the conjunction of Q₁ and Q₂. It is not hard to see by a union bound that this composite algorithm correctly decides membership in L₁ ∩ L₂ with (conditional) probability at least 2/3.
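
A quick numerical check of the amplification and union-bound calculations above (a minimal Python sketch; the numbers are exactly those from the text):

```python
# Amplification: three copies, postselect on all three postselection bits,
# output the majority vote; one copy succeeds with conditional probability 2/3.
p = 2.0 / 3.0
amplified = 3 * p**2 * (1 - p) + p**3      # exactly 2 or all 3 copies correct
print(amplified, 20 / 27)                  # both 0.7407..., i.e. 20/27 > 2/3

# Intersection: amplify both circuits to success probability 5/6; by the
# union bound the composite errs with probability at most 1/6 + 1/6 = 1/3.
print(1 - (1/6 + 1/6) >= 2/3)              # True
```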

More generally, combinations of these ideas show that PostBQP is closed under union and BQP truth-table reductions.

PostBQP = PP

Scott Aaronson showed [5] that the complexity classes PostBQP (postselected bounded-error quantum polynomial time) and PP (probabilistic polynomial time) are equal. The result was significant because this quantum reformulation of PP gave new insight and simpler proofs of properties of PP.

The usual definition of a PostBQP circuit family is one with two special qubits P (postselection) and Q (output), with a single measurement of P and Q at the end, such that the event P = 1 has nonzero probability, Pr[Q = 1 | P = 1] ≥ 2/3 if the input x is in the language, and Pr[Q = 0 | P = 1] ≥ 2/3 if the input x is not in the language. For technical reasons we tweak the definition of PostBQP as follows: we require that Pr[P = 1] ≥ 2^(−n^c) for some constant c depending on the circuit family. Note this choice does not affect the basic properties of PostBQP, and also it can be shown that any computation consisting of typical gates (e.g. Hadamard, Toffoli) has this property whenever Pr[P = 1] > 0.

Proving PostBQP ⊆ PP

Suppose we are given a PostBQP circuit family to decide a language L. We assume without loss of generality (e.g. see the inessential properties of quantum computers) that all gates have transition matrices represented with real numbers, at the expense of adding one more qubit.

Let |Ψ⟩ denote the final quantum state of the circuit before the postselecting measurement is made. The overall goal of the proof is to construct a PP algorithm to decide L. More specifically, it suffices to correctly compare the squared amplitude of |Ψ⟩ in the states with Q = 1, P = 1 to the squared amplitude of |Ψ⟩ in the states with Q = 0, P = 1, to determine which is bigger. The key insight is that the comparison of these amplitudes can be transformed into the comparison of the acceptance probability of a PP machine with 1/2.

Matrix view of PostBQP algorithms

Let n denote the input size, B = B(n) denote the total number of qubits in the circuit (input, ancillary, output, and postselection qubits), and G = G(n) denote the total number of gates. Represent the ith gate by its transition matrix A^i (a real unitary 2^B × 2^B matrix) and let the initial state be |x⟩ (padded with zeroes). Then |Ψ⟩ = A^G A^(G−1) ⋯ A^1 |x⟩. Define S₁ (resp. S₀) to be the set of basis states corresponding to P = 1, Q = 1 (resp. P = 1, Q = 0) and define the probabilities

π₁ = Σ_{ψ ∈ S₁} ⟨ψ|Ψ⟩², π₀ = Σ_{ψ ∈ S₀} ⟨ψ|Ψ⟩².

The definition of PostBQP ensures that either π₁ ≥ 2π₀ or π₀ ≥ 2π₁, according to whether x is in L or not.
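
For concreteness, here is a toy numpy computation of |Ψ⟩, π₁, and π₀; the three-gate circuit, the qubit layout, and all numbers are invented for illustration.

```python
import numpy as np

# B = 3 qubits ordered (input, P, Q), G = 3 real gates, initial state |000>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CX = np.eye(4)[[0, 1, 3, 2]]              # CNOT: first qubit controls second

gates = [np.kron(np.kron(H, I2), I2),     # A^1: Hadamard on the input qubit
         np.kron(CX, I2),                 # A^2: copy input qubit into P
         np.kron(I2, CX)]                 # A^3: copy P into Q

Psi = np.zeros(8); Psi[0] = 1.0           # |x> padded with zeroes
for A in gates:                           # |Psi> = A^3 A^2 A^1 |x>
    Psi = A @ Psi

# S1: P = 1, Q = 1; S0: P = 1, Q = 0 (P is bit 1, Q is bit 0 of the index).
pi1 = sum(Psi[b]**2 for b in range(8) if (b >> 1) & 1 and b & 1)
pi0 = sum(Psi[b]**2 for b in range(8) if (b >> 1) & 1 and not b & 1)
print(pi1, pi0)                           # 0.5 and 0.0 for this circuit
```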

Our PP machine will compare π₁ and π₀. In order to do this, we expand the definition of matrix multiplication:

⟨ψ|Ψ⟩ = Σ ⟨ψ|A^G|α^(G−1)⟩ ⟨α^(G−1)|A^(G−1)|α^(G−2)⟩ ⋯ ⟨α^2|A^2|α^1⟩ ⟨α^1|A^1|x⟩,

where the sum is taken over all lists of G − 1 basis vectors α^1, …, α^(G−1). Now π₁ and π₀ can each be expressed as a sum of pairwise products of these terms. Intuitively, we want to design a machine whose acceptance probability is something like 1/2 + c·(π₁ − π₀) for some constant c > 0, since then π₁ ≥ 2π₀ would imply that the acceptance probability is greater than 1/2, while π₀ ≥ 2π₁ would imply that the acceptance probability is less than 1/2.
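
To make the expansion concrete, here is a small sketch checking that the path sum equals the directly computed amplitude; the random gates, dimension (2^B with B = 2), and G = 3 are invented for the example.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def random_real_unitary(d):
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q                               # orthogonal = real unitary

d = 4
A1, A2, A3 = (random_real_unitary(d) for _ in range(3))
x, psi = 0, 2                              # basis states |x> and |psi>

direct = (A3 @ A2 @ A1)[psi, x]            # <psi|A^3 A^2 A^1|x>
path_sum = sum(A3[psi, a2] * A2[a2, a1] * A1[a1, x]
               for a1, a2 in product(range(d), repeat=2))
print(np.isclose(direct, path_sum))        # True
```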

Technicality: we may assume entries of the transition matrices A^i are rationals with denominator 2^(f(n)) for some polynomial f(n).

The definition of PostBQP tells us that π₁ ≥ 2π₀ if x is in L, and that π₀ ≥ 2π₁ otherwise. Let us replace all entries of the A^i by the nearest fraction with denominator 2^(f(n)) for a large polynomial f(n) that we presently describe. What will be used later is that the new π values satisfy π₁ ≥ (3/2)π₀ if x is in L, and π₀ ≥ (3/2)π₁ if x is not in L. Using the earlier technical assumption Pr[P = 1] ≥ 2^(−n^c), and by analyzing how the 1-norm of the computational state changes under this rounding, this is seen to be satisfied once 2^(−f(n)) is sufficiently small relative to G, 2^B, and 2^(−n^c); thus clearly there is a large enough f that is polynomial in n.

Constructing the PP machine

Now we provide the detailed implementation of our PP machine. Let α denote the sequence α^1, …, α^(G−1) and define the shorthand notation

Π(ψ, α) := ⟨ψ|A^G|α^(G−1)⟩ ⟨α^(G−1)|A^(G−1)|α^(G−2)⟩ ⋯ ⟨α^1|A^1|x⟩,

then

π₁ = Σ_{ψ ∈ S₁} (Σ_α Π(ψ, α))² = Σ_{ψ ∈ S₁} Σ_{α, α′} Π(ψ, α) Π(ψ, α′),

and similarly for π₀. We define our PP machine to

  1. pick a basis state ψ from S₁ ∪ S₀ and two sequences α, α′ of G − 1 basis states, all uniformly at random,
  2. compute the rational numbers Π(ψ, α) and Π(ψ, α′), each of which has denominator 2^(G·f(n)) and absolute value at most 1, and
  3. if ψ ∈ S₁, accept with probability (1 + Π(ψ, α)·Π(ψ, α′))/2; if ψ ∈ S₀, accept with probability (1 − Π(ψ, α)·Π(ψ, α′))/2.

Then it is straightforward to compute that this machine accepts with probability 1/2 + c·(π₁ − π₀), where the constant c > 0 depends only on the number of choices made in step 1, so this is a PP machine for the language L, as needed.
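
Assuming the reconstruction above, the following sketch computes the machine's exact acceptance probability on an invented two-gate example and checks the formula 1/2 + c·(π₁ − π₀), here with c = 1/(2·N·M²) for N = |S₁ ∪ S₀| and M the number of sequences α.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

def random_real_unitary(d):
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q

d = 4                                      # 2^B with B = 2, and G = 2 gates
A1, A2 = random_real_unitary(d), random_real_unitary(d)
x = 0
S1, S0 = [3], [1]                          # invented stand-ins for S1, S0

def Pi(psi, a):                            # path product for inner state a
    return A2[psi, a] * A1[a, x]           # entries lie in [-1, 1]

N, M = len(S1) + len(S0), d
accept = 0.0
for psi, sign in [(p, +1) for p in S1] + [(p, -1) for p in S0]:
    for a, ap in product(range(d), repeat=2):
        accept += (1 + sign * Pi(psi, a) * Pi(psi, ap)) / 2 / (N * M * M)

Psi = (A2 @ A1)[:, x]                      # final state on input |x>
pi1 = sum(Psi[p]**2 for p in S1)
pi0 = sum(Psi[p]**2 for p in S0)
print(np.isclose(accept, 0.5 + (pi1 - pi0) / (2 * N * M * M)))  # True
print((accept > 0.5) == (pi1 > pi0))                            # True
```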

Proving PP ⊆ PostBQP

Suppose we have a PP machine with time complexity T := T(n) on inputs x of length n := |x|. Thus the machine flips a coin at most T times during the computation. We can thus view the machine as a deterministic function f (implemented, e.g., by a classical circuit) which takes two inputs (x, r), where r, a binary string of length T, represents the results of the random coin flips that are performed by the computation, and the output of f is 1 (accept) or 0 (reject). The definition of PP tells us that

x ∈ L if and only if #{r ∈ {0,1}^T : f(x, r) = 1} > 2^(T−1).

Thus, we want a PostBQP algorithm that can determine whether the above statement is true.

Define s to be the number of random strings which lead to acceptance,

s := #{r ∈ {0,1}^T : f(x, r) = 1},

and so 2^T − s is the number of rejected strings. It is straightforward to argue that without loss of generality s ∉ {0, 2^(T−1)}; for details, see a similar without-loss-of-generality assumption in the proof that PP is closed under complementation.

Aaronson's algorithm

Aaronson's algorithm for solving this problem is as follows. For simplicity, we will write all quantum states as unnormalized. First, we prepare the state Σ_{r ∈ {0,1}^T} |r⟩ |f(x, r)⟩. Second, we apply Hadamard gates to the first register (each of the first T qubits), measure the first register and postselect on it being the all-zero string. It is easy to verify that this leaves the last register (the last qubit) in the residual state

|ψ⟩ := (2^T − s)|0⟩ + s|1⟩.

Where H denotes the Hadamard gate, we compute the state

H|ψ⟩ = (1/√2)·2^T |0⟩ + (1/√2)·(2^T − 2s)|1⟩.

Where α and β are positive real numbers to be chosen later with α² + β² = 1, we compute the state α|0⟩|ψ⟩ + β|1⟩H|ψ⟩ and measure the second qubit, postselecting on its value being equal to 1, which leaves the first qubit in a residual state depending on the ratio β/α, which we denote

|φ_(β/α)⟩ := α·s|0⟩ + β·(2^T − 2s)/√2 |1⟩.

Visualizing the possible states of a qubit with real amplitudes as a circle, we see that if s < 2^(T−1) (i.e. if x is not in L) then |φ_(β/α)⟩ lies in the open quadrant spanned by |0⟩ and |1⟩, while if s > 2^(T−1) (i.e. if x is in L) then |φ_(β/α)⟩ lies in the open quadrant spanned by |0⟩ and −|1⟩. In fact for any fixed x (and its corresponding s), as we vary the ratio β/α in (0, ∞), the image of |φ_(β/α)⟩ is precisely the corresponding open quadrant. In the rest of the proof, we make precise the idea that we can distinguish between these two quadrants.
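
A small numpy sketch of the residual state, illustrating the two quadrants; the values T = 4 and s ∈ {3, 13} are invented for the example (here 2^(T−1) = 8).

```python
import numpy as np

T = 4

def residual(s, ratio):                  # ratio = beta / alpha
    """Normalized |phi_(beta/alpha)> = alpha*s|0> + beta*(2^T - 2s)/sqrt(2)|1>."""
    alpha = 1.0 / np.sqrt(1 + ratio**2)
    beta = ratio * alpha
    phi = np.array([alpha * s, beta * (2**T - 2 * s) / np.sqrt(2)])
    return phi / np.linalg.norm(phi)

for s in (3, 13):                        # s < 8 (x not in L), then s > 8
    print(s, residual(s, ratio=1.0))     # both components positive iff s < 8
```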

Analysis

Let |+⟩ := (|0⟩ + |1⟩)/√2, which is the center of the quadrant spanned by |0⟩ and |1⟩, and let |−⟩ := (|0⟩ − |1⟩)/√2, which is orthogonal to |+⟩. Any qubit in the quadrant spanned by |0⟩ and −|1⟩, when measured in the basis {|+⟩, |−⟩}, gives the value |+⟩ less than 1/2 of the time. On the other hand, if x is not in L and we had picked the ratio β/α equal to r* := √2·s/(2^T − 2s), then |φ_(β/α)⟩ would equal |+⟩ exactly, and measuring it in the basis {|+⟩, |−⟩} would give the value |+⟩ all of the time. Since we don't know s, we also don't know the precise value of r*, but we can try several (polynomially many) different values for β/α in hopes of getting one that is "near" r*.

Specifically, note that 2^(−T) ≤ r* ≤ 2^T, and let us successively set β/α to every value of the form 2^i for i ∈ {−T, …, T}. Then for at least one of these values of i, the ratio β/α is within a factor of √2 of r*, and elementary calculations show that for this i, the probability that the measurement of |φ_(2^i)⟩ in the basis {|+⟩, |−⟩} yields |+⟩ is at least 1/2 + √2/3 ≈ 0.971.

Overall, the algorithm is as follows. Let k be any constant strictly between 1/2 and 1/2 + √2/3. We do the following experiment for each i ∈ {−T, …, T}: construct |φ_(2^i)⟩ and measure it in the basis {|+⟩, |−⟩} a total of C·T times, where C is a constant. If the proportion of |+⟩ measurements is greater than k, then reject. If we don't reject for any i, accept. Chernoff bounds then show that for a sufficiently large universal constant C, we correctly classify x with probability at least 2/3.
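
Here is a sketch of this distinguishing loop under the same invented parameters as before; for brevity it computes the exact probability of the |+⟩ outcome instead of estimating it from C·T sampled measurements.

```python
import numpy as np

T = 4
plus = np.array([1.0, 1.0]) / np.sqrt(2)

def prob_plus(s, ratio):
    """Exact probability that measuring |phi_(beta/alpha)> yields |+>."""
    alpha = 1.0 / np.sqrt(1 + ratio**2)
    beta = ratio * alpha
    phi = np.array([alpha * s, beta * (2**T - 2 * s) / np.sqrt(2)])
    phi /= np.linalg.norm(phi)
    return float(np.dot(plus, phi) ** 2)

threshold = 0.9  # plays the role of k, strictly between 1/2 and 0.971
for s in (3, 13):  # s = 3: x not in L; s = 13: x in L (2^(T-1) = 8)
    probs = [prob_plus(s, 2.0**i) for i in range(-T, T + 1)]
    verdict = "reject" if max(probs) > threshold else "accept"
    print(s, round(max(probs), 3), verdict)
# prints: 3 0.993 reject    (x not in L is correctly rejected)
#         13 0.466 accept   (x in L is correctly accepted)
```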

Note that this algorithm satisfies the technical assumption that the overall postselection probability is not too small: each individual preparation of |φ_(β/α)⟩ has postselection probability at least 2^(−poly(n)) (the first postselection succeeds with probability at least 1/2, and the second with probability at least inverse-exponential in T), and so the overall postselection probability across the polynomially many measurements is also at least 2^(−poly(n)).

Implications

The equality PostBQP = PP gives an immediate example of the simplified proofs mentioned above: since PostBQP is closed under intersection (a property that follows almost directly from its definition, as shown earlier), so is PP, which reproduces the Beigel–Reingold–Spielman theorem that PP is closed under intersection.

References

  1. Han, Y.; Hemaspaandra, L.; Thierauf, T. (1997). "Threshold computation and cryptographic security". SIAM Journal on Computing. 26: 59–78. CiteSeerX 10.1.1.23.510. doi:10.1137/S0097539792240467.
  2. Aaronson, Scott (2005). "Quantum computing, postselection, and probabilistic polynomial-time". Proceedings of the Royal Society A. 461 (2063): 3473–3482. arXiv:quant-ph/0412187. Bibcode:2005RSPSA.461.3473A. doi:10.1098/rspa.2005.1546. S2CID 1770389.
  3. Aaronson, Scott (2004-01-11). "Complexity Class of the Week: PP". Computational Complexity Weblog. Retrieved 2008-05-02.
  4. Bernstein, Ethan; Vazirani, Umesh (1997). "Quantum Complexity Theory". SIAM Journal on Computing. 26 (5): 1411–1473. CiteSeerX 10.1.1.144.7852. doi:10.1137/s0097539796300921.
  5. Aaronson, Scott (2005). "Quantum computing, postselection, and probabilistic polynomial-time". Proceedings of the Royal Society A. 461 (2063): 3473–3482. arXiv:quant-ph/0412187. Bibcode:2005RSPSA.461.3473A. doi:10.1098/rspa.2005.1546. S2CID 1770389.