Simon's problem


In computational complexity theory and quantum computing, Simon's problem is a computational problem that is proven to be solvable exponentially faster on a quantum computer than on a classical (that is, traditional) computer. The quantum algorithm solving Simon's problem, usually called Simon's algorithm, served as the inspiration for Shor's algorithm. [1] Both problems are special cases of the abelian hidden subgroup problem, which is now known to have efficient quantum algorithms.


The problem is set in the model of decision tree complexity or query complexity and was conceived by Daniel R. Simon in 1994. [2] Simon exhibited a quantum algorithm that solves Simon's problem exponentially faster, using exponentially fewer queries, than the best probabilistic (or deterministic) classical algorithm. In particular, Simon's algorithm uses a linear number of queries, while any classical probabilistic algorithm must use an exponential number of queries.

This problem yields an oracle separation between the complexity classes BPP (bounded-error classical query complexity) and BQP (bounded-error quantum query complexity). [3] This is the same separation that the Bernstein–Vazirani algorithm achieves, and different from the separation provided by the Deutsch–Jozsa algorithm, which separates P and EQP. Unlike the Bernstein–Vazirani algorithm, Simon's algorithm's separation is exponential.

Because the problem assumes the existence of a highly structured "black box" oracle to achieve its speedup, it has little practical value. [4] However, without such an oracle, exponential speedups cannot easily be proven, since this would prove that P is different from PSPACE.

Problem description

Given a function $f : \{0,1\}^n \to \{0,1\}^n$ (implemented by a black box or oracle) with the promise that, for some unknown $s \in \{0,1\}^n$, for all $x, y \in \{0,1\}^n$,

$f(x) = f(y)$ if and only if $x \oplus y \in \{0^n, s\}$,

where $\oplus$ denotes bitwise XOR. The goal is to identify $s$ by making as few queries to $f$ as possible. Note that

$x \oplus y \in \{0^n, s\}$ if and only if $x = y$ or $x = y \oplus s$.

Furthermore, for some $x$ and $s$ in $\{0,1\}^n$, $x \oplus s$ is unique (not equal to $x$) if and only if $s \neq 0^n$. This means that $f$ is two-to-one when $s \neq 0^n$, and one-to-one when $s = 0^n$. It is also the case that $x \oplus y = s$ implies $y = x \oplus s$, meaning that

$f(x) = f(y) = f(x \oplus s),$

which shows how $f$ is periodic.

The associated decision problem formulation of Simon's problem is to distinguish when $s = 0^n$ ($f$ is one-to-one) from when $s \neq 0^n$ ($f$ is two-to-one).
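The promise can be made concrete with a short sketch. The following Python snippet (an illustration, not part of Simon's original formulation; the helper name make_simon_function is ours) builds a function satisfying the promise for a chosen secret $s$ by assigning a common, otherwise arbitrary output to each pair $\{x, x \oplus s\}$:

```python
# A minimal sketch of constructing a function f : {0,1}^n -> {0,1}^n that
# satisfies Simon's promise for a chosen secret s. Bitstrings are represented
# as Python integers 0 .. 2**n - 1, and XOR is the ^ operator.
import random

def make_simon_function(n, s):
    """Return f as a dict such that f(x) = f(y) iff x ^ y is 0 or s."""
    outputs = list(range(2 ** n))
    random.shuffle(outputs)          # pool of distinct output labels
    f = {}
    for x in range(2 ** n):
        if x in f:
            continue
        z = outputs.pop()            # fresh output value for the coset {x, x ^ s}
        f[x] = z
        f[x ^ s] = z                 # when s == 0 this just rewrites f[x], so f is one-to-one
    return f

f = make_simon_function(3, 0b110)
assert all(f[x] == f[x ^ 0b110] for x in range(8))   # the promise holds
```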

Example

The following function is an example of a function that satisfies the required property for $n = 3$:

x     f(x)
000   101
001   010
010   000
011   110
100   000
101   110
110   101
111   010

In this case, $s = 110$ (i.e. 110 is the solution). Every output of $f$ occurs twice, and the two input strings corresponding to any one given output have bitwise XOR equal to $s = 110$.

For example, the input strings 010 and 100 are both mapped (by $f$) to the same output string 000. That is, $f(010) = 000$ and $f(100) = 000$. Applying XOR to 010 and 100 obtains 110, that is $010 \oplus 100 = 110 = s$.

$s = 110$ can also be verified using the input strings 001 and 111, which are both mapped (by $f$) to the same output string 010. Applying XOR to 001 and 111 obtains 110, that is $001 \oplus 111 = 110$. This gives the same solution $s = 110$ as before.

In this example the function $f$ is indeed a two-to-one function where $s \neq 0^n$.
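As an illustrative check (not part of the original example), the following snippet groups the table's inputs by output and confirms that each pair XORs to $s = 110$:

```python
# Verify that the example table satisfies Simon's promise with s = 110:
# inputs sharing an output value XOR to s.
from collections import defaultdict

table = {
    "000": "101", "001": "010", "010": "000", "011": "110",
    "100": "000", "101": "110", "110": "101", "111": "010",
}

by_output = defaultdict(list)
for x, fx in table.items():
    by_output[fx].append(x)

for fx, (a, b) in by_output.items():
    xor = format(int(a, 2) ^ int(b, 2), "03b")
    print(f"{a} ^ {b} = {xor}")   # prints 110 for every output value
```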

Problem hardness

Intuitively, this is a hard problem to solve in a "classical" way, even if one uses randomness and accepts a small probability of error. The intuition behind the hardness is reasonably simple: to solve the problem classically, one needs to find two different inputs $x$ and $y$ for which $f(x) = f(y)$. There is not necessarily any structure in the function $f$ that would help us to find two such inputs: more specifically, we can discover something about $f$ (or what it does) only when, for two different inputs, we obtain the same output. In any case, we would need to guess $\Omega(2^{n/2})$ different inputs before being likely to find a pair on which $f$ takes the same output, as per the birthday problem. Since, classically, finding $s$ with 100% certainty would require checking up to $2^{n-1} + 1$ inputs, Simon's problem seeks to find $s$ using far fewer queries than this classical method.
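A minimal sketch of this classical strategy, assuming $f$ is given as a lookup table as in the example above (the function name classical_find_s is our own), queries fresh random inputs until a collision reveals $s$:

```python
# Naive classical attack: query f on distinct random inputs until two of them
# collide, then read off s as their XOR. For a two-to-one f, the expected
# number of queries grows like 2**(n/2) (birthday bound).
import random

def classical_find_s(f, n):
    seen = {}                                        # output -> input that produced it
    queries = 0
    for x in random.sample(range(2 ** n), 2 ** n):   # distinct inputs in random order
        queries += 1
        z = f[x]
        if z in seen:
            return seen[z] ^ x, queries              # collision: x ^ x' = s
        seen[z] = x
    return 0, queries                                # no collision: f is one-to-one, s = 0

# tiny demo with n = 3 and the example table (s = 110)
f = {0b000: 0b101, 0b001: 0b010, 0b010: 0b000, 0b011: 0b110,
     0b100: 0b000, 0b101: 0b110, 0b110: 0b101, 0b111: 0b010}
print(classical_find_s(f, 3))   # -> (6, k), i.e. s = 110 after k queries
```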

Simon's algorithm

[Figure: quantum circuit implementing Simon's algorithm]

The algorithm as a whole uses a subroutine to execute the following two steps:

  1. Run the quantum subroutine an expected $O(n)$ times to get a list of $n - 1$ linearly independent bitstrings $y_1, y_2, \dots, y_{n-1}$.
  2. Each $y_k$ satisfies $y_k \cdot s = 0$, so we can solve the system of equations this produces to get $s$.

Quantum subroutine

The quantum circuit (see the picture) is the implementation of the quantum part of Simon's algorithm. The quantum subroutine of the algorithm makes use of the Hadamard transform

$H^{\otimes n} |k\rangle = \frac{1}{\sqrt{2^n}} \sum_{j \in \{0,1\}^n} (-1)^{k \cdot j} |j\rangle,$

where $k \cdot j = k_1 j_1 \oplus k_2 j_2 \oplus \dots \oplus k_n j_n$, where $\oplus$ denotes XOR.

First, the algorithm starts with two registers, initialized to $|0\rangle^{\otimes n} |0\rangle^{\otimes n}$. Then, we apply the Hadamard transform to the first register, which gives the state

$\frac{1}{\sqrt{2^n}} \sum_{x \in \{0,1\}^n} |x\rangle |0\rangle^{\otimes n}.$

Query the oracle to get the state

$\frac{1}{\sqrt{2^n}} \sum_{x \in \{0,1\}^n} |x\rangle |f(x)\rangle.$

Apply another Hadamard transform to the first register. This will produce the state

$\frac{1}{\sqrt{2^n}} \sum_{x \in \{0,1\}^n} \left[ \frac{1}{\sqrt{2^n}} \sum_{y \in \{0,1\}^n} (-1)^{x \cdot y} |y\rangle \right] |f(x)\rangle = \sum_{y \in \{0,1\}^n} |y\rangle \left[ \frac{1}{2^n} \sum_{x \in \{0,1\}^n} (-1)^{x \cdot y} |f(x)\rangle \right].$

Finally, we measure the first register (the algorithm also works if the second register is measured before the first, but this is unnecessary). The probability of measuring a state $|y\rangle$ is

$\left\| \frac{1}{2^n} \sum_{x \in \{0,1\}^n} (-1)^{x \cdot y} |f(x)\rangle \right\|^2.$

This is due to the fact that taking the magnitude of this vector and squaring it sums up the probabilities of all the possible measurements of the second register, given that the first register is $|y\rangle$. There are two cases for our measurement:

  1. $s = 0^n$ and $f$ is one-to-one.
  2. $s \neq 0^n$ and $f$ is two-to-one.

For the first case,

$\left\| \frac{1}{2^n} \sum_{x \in \{0,1\}^n} (-1)^{x \cdot y} |f(x)\rangle \right\|^2 = \frac{1}{2^n},$

since in this case $f$ is one-to-one, implying that the range of $f$ is all of $\{0,1\}^n$, meaning that the summation is over every basis vector. For the second case, note that for each $z$ in the range of $f$ there exist two strings, $x_1$ and $x_2$, such that $f(x_1) = f(x_2) = z$, where $x_2 = x_1 \oplus s$. Thus,

$\frac{1}{2^n} \sum_{x \in \{0,1\}^n} (-1)^{x \cdot y} |f(x)\rangle = \frac{1}{2^n} \sum_{z \in \operatorname{range}(f)} \left( (-1)^{x_1 \cdot y} + (-1)^{x_2 \cdot y} \right) |z\rangle.$

Furthermore, since $x_2 = x_1 \oplus s$, we have $(-1)^{x_2 \cdot y} = (-1)^{(x_1 \oplus s) \cdot y} = (-1)^{x_1 \cdot y} (-1)^{s \cdot y}$, and so the state becomes

$\frac{1}{2^n} \sum_{z \in \operatorname{range}(f)} (-1)^{x_1 \cdot y} \left( 1 + (-1)^{s \cdot y} \right) |z\rangle.$

This expression is now easy to evaluate. Recall that we are measuring $y$. When $s \cdot y = 1$, this expression vanishes and $|y\rangle$ is measured with probability $0$; when $s \cdot y = 0$, the squared norm of this expression, and hence the probability of measuring $|y\rangle$, is $2^{1-n}$.

Thus, both when $s = 0^n$ and when $s \neq 0^n$, our measured $y$ satisfies $s \cdot y = 0$.
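The derivation above can be checked numerically. The following sketch (our own illustration with our own helper names; it computes the amplitudes directly rather than simulating a circuit) reproduces the outcome distribution over $y$ for the example function with $s = 110$ and confirms that every sampled $y$ satisfies $y \cdot s = 0$:

```python
# Numerical check of the quantum subroutine's measurement statistics.
import random

def dot2(a, b):
    return bin(a & b).count("1") % 2        # bitwise inner product modulo 2

def subroutine_distribution(f, n):
    """Probability of measuring y on the first register after the second Hadamard."""
    N = 2 ** n
    probs = []
    for y in range(N):
        p = 0.0
        for z in set(f.values()):           # second-register basis states in range(f)
            amp = sum((-1) ** dot2(x, y) for x in range(N) if f[x] == z) / N
            p += amp ** 2
        probs.append(p)
    return probs

# demo with the example function (s = 110)
f = {0b000: 0b101, 0b001: 0b010, 0b010: 0b000, 0b011: 0b110,
     0b100: 0b000, 0b101: 0b110, 0b110: 0b101, 0b111: 0b010}
probs = subroutine_distribution(f, 3)
samples = random.choices(range(8), weights=probs, k=5)
assert all(dot2(y, 0b110) == 0 for y in samples)      # every sample is orthogonal to s
print([format(y, "03b") for y in samples], probs)     # nonzero outcomes each have prob 1/4
```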

Classical post-processing

We run the quantum part of the algorithm until we have a linearly independent list of bitstrings $y_1, y_2, \dots, y_{n-1}$, and each $y_k$ satisfies $y_k \cdot s = 0$. Thus, we can efficiently solve this system of equations classically to find $s$.

The probability that $y_1, y_2, \dots, y_{n-1}$ are linearly independent is at least

$\prod_{k=1}^{\infty} \left( 1 - \frac{1}{2^k} \right) \approx 0.288.$

Once we solve the system of equations and produce a non-zero solution $s'$, we can test whether $f(s') = f(0^n)$. If this is true, then we know $s' = s$, since $f(s') = f(0^n)$ implies $s' \oplus 0^n = s' \in \{0^n, s\}$ and $s' \neq 0^n$. If instead $f(s') \neq f(0^n)$, then it must be that $s = 0^n$ and $f$ is one-to-one, since a non-zero $s$ would itself satisfy every equation $y_k \cdot s = 0$ and hence equal $s'$.
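The post-processing step is ordinary linear algebra over GF(2). The sketch below (a minimal illustration, assuming exactly $n - 1$ linearly independent rows are supplied; solve_simon_system is our own name) performs Gaussian elimination to extract the unique non-zero solution:

```python
# Gaussian elimination over GF(2): find the non-zero s with y . s = 0 for all rows.
def solve_simon_system(ys, n):
    """Given n-1 linearly independent n-bit rows ys (as integers), return the
    unique non-zero s (as an integer) in their null space over GF(2)."""
    rows = list(ys)
    reduced, pivot_cols = [], []
    for col in range(n - 1, -1, -1):                  # eliminate from the top bit down
        pivot = next((r for r in rows if (r >> col) & 1), None)
        if pivot is None:
            continue
        rows.remove(pivot)
        rows = [r ^ pivot if (r >> col) & 1 else r for r in rows]
        reduced = [r ^ pivot if (r >> col) & 1 else r for r in reduced]
        reduced.append(pivot)
        pivot_cols.append(col)
    free_col = next(c for c in range(n) if c not in pivot_cols)  # exactly one free column
    s = 1 << free_col                                 # set the free coordinate to 1 ...
    for r, pc in zip(reduced, pivot_cols):
        if (r >> free_col) & 1:
            s |= 1 << pc                              # ... and back-substitute the pivots
    return s

print(format(solve_simon_system([0b001, 0b110], 3), "03b"))   # -> 110 for the example above
```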

We can repeat Simon's algorithm a constant number of times to increase the probability of success arbitrarily, while still having the same time complexity.

Explicit examples of Simon's algorithm for few qubits

One qubit

Consider the simplest instance of the algorithm, with $n = 1$. In this case evolving the input state through a Hadamard gate and the oracle results in the state (up to renormalization):

$|0\rangle |f(0)\rangle + |1\rangle |f(1)\rangle.$

If $s = 1$, that is, $f(0) = f(1)$, then measuring the second register always gives the outcome $|f(0)\rangle$, and always results in the first register collapsing to the state (up to renormalization):

$|0\rangle + |1\rangle.$

Thus applying a Hadamard and measuring the first register always gives the outcome $|0\rangle$. On the other hand, if $f$ is one-to-one, that is, $s = 0$, then measuring the first register after the second Hadamard can result in both $|0\rangle$ and $|1\rangle$, with equal probability.

We recover $s$ from the measurement outcomes by looking at whether we always measured $0$, in which case $s = 1$, or we measured both $0$ and $1$ with equal probability, in which case we infer that $s = 0$. This scheme will fail if $s = 0$ but we nonetheless always found the outcome $0$; however, the probability of this event is $2^{-N}$ with $N$ the number of performed measurements, and can thus be made exponentially small by increasing the statistics.
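As a toy check of this one-qubit analysis (our own illustration, assuming ideal noiseless measurements), the following snippet computes the first-register outcome probabilities for both promises:

```python
# First-register outcome probabilities after the final Hadamard, for n = 1.
def outcome_probs(f):                       # f maps {0, 1} -> {0, 1}
    probs = []
    for y in (0, 1):
        p = 0.0
        for z in set(f.values()):
            amp = sum((-1) ** (x * y) for x in (0, 1) if f[x] == z) / 2
            p += amp ** 2
        probs.append(p)
    return probs

print(outcome_probs({0: 1, 1: 1}))   # s = 1 (two-to-one): [1.0, 0.0], only outcome 0
print(outcome_probs({0: 0, 1: 1}))   # s = 0 (one-to-one): [0.5, 0.5]
```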

Two qubits

Consider now the case with $n = 2$. The initial part of the algorithm results in the state (up to renormalization):

$|00\rangle |f(00)\rangle + |01\rangle |f(01)\rangle + |10\rangle |f(10)\rangle + |11\rangle |f(11)\rangle.$

If $s = 00$, meaning $f$ is injective, then finding $|f(x)\rangle$ on the second register always collapses the first register to $|x\rangle$, for all $x$. In other words, applying Hadamard gates and measuring the first register, the four outcomes $00, 01, 10, 11$ are found with equal probability.

Suppose on the other hand $s \neq 00$, for example, $s = 01$. Then measuring $|f(00)\rangle$ on the second register collapses the first register to the state $|00\rangle + |01\rangle$. And more generally, measuring $|f(x)\rangle$ gives $|x\rangle + |x \oplus s\rangle$ on the first register. Applying Hadamard gates and measuring the first register can thus result in the outcomes $00$ and $10$ with equal probabilities.

Similar reasoning applies to the other cases: if $s = 10$ then the possible outcomes are $00$ and $01$, while if $s = 11$ the possible outcomes are $00$ and $11$, compatibly with the rule $y \cdot s = 0$ discussed in the general case.

To recover we thus only need to distinguish between these four cases, collecting enough statistics to ensure that the probability of mistaking one outcome probability distribution for another is sufficiently small.

Complexity

Simon's algorithm requires $O(n)$ queries to the black box, whereas a classical algorithm would need at least $\Omega(2^{n/2})$ queries. It is also known that Simon's algorithm is optimal in the sense that any quantum algorithm to solve this problem requires $\Omega(n)$ queries. [5] [6]


References

  1. Shor, Peter W. (1999). "Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer". SIAM Review. 41 (2): 303–332. arXiv:quant-ph/9508027. doi:10.1137/S0036144598347011. ISSN 0036-1445.
  2. Simon, Daniel R. (1997). "On the Power of Quantum Computation". SIAM Journal on Computing. 26 (5): 1474–1483. doi:10.1137/S0097539796298637. ISSN 0097-5397.
  3. Preskill, John (1998). Lecture Notes for Physics 229: Quantum Information and Computation. pp. 273–275.
  4. Aaronson, Scott (2018). Introduction to Quantum Information Science Lecture Notes (PDF). pp. 144–151.
  5. Koiran, P.; Nesme, V.; Portier, N. (2007). "The quantum query complexity of the Abelian hidden subgroup problem". Theoretical Computer Science. 380 (1–2): 115–126. doi:10.1016/j.tcs.2007.02.057.
  6. Koiran, P.; Nesme, V.; Portier, N. (2005). "A quantum lower bound for the query complexity of Simon's Problem". Proc. ICALP. 3580: 1287–1298. arXiv:quant-ph/0501060. Bibcode:2005quant.ph..1060K.