Hamiltonian simulation (also referred to as quantum simulation) is a problem in quantum information science that attempts to find the computational complexity of, and quantum algorithms needed for, simulating quantum systems. Hamiltonian simulation is a problem that demands algorithms which implement the evolution of a quantum state efficiently. The problem was first proposed by Richard Feynman in 1982, who suggested a quantum computer as a possible solution, since the cost of simulating general Hamiltonians on classical computers seems to grow exponentially with the system size. [1]
In the Hamiltonian simulation problem, given a Hamiltonian $H$ (a $2^n \times 2^n$ Hermitian matrix acting on $n$ qubits), a time $t$ and a maximum simulation error $\epsilon$, the goal is to find an algorithm that approximates $U$ such that $\lVert U - e^{-iHt} \rVert \leq \epsilon$, where $e^{-iHt}$ is the ideal evolution and $\lVert \cdot \rVert$ is the spectral norm. A special case of the Hamiltonian simulation problem is the local Hamiltonian simulation problem. This is when $H$ is a k-local Hamiltonian on $n$ qubits, $H = \sum_{j=1}^{m} H_j$, where each $H_j$ acts non-trivially on at most $k$ qubits instead of $n$ qubits. [2] The local Hamiltonian simulation problem is important because most Hamiltonians that occur in nature are k-local. [2]
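As a concrete, classical brute-force illustration of the problem statement, the sketch below builds a small random Hermitian $H$, computes the ideal evolution $e^{-iHt}$ with SciPy, and measures the spectral-norm error of a candidate approximation. The helper name `simulation_error` and the choice of a random two-qubit $H$ are illustrative assumptions, not part of the problem definition.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative only: a random 2-qubit (4 x 4) Hermitian matrix as H.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2          # Hermitian by construction
t, eps = 1.0, 1e-3                # evolution time and target error

U_ideal = expm(-1j * t * H)       # the ideal evolution e^{-iHt}

def simulation_error(U_approx):
    """Spectral-norm distance between a candidate unitary and e^{-iHt}."""
    return np.linalg.norm(U_approx - U_ideal, ord=2)

# Any Hamiltonian-simulation algorithm must output a U_approx with
# simulation_error(U_approx) <= eps.
print(simulation_error(U_ideal))           # 0.0 for the exact evolution
print(simulation_error(np.eye(4)) <= eps)  # doing nothing generally fails
```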
Also known as Trotter formulas or Trotter–Suzuki decompositions, product formulas simulate the sum-of-terms of a Hamiltonian by simulating each term separately for a small time slice. [3] [4] If $H = A + B$, then $e^{-iHt} \approx \left(e^{-iAt/r} e^{-iBt/r}\right)^r$ for a large $r$, where $r$ is the number of time steps to simulate for. The larger the $r$, the more accurate the simulation.
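A minimal numerical sketch of a first-order product formula, under illustrative assumptions ($H = A + B$ with $A$ a two-qubit $ZZ$ coupling and $B$ a transverse field; exact matrix exponentials via SciPy stand in for the quantum circuit): the spectral-norm error shrinks roughly in proportion to $1/r$ as the number of time slices $r$ grows.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

A = np.kron(Z, Z)                       # ZZ coupling
B = np.kron(X, I) + np.kron(I, X)       # transverse field (does not commute with A)
H, t = A + B, 1.0
U_exact = expm(-1j * t * H)

for r in (1, 10, 100):
    step = expm(-1j * t / r * A) @ expm(-1j * t / r * B)   # one Trotter slice
    U_trotter = np.linalg.matrix_power(step, r)            # r slices
    err = np.linalg.norm(U_trotter - U_exact, ord=2)
    print(f"r = {r:4d}  spectral-norm error = {err:.2e}")
```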
If the Hamiltonian is represented as a sparse matrix, the distributed edge coloring algorithm can be used to decompose it into a sum of terms, which can then be simulated by a Trotter–Suzuki algorithm. [5]
The evolution operator can be written as $e^{-iHt} = \sum_{n=0}^{\infty} \frac{(-iHt)^n}{n!} = I - iHt - \frac{H^2 t^2}{2} + \frac{iH^3 t^3}{6} + \cdots$ by the Taylor series expansion. [6] This says that during the evolution of a quantum state, the Hamiltonian is applied repeatedly to the system with varying numbers of repetitions. The first term is the identity matrix, so the system doesn't change when it is applied, but in the second term the Hamiltonian is applied once. For practical implementations, the series has to be truncated at some order $N$, where the bigger the $N$, the more accurate the simulation. [7] This truncated expansion is then implemented via the linear combination of unitaries (LCU) technique for Hamiltonian simulation. [6] Namely, one decomposes the Hamiltonian $H = \sum_{\ell} \alpha_{\ell} H_{\ell}$ such that each $H_{\ell}$ is unitary (for instance, the Pauli operators always provide such a basis), and so each term of the truncated series is also a linear combination of unitaries.
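The sketch below is an illustrative classical check of the two ingredients just described, assuming a toy one-qubit Hamiltonian: the Hamiltonian is written as a linear combination of (unitary) Pauli operators, and the truncated series $\sum_{n=0}^{N}(-iHt)^n/n!$ converges to $e^{-iHt}$ as the cutoff $N$ grows. The actual LCU circuit uses ancilla registers and amplitude amplification, which are not reproduced here.

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# H written as a linear combination of unitaries (here: Pauli operators).
coeffs, unitaries = [0.7, 0.4], [Z, X]
H = sum(a * U for a, U in zip(coeffs, unitaries))

t = 1.0
U_exact = expm(-1j * t * H)

for N in (2, 5, 10):
    U_taylor = sum((-1j * t) ** n * np.linalg.matrix_power(H, n) / factorial(n)
                   for n in range(N + 1))
    err = np.linalg.norm(U_taylor - U_exact, ord=2)
    print(f"truncation order N = {N:2d}  spectral-norm error = {err:.2e}")
```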
In the quantum walk approach, a unitary operation whose spectrum is related to the Hamiltonian is implemented, and then the quantum phase estimation algorithm is used to adjust the eigenvalues. This makes it unnecessary to decompose the Hamiltonian into a sum of terms as the Trotter–Suzuki methods do. [6]
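As a hedged sketch of the spectral relationship being used, written here in the language of qubitization, which is one common way to realize such a walk (the normalization $\alpha$, the sum of the absolute LCU coefficients, is an assumption of that construction): each eigenvalue $\lambda$ of $H$ appears in the walk operator $W$ as a pair of eigenphases,

$$
W \, |\lambda_{\pm}\rangle \;=\; e^{\pm i \arccos(\lambda/\alpha)} \, |\lambda_{\pm}\rangle ,
$$

so phase estimation on $W$ recovers $\lambda$, which can then be converted into the desired evolution phase $e^{-i\lambda t}$.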
The quantum signal processing algorithm works by transducing the eigenvalues of the Hamiltonian into an ancilla qubit, transforming the eigenvalues with single-qubit rotations, and finally projecting the ancilla. [8] It has been proved to have optimal query complexity for Hamiltonian simulation. [8]
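The single-qubit core of quantum signal processing can be checked classically. In the sketch below (an illustrative special case, not the full Hamiltonian-simulation construction), the "signal" rotation $W(x) = xI + i\sqrt{1-x^2}\,X$ is applied $d$ times with all processing phases set to zero; the top-left matrix element then equals the Chebyshev polynomial $T_d(x)$, the prototypical polynomial transformation that QSP applies to a transduced eigenvalue.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def W(x):
    """Signal rotation: encodes a scalar x (an eigenvalue rescaled to [-1, 1])
    into the single-qubit rotation W(x) = x*I + i*sqrt(1-x^2)*X."""
    return x * np.eye(2) + 1j * np.sqrt(1 - x ** 2) * X

d = 5                                        # number of signal applications
for x in (0.3, -0.8):
    M = np.linalg.matrix_power(W(x), d)      # all processing phases set to zero
    chebyshev = np.cos(d * np.arccos(x))     # T_d(x)
    print(f"x = {x:+.1f}  <0|W^d|0> = {M[0, 0].real:+.6f}  T_d(x) = {chebyshev:+.6f}")
```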
Hamiltonian simulation can be studied in two ways, depending on how the Hamiltonian is given. If it is given explicitly, then gate complexity matters more than query complexity. If the Hamiltonian is instead described as an oracle (black box), then the number of queries to the oracle is more important than the gate count of the circuit. The following table shows the gate and query complexity of the previously mentioned techniques.
Technique | Gate complexity | Query complexity |
---|---|---|
Product formula 1st order | [7] | [9] |
Taylor series | [7] | [6] |
Quantum walk | [7] | [5] |
Quantum signal processing | [7] | [8] |
Here $\lVert H \rVert_{\max}$ is the largest entry of $H$.
In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP.
A quantum computer is a computer that exploits quantum mechanical phenomena. On small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior using specialized hardware. Classical physics cannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculations exponentially faster than any modern "classical" computer. In particular, a large-scale quantum computer could break widely used encryption schemes and aid physicists in performing physical simulations; however, the current state of the art is largely experimental and impractical, with several obstacles to useful applications.
In quantum computing, a quantum algorithm is an algorithm that runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer, the term quantum algorithm is generally reserved for algorithms that seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.
In quantum information theory, a quantum circuit is a model for quantum computation, similar to classical circuits, in which a computation is a sequence of quantum gates, measurements, initializations of qubits to known values, and possibly other actions. The minimum set of actions that a circuit needs to be able to perform on the qubits to enable quantum computation is known as DiVincenzo's criteria.
In quantum computing and specifically the quantum circuit model of computation, a quantum logic gate is a basic quantum circuit operating on a small number of qubits. Quantum logic gates are the building blocks of quantum circuits, like classical logic gates are for conventional digital circuits.
The time-evolving block decimation (TEBD) algorithm is a numerical scheme used to simulate one-dimensional quantum many-body systems, characterized by at most nearest-neighbour interactions. It is dubbed Time-evolving Block Decimation because it dynamically identifies the relevant low-dimensional Hilbert subspaces of an exponentially larger original Hilbert space. The algorithm, based on the Matrix Product States formalism, is highly efficient when the amount of entanglement in the system is limited, a requirement fulfilled by a large class of quantum many-body systems in one dimension.
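A minimal sketch of the decimation step at the heart of TEBD, under toy assumptions (a random two-site block, a nearest-neighbour $XX{+}ZZ$ gate, and arbitrary illustrative bond dimensions): the evolved two-site tensor is split by an SVD and only the $\chi_{\max}$ largest singular values are kept, which is exactly where the "limited entanglement" requirement enters.

```python
import numpy as np
from scipy.linalg import expm

d, chi, chi_max = 2, 8, 4          # physical dim, incoming bond dim, bond cap
rng = np.random.default_rng(1)

# Toy two-site block of an MPS with legs (left bond, site i, site i+1, right bond).
theta = rng.normal(size=(chi, d, d, chi)) + 1j * rng.normal(size=(chi, d, d, chi))
theta /= np.linalg.norm(theta)

# Nearest-neighbour two-site gate exp(-i h dt) for a toy XX + ZZ coupling.
X = np.array([[0, 1], [1, 0]]); Z = np.array([[1, 0], [0, -1]])
h = np.kron(X, X) + np.kron(Z, Z)
gate = expm(-1j * 0.05 * h).reshape(d, d, d, d)     # legs (i', i+1', i, i+1)

# Apply the gate to the two physical legs of theta.
theta = np.einsum('pqcd,lcdr->lpqr', gate, theta)

# Decimation: SVD across the bond, keep only the chi_max largest singular values.
mat = theta.reshape(chi * d, d * chi)
u, s, vh = np.linalg.svd(mat, full_matrices=False)
n_vals = len(s)
truncation_error = np.sqrt(np.sum(s[chi_max:] ** 2))
u, s, vh = u[:, :chi_max], s[:chi_max], vh[:chi_max, :]
print(f"kept {chi_max} of {n_vals} singular values, "
      f"truncation error = {truncation_error:.3e}")
```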
Adiabatic quantum computation (AQC) is a form of quantum computing which relies on the adiabatic theorem to perform calculations and is closely related to quantum annealing.
In computational complexity theory, QMA, which stands for Quantum Merlin Arthur, is the set of languages for which, when a string is in the language, there is a polynomial-size quantum proof that convinces a polynomial time quantum verifier of this fact with high probability. Moreover, when the string is not in the language, every polynomial-size quantum state is rejected by the verifier with high probability.
In quantum computing, the quantum phase estimation algorithm is a quantum algorithm to estimate the phase corresponding to an eigenvalue of a given unitary operator. Because the eigenvalues of a unitary operator always have unit modulus, they are characterized by their phase, and therefore the algorithm can be equivalently described as retrieving either the phase or the eigenvalue itself. The algorithm was initially introduced by Alexei Kitaev in 1995.
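A minimal classical emulation of the phase estimation idea, under illustrative assumptions (a diagonal single-qubit unitary $U = \mathrm{diag}(1, e^{2\pi i\theta})$ prepared in its eigenvector, and a phase chosen to be exactly representable with six counting qubits): phase kickback writes $\theta$ into the ancilla register, and an inverse quantum Fourier transform reads it out.

```python
import numpy as np

theta = 0.421875          # true phase = 27/64, exact with 6 counting qubits
n_anc = 6                 # number of ancilla (counting) qubits
N = 2 ** n_anc

# Step 1: Hadamard all ancillas, then apply controlled-U^(2^k).  Because the
# target register holds the eigenvector, each controlled power only kicks a
# phase back onto its control, so the ancillas end up in a product of phases:
# |x> -> exp(2*pi*i*theta*x) |x>.
amps = np.ones(N, dtype=complex) / np.sqrt(N)
for x in range(N):
    amps[x] *= np.exp(2j * np.pi * theta * x)

# Step 2: inverse quantum Fourier transform on the ancilla register.
qft = np.array([[np.exp(2j * np.pi * j * k / N) for k in range(N)]
                for j in range(N)]) / np.sqrt(N)
amps = qft.conj().T @ amps

# Step 3: measure.  The most likely outcome m satisfies m / 2^n_anc ~ theta.
m = int(np.argmax(np.abs(amps) ** 2))
print(f"estimated phase = {m}/{N} = {m / N}, true phase = {theta}")
```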
In quantum computing, the quantum Fourier transform (QFT) is a linear transformation on quantum bits, and is the quantum analogue of the discrete Fourier transform. The quantum Fourier transform is a part of many quantum algorithms, notably Shor's algorithm for factoring and computing the discrete logarithm, the quantum phase estimation algorithm for estimating the eigenvalues of a unitary operator, and algorithms for the hidden subgroup problem. The quantum Fourier transform was discovered by Don Coppersmith. With small modifications to the QFT, it can also be used for performing fast integer arithmetic operations such as addition and multiplication.
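On $n$ qubits (so $N = 2^n$ basis states), the transform acts on computational-basis states as

$$
\mathrm{QFT}\,|x\rangle \;=\; \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} e^{2\pi i \, x k / N}\,|k\rangle ,
$$

which is the unitary, normalized matrix of the discrete Fourier transform applied to the vector of amplitudes; conventions differ only in the sign of the exponent.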
The Harrow–Hassidim–Lloyd algorithm or HHL algorithm is a quantum algorithm for numerically solving a system of linear equations, designed by Aram Harrow, Avinatan Hassidim, and Seth Lloyd. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations.
Boson sampling is a restricted model of non-universal quantum computation introduced by Scott Aaronson and Alex Arkhipov after the original work of Lidror Troyansky and Naftali Tishby, that explored possible usage of boson scattering to evaluate expectation values of permanents of matrices. The model consists of sampling from the probability distribution of identical bosons scattered by a linear interferometer. Although the problem is well defined for any bosonic particles, its photonic version is currently considered as the most promising platform for a scalable implementation of a boson sampling device, which makes it a non-universal approach to linear optical quantum computing. Moreover, while not universal, the boson sampling scheme is strongly believed to implement computing tasks which are hard to implement with classical computers by using far fewer physical resources than a full linear-optical quantum computing setup. This advantage makes it an ideal candidate for demonstrating the power of quantum computation in the near term.
Quantum optimization algorithms are quantum algorithms that are used to solve optimization problems. Mathematical optimization deals with finding the best solution to a problem from a set of possible solutions. Mostly, the optimization problem is formulated as a minimization problem, where one tries to minimize an error which depends on the solution: the optimal solution has the minimal error. Different optimization techniques are applied in various fields such as mechanics, economics and engineering, and as the complexity and amount of data involved rise, more efficient ways of solving optimization problems are needed. Quantum computing may allow problems which are not practically feasible on classical computers to be solved, or suggest a considerable speed up with respect to the best known classical algorithm.
In quantum information theory and operator theory, the Choi–Jamiołkowski isomorphism refers to the correspondence between quantum channels and quantum states, introduced by Man-Duen Choi and Andrzej Jamiołkowski. It is also called channel-state duality by some authors in the quantum information area, but mathematically it is a more general correspondence between positive operators and completely positive superoperators.
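Concretely, for a map $\Phi$ acting on a $d$-dimensional system, the correspondence sends $\Phi$ to its Choi matrix (stated here in one common normalization),

$$
J(\Phi) \;=\; \sum_{i,j=0}^{d-1} |i\rangle\langle j| \otimes \Phi\big(|i\rangle\langle j|\big),
$$

i.e. the result of applying $\Phi$ to one half of an unnormalized maximally entangled state; $\Phi$ is completely positive exactly when $J(\Phi) \geq 0$.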
The One Clean Qubit model of computation is performed on an $n$-qubit system with one qubit in a pure state and the remaining $n-1$ qubits maximally mixed. This model was motivated by the highly mixed states that are prevalent in nuclear magnetic resonance quantum computers. It is described by the density matrix $\rho = |0\rangle\langle 0| \otimes \frac{I}{2^{n-1}}$, where $I$ is the identity matrix on the mixed qubits. In computational complexity theory, DQC1, also known as deterministic quantum computation with one clean qubit, is the class of decision problems solvable by a one clean qubit machine in polynomial time, upon measuring the first qubit, with an error probability of at most 1/poly(n) for all instances.
Feynman's algorithm is an algorithm that is used to simulate the operations of a quantum computer on a classical computer. It is based on the Path integral formulation of quantum mechanics, which was formulated by Richard Feynman.
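A toy sketch of the idea, under illustrative assumptions (three dense random gates on two qubits, so the "paths" are sequences of computational-basis states between gates): the amplitude $\langle y|G_3 G_2 G_1|x\rangle$ is computed by summing products of matrix elements over every intermediate path, and agrees with direct matrix multiplication. Feynman-style simulators trade the memory of storing a full state vector for this exponentially long sum.

```python
import numpy as np
from itertools import product
from scipy.stats import unitary_group

dim = 4                                    # two qubits
gates = [unitary_group.rvs(dim, random_state=k) for k in range(3)]
x, y = 0, 3                                # input and output basis states

# Path sum: insert a resolution of the identity between consecutive gates.
amplitude = 0.0 + 0.0j
for s1, s2 in product(range(dim), repeat=2):           # intermediate basis states
    amplitude += gates[2][y, s2] * gates[1][s2, s1] * gates[0][s1, x]

direct = (gates[2] @ gates[1] @ gates[0])[y, x]        # standard matrix product
print(np.isclose(amplitude, direct))                   # True
```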
In quantum computing, the variational quantum eigensolver (VQE) is a quantum algorithm for quantum chemistry, quantum simulations and optimization problems. It is a hybrid algorithm that uses both classical computers and quantum computers to find the ground state of a given physical system. Given a guess or ansatz, the quantum processor calculates the expectation value of the system with respect to an observable, often the Hamiltonian, and a classical optimizer is used to improve the guess. The algorithm is based on the variational method of quantum mechanics.
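A minimal, classically simulated sketch of the loop just described, under toy assumptions: a single-qubit Hamiltonian $H = Z + 0.5X$, an ansatz $|\psi(\theta)\rangle = R_y(\theta)|0\rangle$, and SciPy's Nelder–Mead optimizer standing in for the classical outer loop (on hardware, the energy would come from repeated measurements rather than a statevector).

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = Z + 0.5 * X                            # toy observable (the "Hamiltonian")

def ansatz(theta):
    """|psi(theta)> = Ry(theta)|0> -- the parametrized trial state."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(params):
    """Expectation value <psi|H|psi>; the quantity the classical optimizer minimizes."""
    psi = ansatz(params[0])
    return np.real(psi.conj() @ H @ psi)

result = minimize(energy, x0=[0.1], method="Nelder-Mead")
exact = np.linalg.eigvalsh(H)[0]           # true ground-state energy
print(f"VQE estimate = {result.fun:.6f}, exact ground energy = {exact:.6f}")
```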
Quantum random circuits (QRC) is a concept of incorporating an element of randomness into the local unitary operations and measurements of a quantum circuit. The idea is similar to that of random matrix theory, which is to use the QRC to obtain almost exact results of non-integrable, hard-to-solve problems by averaging over an ensemble of outcomes. This incorporation of randomness into the circuits has many possible advantages, some of which are (i) the validation of quantum computers, which is the method that Google used when they claimed quantum supremacy in 2019, and (ii) understanding the universal structure of non-equilibrium and thermalization processes in quantum many-body dynamics.
Quantum computational chemistry is an emerging field that exploits quantum computing to simulate chemical systems. Despite quantum mechanics' foundational role in understanding chemical behaviors, traditional computational approaches face significant challenges, largely due to the complexity and computational intensity of quantum mechanical equations. This complexity arises from the exponential growth of a quantum system's wave function with each added particle, making exact simulations on classical computers inefficient.
Quantum singular value transformation is a framework for designing quantum algorithms. It encompasses a variety of quantum algorithms for problems which can be solved with linear algebra, including Hamiltonian simulation, search problems, and linear system solving. It was introduced in 2018 by András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe, generalizing algorithms for Hamiltonian simulation of Guang Hao Low and Isaac Chuang inspired by signal processing.