Quantum optimization algorithms

Quantum optimization algorithms are quantum algorithms that are used to solve optimization problems. [1] Mathematical optimization deals with finding the best solution to a problem (according to some criteria) from a set of possible solutions. Usually, the optimization problem is formulated as a minimization problem, where one tries to minimize an error which depends on the solution: the optimal solution has the minimal error. Different optimization techniques are applied in various fields such as mechanics, economics and engineering, and as the complexity and amount of data involved rise, more efficient ways of solving optimization problems are needed. Quantum computing may allow problems which are not practically feasible on classical computers to be solved, or it may offer a considerable speed-up with respect to the best known classical algorithm.

Quantum data fitting

Data fitting is a process of constructing a mathematical function that best fits a set of data points. The fit's quality is measured by some criteria, usually the distance between the function and the data points.

Quantum least squares fitting

One of the most common types of data fitting is solving the least squares problem, minimizing the sum of the squares of differences between the data points and the fitted function.

The algorithm is given $N$ input data points $(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)$ and $M$ continuous functions $f_1, f_2, \ldots, f_M$. The algorithm finds and gives as output a continuous function $f_{\vec{\lambda}}$ that is a linear combination of $f_j$:

$f_{\vec{\lambda}}(x) = \sum_{j=1}^{M} f_j(x)\,\lambda_j$

In other words, the algorithm finds the complex coefficients $\lambda_j$, and thus the vector $\vec{\lambda} = (\lambda_1, \lambda_2, \ldots, \lambda_M)$.

The algorithm is aimed at minimizing the error, which is given by:

$E = \sum_{i=1}^{N} \left| f_{\vec{\lambda}}(x_i) - y_i \right|^2 = \sum_{i=1}^{N} \left| \sum_{j=1}^{M} f_j(x_i)\lambda_j - y_i \right|^2 = \left| F\vec{\lambda} - \vec{y} \right|^2,$

where $F$ is defined to be the following matrix:

$F = \begin{pmatrix} f_1(x_1) & \cdots & f_M(x_1) \\ f_1(x_2) & \cdots & f_M(x_2) \\ \vdots & \ddots & \vdots \\ f_1(x_N) & \cdots & f_M(x_N) \end{pmatrix}$

The quantum least-squares fitting algorithm [2] makes use of a version of Harrow, Hassidim, and Lloyd's quantum algorithm for linear systems of equations (HHL), and outputs the coefficients $\lambda_j$ and the fit quality estimation $E$. It consists of three subroutines: an algorithm for performing a pseudo-inverse operation, one routine for the fit quality estimation, and an algorithm for learning the fit parameters.

Because the quantum algorithm is mainly based on the HHL algorithm, it suggests an exponential improvement [3] in the case where $F$ is sparse and the condition number (namely, the ratio between the largest and the smallest eigenvalues) of both $F F^{\dagger}$ and $F^{\dagger} F$ is small.
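As a point of reference for the quantities the quantum routine estimates, the following purely classical sketch builds the matrix $F$ from assumed, illustrative data points and basis functions, solves the least-squares problem with NumPy, and reports the error $E$ together with the condition number that governs the HHL-based speed-up.

```python
import numpy as np

# Build F with F[i, j] = f_j(x_i) and minimize |F @ lam - y|^2 (classical baseline).
# The data points and basis functions below are illustrative assumptions.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # sample points x_i
y = np.array([1.1, 1.8, 3.2, 4.9, 7.1])   # observed values y_i
basis = [lambda t: np.ones_like(t),        # f_1(x) = 1
         lambda t: t,                      # f_2(x) = x
         lambda t: t ** 2]                 # f_3(x) = x^2

F = np.column_stack([f(x) for f in basis])  # the N x M matrix F

# Least-squares coefficients lambda and the error E = |F lambda - y|^2
lam, _, _, sing_vals = np.linalg.lstsq(F, y, rcond=None)
E = np.linalg.norm(F @ lam - y) ** 2

# The HHL-based speed-up requires F to be sparse and well conditioned; here the
# condition number is taken as the ratio of largest to smallest singular value.
condition_number = sing_vals[0] / sing_vals[-1]
print("coefficients:", lam)
print("error E:", E, " condition number:", condition_number)
```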

Quantum semidefinite programming

Semidefinite programming (SDP) is an optimization subfield dealing with the optimization of a linear objective function (a user-specified function to be minimized or maximized), over the intersection of the cone of positive semidefinite matrices with an affine space. The objective function is an inner product of a matrix $C$ (given as an input) with the variable $X$. Denote by $\mathbb{S}^n$ the space of all $n \times n$ symmetric matrices. The variable $X$ must lie in the (closed convex) cone of positive semidefinite symmetric matrices $\mathbb{S}^n_+$. The inner product of two matrices $A$ and $B$ is defined as:

$\langle A, B \rangle_{\mathbb{S}^n} = \operatorname{tr}(A^{T} B) = \sum_{i=1, j=1}^{n} A_{ij} B_{ij}$

The problem may have additional constraints (given as inputs), also usually formulated as inner products. Each constraint forces the inner product of a matrix $A_k$ (given as an input) with the optimization variable $X$ to be smaller than a specified value $b_k$ (given as an input). Finally, the SDP problem can be written as:

$\min_{X \in \mathbb{S}^n} \; \langle C, X \rangle_{\mathbb{S}^n} \quad \text{subject to} \quad \langle A_k, X \rangle_{\mathbb{S}^n} \leq b_k, \; k = 1, \ldots, m, \quad X \in \mathbb{S}^n_+$

The best classical algorithm is not known to unconditionally run in polynomial time. The corresponding feasibility problem is known to either lie outside of the union of the complexity classes NP and co-NP, or in the intersection of NP and co-NP. [4]
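For concreteness, the snippet below solves a small SDP of exactly this standard form with the classical CVXPY modelling library (assumed to be installed with its default SDP-capable solver); the matrices $C$, $A_1$ and the bound $b_1$ are arbitrary illustrative choices.

```python
import cvxpy as cp
import numpy as np

# Toy instance of  min <C, X>  s.t.  <A_1, X> <= b_1,  X positive semidefinite.
C = np.array([[1.0, 0.2], [0.2, 2.0]])
A1 = np.array([[1.0, 0.0], [0.0, 1.0]])   # <A1, X> = tr(X)
b1 = 1.0

X = cp.Variable((2, 2), symmetric=True)
constraints = [X >> 0,                     # X lies in the PSD cone
               cp.trace(A1 @ X) <= b1]     # linear inner-product constraint
problem = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
problem.solve()

print("optimal value:", problem.value)
print("optimal X:\n", X.value)
```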

The quantum algorithm

The algorithm inputs are $A_1, \ldots, A_m$, $C$, $b_1, \ldots, b_m$ and parameters regarding the solution's trace, precision and optimal value (the objective function's value at the optimal point).

The quantum algorithm [5] consists of several iterations. In each iteration, it solves a feasibility problem, namely, finds any solution $X$ satisfying the following conditions (given a threshold $t$):

$\langle C, X \rangle_{\mathbb{S}^n} \leq t, \qquad \langle A_k, X \rangle_{\mathbb{S}^n} \leq b_k, \; k = 1, \ldots, m, \qquad X \in \mathbb{S}^n_+$

In each iteration, a different threshold $t$ is chosen, and the algorithm outputs either a solution $X$ such that $\langle C, X \rangle_{\mathbb{S}^n} \leq t$ (and the other constraints are satisfied, too) or an indication that no such solution exists. The algorithm performs a binary search to find the minimal threshold $t$ for which a solution $X$ still exists: this gives the minimal solution to the SDP problem.
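The outer loop is purely classical. The sketch below shows the binary search over the threshold, with a stand-in `feasible(t)` predicate playing the role of the quantum feasibility subroutine; the predicate and its bracketing bounds are hypothetical placeholders, not the routine of [5].

```python
def minimize_objective(feasible, lo, hi, tol=1e-6):
    """Binary search for the smallest threshold t with feasible(t) == True.

    `feasible` stands in for the quantum feasibility subroutine and is assumed
    monotone: once a solution with objective <= t exists, it exists for all
    larger t.  `lo` and `hi` must bracket the optimal value.
    """
    while hi - lo > tol:
        t = (lo + hi) / 2.0
        if feasible(t):
            hi = t   # a solution with objective <= t exists; tighten from above
        else:
            lo = t   # no such solution; the optimum lies above t
    return hi


# Toy usage with a classical stand-in: pretend the true optimum is 0.42.
print(minimize_objective(lambda t: t >= 0.42, lo=0.0, hi=1.0))
```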

The quantum algorithm provides a quadratic improvement over the best classical algorithm in the general case, and an exponential improvement when the input matrices are of low rank.

Quantum combinatorial optimization

The combinatorial optimization problem is aimed at finding an optimal object from a finite set of objects. The problem can be phrased as a maximization of an objective function which is a sum of Boolean functions. Each Boolean function $C_\alpha \colon \{0,1\}^n \to \{0,1\}$ gets as input the $n$-bit string $z = z_1 z_2 \ldots z_n$ and gives as output one bit (0 or 1). The combinatorial optimization problem of $n$ bits and $m$ clauses is finding an $n$-bit string $z$ that maximizes the function

$C(z) = \sum_{\alpha=1}^{m} C_\alpha(z)$

Approximate optimization is a way of finding an approximate solution to an optimization problem, which is often NP-hard. The approximated solution of the combinatorial optimization problem is a string $z$ that is close to maximizing $C(z)$.
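To make the objective concrete, the sketch below evaluates $C(z)$ for a small, made-up set of clauses and maximizes it by exhaustive search; this is the classical brute-force baseline, not a quantum algorithm.

```python
from itertools import product

# Three illustrative clauses over n = 3 bits (each returns 0 or 1).
clauses = [
    lambda z: z[0] ^ z[1],         # satisfied when bits 0 and 1 differ
    lambda z: z[1] | z[2],         # satisfied when bit 1 or bit 2 is set
    lambda z: 1 - (z[0] & z[2]),   # satisfied unless bits 0 and 2 are both set
]

def C(z):
    """Objective C(z): number of satisfied clauses."""
    return sum(clause(z) for clause in clauses)

# Exhaustive search over all 2^n bit strings (feasible only for small n).
best = max(product((0, 1), repeat=3), key=C)
print("best string:", best, "with C(z) =", C(best))
```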

Quantum approximate optimization algorithm

For combinatorial optimization, the quantum approximate optimization algorithm (QAOA) [6] briefly had a better approximation ratio than any known polynomial time classical algorithm (for a certain problem), [7] until a more effective classical algorithm was proposed. [8] The relative speed-up of the quantum algorithm is an open research question.

QAOA consists of the following steps:

  1. Defining a cost Hamiltonian $H_C$ such that its ground state encodes the solution to the optimization problem.
  2. Defining a mixer Hamiltonian $H_M$.
  3. Defining the oracles $U_C(\gamma) = e^{-i\gamma H_C}$ and $U_M(\alpha) = e^{-i\alpha H_M}$, with parameters $\gamma$ and $\alpha$.
  4. Repeated application of the oracles $U_C$ and $U_M$, in the order $U(\vec{\gamma}, \vec{\alpha}) = U_M(\alpha_p)\,U_C(\gamma_p) \cdots U_M(\alpha_1)\,U_C(\gamma_1)$ (see the sketch following this list).
  5. Preparing an initial state that is a superposition of all possible computational basis states, and applying $U(\vec{\gamma}, \vec{\alpha})$ to it.
  6. Using classical methods to optimize the parameters $\vec{\gamma}, \vec{\alpha}$ and measuring the output state of the optimized circuit to obtain the approximate optimal solution to the cost Hamiltonian. The optimal parameters are those that optimize the expectation value of the cost Hamiltonian $\langle H_C \rangle$ (minimizing it when, as above, the solution is encoded in its ground state).
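A minimal numerical sketch of steps 3–5 is given below, using dense matrices and matrix exponentials (tractable only for a handful of qubits); the two-qubit cost Hamiltonian and the angles are arbitrary illustrative choices, not a prescription from the original proposal.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and a 2-qubit toy problem (illustrative only).
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H_C = np.kron(Z, Z)                     # cost Hamiltonian; its ground states are |01> and |10>
H_M = np.kron(X, I2) + np.kron(I2, X)   # mixer Hamiltonian: sum of Pauli-X on each qubit

def U_C(gamma):
    return expm(-1j * gamma * H_C)      # oracle e^{-i gamma H_C}

def U_M(alpha):
    return expm(-1j * alpha * H_M)      # oracle e^{-i alpha H_M}

def qaoa_state(gammas, alphas):
    """Apply p layers of U_M(alpha_i) U_C(gamma_i) to the uniform superposition."""
    state = np.full(4, 0.5, dtype=complex)   # |+>|+> on two qubits
    for g, a in zip(gammas, alphas):
        state = U_M(a) @ (U_C(g) @ state)
    return state

psi = qaoa_state(gammas=[0.8, 0.4], alphas=[0.3, 0.6])   # p = 2 layers, arbitrary angles
print("expectation <H_C> =", np.real(psi.conj() @ H_C @ psi))
```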
Figure: Sample QAOA ansatz for a three-qubit circuit.

The layout of the algorithm, viz. the use of cost and mixer Hamiltonians, is inspired by the quantum adiabatic theorem, which states that, starting in the ground state of a time-dependent Hamiltonian, if the Hamiltonian evolves slowly enough, the final state will be the ground state of the final Hamiltonian. Moreover, the adiabatic theorem can be generalized to any other eigenstate as long as there is no overlap (degeneracy) between different eigenstates across the evolution. Identifying the initial Hamiltonian with $H_M$ and the final Hamiltonian with $H_C$, whose ground state encodes the solution to the optimization problem of interest, one can approximate the optimization problem as the adiabatic evolution of the Hamiltonian from the initial to the final one, whose ground (eigen)state gives the optimal solution.

In general, QAOA relies on the use of unitary operators dependent on $2p$ angles (parameters), where $p \geq 1$ is an input integer that can be identified with the number of layers of the oracle $U(\vec{\gamma}, \vec{\alpha})$. These operators are iteratively applied on a state that is an equal-weighted quantum superposition of all the possible states in the computational basis. In each iteration, the state is measured in the computational basis and the Boolean objective function $C(z)$ is estimated. The angles are then updated classically to increase $C(z)$. After this procedure is repeated a sufficient number of times, the value of $C(z)$ is almost optimal, and the state being measured is close to optimal as well. A sample circuit that implements QAOA on a quantum computer is given in the figure. This procedure is highlighted below using the example of finding the minimum vertex cover of a graph. [9]
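Before turning to that example, the measure-and-update loop just described can be mimicked classically for a smaller toy problem. The sketch below, for a single-edge MaxCut instance with $p = 1$ (all problem details are illustrative assumptions), samples the QAOA state in the computational basis, estimates $C(z)$ from the samples, and lets a gradient-free classical optimizer update the two angles.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Toy problem: MaxCut on a single edge, C(z) = 1 when the two bits differ.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
C_diag = 0.5 * (np.eye(4) - np.kron(Z, Z))   # diagonal operator with C(z) on |z>
H_M = np.kron(X, I2) + np.kron(I2, X)        # mixer: X_0 + X_1

def qaoa_state(angles):
    """p = 1 QAOA state: U_M(alpha) U_C(gamma) applied to the uniform superposition."""
    gamma, alpha = angles
    state = np.full(4, 0.5, dtype=complex)
    return expm(-1j * alpha * H_M) @ expm(-1j * gamma * C_diag) @ state

def estimated_C(angles, shots=500):
    """Estimate <C(z)> from computational-basis samples of the QAOA state."""
    probs = np.abs(qaoa_state(angles)) ** 2
    samples = rng.choice(4, size=shots, p=probs / probs.sum())
    return np.real(np.diag(C_diag))[samples].mean()

# Classical outer loop: update the 2p angles to increase the sampled estimate of C(z).
result = minimize(lambda ang: -estimated_C(ang), x0=[0.5, 0.5], method="COBYLA")
print("optimized (gamma, alpha):", result.x, " estimated <C>:", -result.fun)
```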

QAOA for finding the minimum vertex cover of a graph

The goal here is to find the minimum vertex cover of a graph: a collection of vertices such that each edge in the graph contains at least one of the vertices in the cover. Hence, these vertices “cover” all the edges. We wish to find the vertex cover that has the smallest possible number of vertices. Vertex covers can be represented by a bit string where each bit denotes whether the corresponding vertex is present in the cover. For example, the bit string 0101 represents a cover consisting of the second and fourth vertex in a graph with four vertices.
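A short sketch of this bit-string encoding is given below, assuming the edge set (0, 1), (0, 2), (1, 2), (2, 3), which is consistent with the example graph discussed next; it checks whether a bit string is a cover and finds the minimum covers by exhaustive search.

```python
from itertools import product

# A vertex cover as a bit string: bit i is 1 iff vertex i is in the cover.
# The edge list is an assumed 4-vertex example used only for illustration.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

def is_cover(bits, edges):
    """True if every edge has at least one endpoint selected by the bit string."""
    return all(bits[u] or bits[v] for u, v in edges)

# Brute-force search for the minimum vertex cover(s) among all 2^n bit strings.
n = 4
covers = [bits for bits in product((0, 1), repeat=n) if is_cover(bits, edges)]
smallest = min(sum(bits) for bits in covers)
print([bits for bits in covers if sum(bits) == smallest])
```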

Figure: Sample graph to illustrate the minimum vertex cover problem.

Consider the graph given in the figure. It has four vertices, and there are two minimum vertex covers for this graph: vertices 0 and 2, and vertices 1 and 2. These can be represented by the bit strings 1010 and 0110, respectively. The goal of the algorithm is to sample these bit strings with high probability. In this case, the cost Hamiltonian has two ground states, |1010⟩ and |0110⟩, coinciding with the solutions of the problem. The mixer Hamiltonian is the simple, non-commuting sum of Pauli-X operations on each node of the graph, $H_M = X_0 + X_1 + X_2 + X_3$.
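The mixer can be assembled explicitly as the sum of single-qubit Pauli-X terms. The sketch below does so for the four-vertex graph (with the same assumed edge set as above) and also builds one common penalty-style diagonal cost Hamiltonian that rewards small covers and penalizes uncovered edges; this construction is an illustrative assumption, not necessarily the exact Hamiltonian used in the referenced demo.

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]   # assumed edge set for the example graph

def single_qubit(op, qubit):
    """Embed a single-qubit operator acting on `qubit` into the n-qubit space."""
    return reduce(np.kron, [op if q == qubit else I2 for q in range(n)])

# Mixer Hamiltonian: non-commuting sum of Pauli-X on each vertex, H_M = X_0 + X_1 + X_2 + X_3.
H_M = sum(single_qubit(X, q) for q in range(n))

# One common penalty-style cost, evaluated on each basis state |z>: count the
# vertices in the cover and add a penalty for every uncovered edge.
def cost(z, penalty=2.0):
    uncovered = sum(1 for u, v in edges if not (z[u] or z[v]))
    return sum(z) + penalty * uncovered

basis = [tuple(int(b) for b in format(i, f"0{n}b")) for i in range(2 ** n)]
H_C = np.diag([cost(z) for z in basis]).astype(complex)

# The ground states of this H_C are the two minimum covers, |0110> and |1010>.
energies = np.diag(H_C).real
ground = np.argwhere(np.isclose(energies, energies.min())).ravel()
print(["".join(map(str, basis[i])) for i in ground])
```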

Figure: Output of the QAOA implementation in Qiskit for the minimum vertex cover problem. Note that the bit string |1010⟩ appears flipped as |0101⟩ because Qiskit uses reverse ordering of bits.
Figure: Qiskit implementation of QAOA for the minimum vertex cover problem.

Implementing QAOA for this four-qubit circuit with two layers of the ansatz in Qiskit (see figure) and optimizing the parameters leads to the probability distribution over output states shown in the figure. It shows that the states |0110⟩ and |1010⟩ have the highest probabilities of being measured, as expected.
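Because of the bit-order convention noted in the figure caption, measurement results have to be reversed before they are read as vertex-cover bit strings. A small post-processing sketch, with a hypothetical counts dictionary standing in for the real Qiskit measurement results:

```python
# Qiskit reports measurement bit strings in little-endian order (qubit 0 is the
# rightmost character), so keys must be reversed before reading them as
# vertex-cover bit strings.  The counts below are hypothetical placeholder data.
counts = {"0110": 412, "0101": 397, "1100": 84, "0011": 71, "0000": 60}

def to_vertex_order(qiskit_bitstring):
    """Reverse a Qiskit bit string so that character i corresponds to vertex i."""
    return qiskit_bitstring[::-1]

top = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:2]
for qiskit_key, shots in top:
    print(f"{qiskit_key} (Qiskit order) -> {to_vertex_order(qiskit_key)} (vertex order), {shots} shots")
```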

Generalisation of QAOA to constrained combinatorial optimisation

In principle, the optimal value of $C(z)$ can be reached up to arbitrary precision; this is guaranteed by the adiabatic theorem [10] or, alternatively, by the universality of the QAOA unitaries. [11] However, it is an open question whether this can be done in a feasible way. For example, it was shown that QAOA exhibits a strong dependence on the ratio of a problem's constraints to variables (problem density), placing a limiting restriction on the algorithm's capacity to minimize a corresponding objective function. [12]

It was soon recognized that a generalization of the QAOA process is essentially an alternating application of a continuous-time quantum walk on an underlying graph followed by a quality-dependent phase shift applied to each solution state. This generalized QAOA was termed QWOA (Quantum Walk-based Optimisation Algorithm). [13]

In the paper "How many qubits are needed for quantum computational supremacy?", [14] the authors conclude that a QAOA circuit with 420 qubits and 500 constraints would require at least one century to be simulated using a classical simulation algorithm running on state-of-the-art supercomputers, and that such a circuit would therefore be sufficient for quantum computational supremacy.

A rigorous comparison of QAOA with classical algorithms can give estimates on the depth and number of qubits required for quantum advantage. A study of QAOA applied to MaxCut indicates that a sufficiently large circuit depth is required for scalable advantage. [15]

Variations of QAOA

Several variations to the basic structure of QAOA have been proposed, [16] which include variations to the ansatz of the basic algorithm. The choice of ansatz typically depends on the problem type, such as combinatorial problems represented as graphs, or problems strongly influenced by hardware design. However, ansatz design must balance specificity and generality to avoid overfitting and to maintain applicability to a wide range of problems. For this reason, designing optimal ansätze for QAOA is an extensively researched topic. Some of the proposed variants are:

  1. Multi-angle QAOA [17]
  2. QAOA+ [18]
  3. Digitized counterdiabatic QAOA [19]
  4. Quantum alternating operator ansatz, [20] which allows for constraints on the optimization problem.

Another line of work on QAOA focuses on techniques for parameter optimization, which aims at selecting a good set of initial parameters for a given problem and avoiding barren plateaus: regions of parameter space in which the energy landscape of the cost Hamiltonian becomes flat, so that optimization stalls.

Finally, there has been significant research interest in leveraging specific hardware to enhance the performance of QAOA across various platforms, such as trapped ions, neutral atoms, superconducting qubits, and photonic quantum computers. The goals of these approaches include overcoming hardware connectivity limitations and mitigating noise-related issues to broaden the applicability of QAOA to a wide range of combinatorial optimization problems.

See also

  * Adiabatic quantum computation
  * Harrow–Hassidim–Lloyd (HHL) algorithm
  * Variational quantum eigensolver
  * Convex optimization

References

  1. Moll, Nikolaj; Barkoutsos, Panagiotis; Bishop, Lev S.; Chow, Jerry M.; Cross, Andrew; Egger, Daniel J.; Filipp, Stefan; Fuhrer, Andreas; Gambetta, Jay M.; Ganzhorn, Marc; Kandala, Abhinav; Mezzacapo, Antonio; Müller, Peter; Riess, Walter; Salis, Gian; Smolin, John; Tavernelli, Ivano; Temme, Kristan (2018). "Quantum optimization using variational algorithms on near-term quantum devices". Quantum Science and Technology. 3 (3): 030503. arXiv: 1710.01022 . Bibcode:2018QS&T....3c0503M. doi:10.1088/2058-9565/aab822. S2CID   56376912.
  2. Wiebe, Nathan; Braun, Daniel; Lloyd, Seth (2 August 2012). "Quantum Algorithm for Data Fitting". Physical Review Letters. 109 (5): 050505. arXiv: 1204.5242 . Bibcode:2012PhRvL.109e0505W. doi:10.1103/PhysRevLett.109.050505. PMID   23006156. S2CID   118439810.
  3. Montanaro, Ashley (12 January 2016). "Quantum algorithms: an overview". npj Quantum Information . 2: 15023. arXiv: 1511.04206 . Bibcode:2016npjQI...215023M. doi:10.1038/npjqi.2015.23. S2CID   2992738.
  4. Ramana, Motakuri V. (1997). "An exact duality theory for semidefinite programming and its complexity implications". Mathematical Programming. 77: 129–162. doi:10.1007/BF02614433. S2CID   12886462.
  5. Brandao, Fernando G. S. L.; Svore, Krysta (2016). "Quantum Speed-ups for Semidefinite Programming". arXiv: 1609.05537 [quant-ph].
  6. Farhi, Edward; Goldstone, Jeffrey; Gutmann, Sam (2014). "A Quantum Approximate Optimization Algorithm". arXiv: 1411.4028 [quant-ph].
  7. Farhi, Edward; Goldstone, Jeffrey; Gutmann, Sam (2014). "A Quantum Approximate Optimization Algorithm Applied to a Bounded Occurrence Constraint Problem". arXiv: 1412.6062 [quant-ph].
  8. Barak, Boaz; Moitra, Ankur; O'Donnell, Ryan; Raghavendra, Prasad; Regev, Oded; Steurer, David; Trevisan, Luca; Vijayaraghavan, Aravindan; Witmer, David; Wright, John (2015). "Beating the random assignment on constraint satisfaction problems of bounded degree". arXiv: 1505.03424 [cs.CC].
  9. Ceroni, Jack (2020-11-18). "Intro to QAOA". PennyLane Demos.
  10. Farhi, Edward; Goldstone, Jeffrey; Gutmann, Sam (2014). "A Quantum Approximate Optimization Algorithm". arXiv: 1411.4028 [quant-ph].
  11. Morales, M. E.; Biamonte, J. D.; Zimborás, Z. (2019-09-20). "On the universality of the quantum approximate optimization algorithm". Quantum Information Processing. 19 (9): 291. arXiv: 1909.03123 . doi:10.1007/s11128-020-02748-9.
  12. Akshay, V.; Philathong, H.; Morales, M. E. S.; Biamonte, J. D. (2020-03-05). "Reachability Deficits in Quantum Approximate Optimization". Physical Review Letters. 124 (9): 090504. arXiv: 1906.11259 . Bibcode:2020PhRvL.124i0504A. doi:10.1103/PhysRevLett.124.090504. PMID   32202873. S2CID   195699685.
  13. Marsh, S.; Wang, J. B. (2020-06-08). "Combinatorial optimization via highly efficient quantum walks". Physical Review Research. 2 (2): 023302. arXiv: 1912.07353 . Bibcode:2020PhRvR...2b3302M. doi:10.1103/PhysRevResearch.2.023302. S2CID   216080740.
  14. Dalzell, Alexander M.; Harrow, Aram W.; Koh, Dax Enshan; La Placa, Rolando L. (2020-05-11). "How many qubits are needed for quantum computational supremacy?". Quantum. 4: 264. arXiv: 1805.05224 . Bibcode:2020Quant...4..264D. doi: 10.22331/q-2020-05-11-264 . ISSN   2521-327X.
  15. Lykov, Danylo; Wurtz, Jonathan; Poole, Cody; Saffman, Mark; Noel, Tom; Alexeev, Yuri (2023). "Sampling frequency thresholds for the quantum advantage of the quantum approximate optimization algorithm". npj Quantum Information. 9: 73. arXiv: 2206.03579 . Bibcode:2023npjQI...9...73L. doi:10.1038/s41534-023-00718-4.
  16. Blekos, Kostas; Brand, Dean; Ceschini, Andrea; Chou, Chiao-Hui; Li, Rui-Hao; Pandya, Komal; Summer, Alessandro (June 2024). "A Review on Quantum Approximate Optimization Algorithm and its Variants". Physics Reports. 1068: 1–66. arXiv: 2306.09198 . Bibcode:2024PhR..1068....1B. doi:10.1016/j.physrep.2024.03.002.
  17. Herrman, Rebekah; Lotshaw, Phillip C.; Ostrowski, James; Humble, Travis S.; Siopsis, George (2022-04-26). "Multi-angle quantum approximate optimization algorithm". Scientific Reports. 12 (1): 6781. arXiv: 2109.11455 . Bibcode:2022NatSR..12.6781H. doi:10.1038/s41598-022-10555-8. ISSN   2045-2322. PMC   9043219 . PMID   35474081.
  18. Chalupnik, Michelle; Melo, Hans; Alexeev, Yuri; Galda, Alexey (September 2022). "Augmenting QAOA Ansatz with Multiparameter Problem-Independent Layer". 2022 IEEE International Conference on Quantum Computing and Engineering (QCE). IEEE. pp. 97–103. arXiv: 2205.01192 . doi:10.1109/QCE53715.2022.00028. ISBN   978-1-6654-9113-6.
  19. Chandarana, P.; Hegade, N. N.; Paul, K.; Albarrán-Arriagada, F.; Solano, E.; del Campo, A.; Chen, Xi (2022-02-22). "Digitized-counterdiabatic quantum approximate optimization algorithm". Physical Review Research. 4 (1): 013141. arXiv: 2107.02789 . Bibcode:2022PhRvR...4a3141C. doi:10.1103/PhysRevResearch.4.013141. ISSN   2643-1564.
  20. Hadfield, Stuart; Wang, Zhihui; O'Gorman, Bryan; Rieffel, Eleanor; Venturelli, Davide; Biswas, Rupak (2019-02-12). "From the Quantum Approximate Optimization Algorithm to a Quantum Alternating Operator Ansatz". Algorithms. 12 (2): 34. doi: 10.3390/a12020034 . ISSN   1999-4893.