Quantum optimization algorithms

Quantum optimization algorithms are quantum algorithms that are used to solve optimization problems.[1] Mathematical optimization deals with finding the best solution to a problem (according to some criteria) from a set of possible solutions. Most often, the optimization problem is formulated as a minimization problem, where one tries to minimize an error which depends on the solution: the optimal solution has the minimal error. Optimization techniques are applied in fields such as mechanics, economics and engineering, and as the complexity and amount of data involved grow, more efficient ways of solving optimization problems are needed. Quantum computing may allow problems that are not practically feasible on classical computers to be solved, or may offer a considerable speed-up over the best known classical algorithm.

Quantum data fitting

Data fitting is a process of constructing a mathematical function that best fits a set of data points. The fit's quality is measured by some criteria, usually the distance between the function and the data points.

Quantum least squares fitting

One of the most common types of data fitting is solving the least squares problem, minimizing the sum of the squares of differences between the data points and the fitted function.

The algorithm is given $N$ input data points $(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)$ and $M$ continuous functions $f_1, f_2, \ldots, f_M$. The algorithm finds and gives as output a continuous function $f_{\vec{\lambda}}$ that is a linear combination of $f_j$:

$f_{\vec{\lambda}}(x) = \sum_{j=1}^{M} f_j(x)\,\lambda_j$

In other words, the algorithm finds the complex coefficients $\lambda_j$, and thus the vector $\vec{\lambda} = (\lambda_1, \lambda_2, \ldots, \lambda_M)$.

The algorithm is aimed at minimizing the error, which is given by:

$E = \sum_{i=1}^{N} \left| f_{\vec{\lambda}}(x_i) - y_i \right|^2 = \sum_{i=1}^{N} \left| \sum_{j=1}^{M} f_j(x_i)\,\lambda_j - y_i \right|^2 = \left| F\vec{\lambda} - \vec{y} \right|^2$

where $F$ is defined to be the following matrix:

$F = \begin{pmatrix} f_1(x_1) & \cdots & f_M(x_1) \\ f_1(x_2) & \cdots & f_M(x_2) \\ \vdots & & \vdots \\ f_1(x_N) & \cdots & f_M(x_N) \end{pmatrix}$
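
The quantum routine itself requires a quantum computer; the following is a minimal classical NumPy sketch of the same least-squares problem, in which the pseudo-inverse $F^{+}$ is applied explicitly (the data points and basis functions are illustrative assumptions, not taken from the paper):

    import numpy as np

    # Toy data points (x_i, y_i) and basis functions f_1(x) = 1, f_2(x) = x.
    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.1, 2.9, 5.2, 6.8])
    basis = [lambda t: np.ones_like(t), lambda t: t]

    # Build F with entries F_ij = f_j(x_i), as in the matrix above.
    F = np.column_stack([f(x) for f in basis])

    lam = np.linalg.pinv(F) @ y              # lambda = F^+ y (pseudo-inverse step)
    E = np.sum(np.abs(F @ lam - y) ** 2)     # error |F lambda - y|^2
    print(lam, E)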

The quantum least-squares fitting algorithm[2] makes use of a version of Harrow, Hassidim, and Lloyd's quantum algorithm for linear systems of equations (HHL), and outputs the coefficients $\vec{\lambda}$ and the fit quality estimation $E$. It consists of three subroutines: an algorithm for performing a pseudo-inverse operation, one for estimating the fit quality, and one for learning the fit parameters.

Because the quantum algorithm is mainly based on the HHL algorithm, it suggests an exponential improvement[3] in the case where $F$ is sparse and the condition number (namely, the ratio between the largest and the smallest eigenvalues) of both $F F^{\dagger}$ and $F^{\dagger} F$ is small.
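
These condition numbers are easy to inspect classically; a sketch reusing the toy $F$ from above (the nonzero spectra of $F F^{\dagger}$ and $F^{\dagger} F$ coincide, so numerically zero eigenvalues are dropped):

    import numpy as np

    F = np.column_stack([np.ones(4), np.arange(4.0)])   # toy F from the sketch above
    for M in (F @ F.conj().T, F.conj().T @ F):
        ev = np.linalg.eigvalsh(M)
        ev = ev[ev > 1e-12]            # discard numerically zero eigenvalues
        print(ev.max() / ev.min())     # condition number: largest / smallest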

Quantum semidefinite programming

Semidefinite programming (SDP) is an optimization subfield dealing with the optimization of a linear objective function (a user-specified function to be minimized or maximized), over the intersection of the cone of positive semidefinite matrices with an affine space. The objective function is an inner product of a matrix $C$ (given as an input) with the variable $X$. Denote by $\mathbb{S}^n$ the space of all $n \times n$ symmetric matrices. The variable $X$ must lie in the (closed convex) cone of positive semidefinite symmetric matrices $\mathbb{S}^n_+$. The inner product of two matrices $A$ and $B$ is defined as:

$\langle A, B \rangle_{\mathbb{S}^n} = \operatorname{tr}(A^{T} B) = \sum_{i=1,\,j=1}^{n} A_{ij} B_{ij}$

The problem may have additional constraints (given as inputs), also usually formulated as inner products. Each constraint forces the inner product of the matrices $A_k$ (given as an input) with the optimization variable $X$ to be smaller than a specified value $b_k$ (given as an input). Finally, the SDP problem can be written as:

$\min_{X \in \mathbb{S}^n} \langle C, X \rangle_{\mathbb{S}^n} \quad \text{subject to} \quad \langle A_k, X \rangle_{\mathbb{S}^n} \leq b_k \;\; (k = 1, \ldots, m), \quad X \succeq 0$
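
A minimal NumPy sketch of these ingredients, checking one candidate $X$ against the objective, a single constraint, and membership in the positive semidefinite cone (all matrices and bounds are illustrative assumptions, not the output of a solver):

    import numpy as np

    def inner(A, B):
        # <A, B>_{S^n} = tr(A^T B) = sum_ij A_ij B_ij
        return np.trace(A.T @ B)

    C  = np.array([[1.0, 0.0], [0.0, 2.0]])   # objective matrix (input)
    A1 = np.array([[1.0, 1.0], [1.0, 0.0]])   # constraint matrix (input)
    b1 = 1.5                                   # constraint bound (input)
    X  = np.array([[0.5, 0.2], [0.2, 0.5]])   # candidate optimization variable

    psd = bool(np.all(np.linalg.eigvalsh(X) >= 0))   # is X in the PSD cone?
    print(inner(C, X), inner(A1, X) <= b1, psd)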

The best classical algorithm is not known to unconditionally run in polynomial time. The corresponding feasibility problem is known to either lie outside of the union of the complexity classes NP and co-NP, or to lie in their intersection.[4]

The quantum algorithm

The algorithm inputs are $A_1, \ldots, A_m, C, b_1, \ldots, b_m$ and parameters regarding the solution's trace, precision and optimal value (the objective function's value at the optimal point).

The quantum algorithm[5] consists of several iterations. In each iteration, it solves a feasibility problem, namely, finds any solution satisfying the following conditions (given a threshold $t$):

$\langle C, X \rangle_{\mathbb{S}^n} \leq t, \quad \langle A_k, X \rangle_{\mathbb{S}^n} \leq b_k \;\; (k = 1, \ldots, m), \quad X \succeq 0$

In each iteration, a different threshold $t$ is chosen, and the algorithm outputs either a solution $X$ such that $\langle C, X \rangle_{\mathbb{S}^n} \leq t$ (and the other constraints are satisfied, too) or an indication that no such solution exists. The algorithm performs a binary search to find the minimal threshold $t$ for which a solution $X$ still exists: this gives the minimal solution to the SDP problem.
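
The classical outer loop of this procedure can be sketched as follows, with a hypothetical feasible(t) standing in for the quantum feasibility subroutine (the oracle in the usage line is a stand-in, not the actual algorithm):

    def minimize_sdp(feasible, lo, hi, eps):
        # Binary search for the minimal threshold t at which the feasibility
        # problem <C, X> <= t (plus the other constraints) has a solution.
        while hi - lo > eps:
            t = (lo + hi) / 2.0
            if feasible(t):     # quantum step: a solution exists at threshold t
                hi = t          # tighten the upper bound
            else:               # quantum step: no solution at threshold t
                lo = t          # raise the lower bound
        return hi               # approximately minimal feasible threshold

    # Toy usage: a stand-in oracle whose true optimum is 0.7.
    print(minimize_sdp(lambda t: t >= 0.7, lo=0.0, hi=2.0, eps=1e-3))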

The quantum algorithm provides a quadratic improvement over the best classical algorithm in the general case, and an exponential improvement when the input matrices are of low rank.

Quantum combinatorial optimization

The combinatorial optimization problem is aimed at finding an optimal object from a finite set of objects. The problem can be phrased as a maximization of an objective function which is a sum of Boolean functions. Each Boolean function $C_\alpha \colon \{0,1\}^n \rightarrow \{0,1\}$ gets as input the $n$-bit string $z = z_1 z_2 \ldots z_n$ and gives as output one bit (0 or 1). The combinatorial optimization problem of $n$ bits and $m$ clauses is finding an $n$-bit string $z$ that maximizes the function

$C(z) = \sum_{\alpha=1}^{m} C_\alpha(z)$
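
For concreteness, here is a toy evaluation of $C(z)$ in which each clause tests whether the two bits on an edge of a 4-cycle differ (a MaxCut-style instance; the graph is an illustrative assumption), together with a brute-force maximization that is only feasible for tiny $n$:

    from itertools import product

    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # clauses: "bits differ" per edge

    def C(z):
        # Each clause C_alpha outputs 1 if the endpoint bits differ, else 0.
        return sum(1 if z[i] != z[j] else 0 for i, j in edges)

    # Brute force over all n-bit strings (exponential; illustration only).
    best = max(product([0, 1], repeat=4), key=C)
    print(best, C(best))   # e.g. (0, 1, 0, 1) satisfies all 4 clauses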

Approximate optimization is a way of finding an approximate solution to an optimization problem, which is often NP-hard. The approximate solution of the combinatorial optimization problem is a string $z$ that is close to maximizing $C(z)$.

Quantum approximate optimization algorithm

For combinatorial optimization, the quantum approximate optimization algorithm (QAOA)[6] briefly had a better approximation ratio than any known polynomial-time classical algorithm (for a certain problem),[7] until a more effective classical algorithm was proposed.[8] The relative speed-up of the quantum algorithm is an open research question.

The heart of QAOA relies on the use of unitary operators dependent on $2p$ angles, where $p > 1$ is an input integer. These operators are iteratively applied on a state that is an equal-weighted quantum superposition of all the possible states in the computational basis. In each iteration, the state is measured in the computational basis and $C(z)$ is estimated. The angles are then updated classically to increase $C(z)$. After this procedure is repeated a sufficient number of times, the value of $C(z)$ is almost optimal, and the state being measured is close to being optimal as well. In principle, the optimal value of $C(z)$ can be reached up to arbitrary precision; this is guaranteed by the adiabatic theorem[9] or, alternatively, by the universality of the QAOA unitaries.[10] However, it is an open question whether this can be done in a feasible way. For example, it was shown that QAOA exhibits a strong dependence on the ratio of a problem's constraints to variables (problem density), placing a limiting restriction on the algorithm's capacity to minimize a corresponding objective function.[11]
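
The following is a self-contained statevector sketch of this loop for the toy MaxCut instance above, with depth $p = 1$ and a coarse grid search in place of the classical angle-update step that a practical implementation would use (the graph, depth and grid are illustrative assumptions):

    import numpy as np
    from itertools import product

    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    n, dim = 4, 2 ** 4

    # Diagonal of the cost operator: C(z) for each computational-basis state.
    cost = np.array([sum(z[i] != z[j] for i, j in edges)
                     for z in product([0, 1], repeat=n)], dtype=float)

    def apply_mixer(state, beta):
        # Apply exp(-i beta X) to every qubit of the statevector.
        u = np.array([[np.cos(beta), -1j * np.sin(beta)],
                      [-1j * np.sin(beta), np.cos(beta)]])
        psi = state.reshape([2] * n)
        for q in range(n):
            psi = np.moveaxis(np.tensordot(u, psi, axes=([1], [q])), 0, q)
        return psi.reshape(dim)

    def expectation(gammas, betas):
        state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)   # |+>^n
        for gamma, beta in zip(gammas, betas):
            state = np.exp(-1j * gamma * cost) * state   # phase separator exp(-i gamma C)
            state = apply_mixer(state, beta)             # mixer exp(-i beta sum_j X_j)
        return float(np.sum(np.abs(state) ** 2 * cost))  # estimated <C>

    # Coarse grid search over the two p = 1 angles (gamma, beta).
    grid = np.linspace(0.0, np.pi, 25)
    g, b = max(((g, b) for g in grid for b in grid),
               key=lambda ang: expectation([ang[0]], [ang[1]]))
    print(g, b, expectation([g], [b]))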

It was soon recognized that a generalization of the QAOA process is essentially an alternating application of a continuous-time quantum walk on an underlying graph followed by a quality-dependent phase shift applied to each solution state. This generalized QAOA was termed QWOA (Quantum Walk-based Optimisation Algorithm).[12]

In the paper "How many qubits are needed for quantum computational supremacy?",[13] the authors conclude that simulating a QAOA circuit with 420 qubits and 500 constraints would require at least one century using a classical simulation algorithm running on state-of-the-art supercomputers; such a circuit would therefore be sufficient for quantum computational supremacy.

A rigorous comparison of QAOA with classical algorithms can give estimates on the circuit depth $p$ and the number of qubits required for quantum advantage. A study of QAOA applied to MaxCut shows that a sufficiently large circuit depth $p$ is required for scalable advantage.[14]

References

1. Moll, Nikolaj; Barkoutsos, Panagiotis; Bishop, Lev S.; Chow, Jerry M.; Cross, Andrew; Egger, Daniel J.; Filipp, Stefan; Fuhrer, Andreas; Gambetta, Jay M.; Ganzhorn, Marc; Kandala, Abhinav; Mezzacapo, Antonio; Müller, Peter; Riess, Walter; Salis, Gian; Smolin, John; Tavernelli, Ivano; Temme, Kristan (2018). "Quantum optimization using variational algorithms on near-term quantum devices". Quantum Science and Technology. 3 (3): 030503. arXiv:1710.01022. Bibcode:2018QS&T....3c0503M. doi:10.1088/2058-9565/aab822. S2CID 56376912.
2. Wiebe, Nathan; Braun, Daniel; Lloyd, Seth (2 August 2012). "Quantum Algorithm for Data Fitting". Physical Review Letters. 109 (5): 050505. arXiv:1204.5242. Bibcode:2012PhRvL.109e0505W. doi:10.1103/PhysRevLett.109.050505. PMID 23006156. S2CID 118439810.
3. Montanaro, Ashley (12 January 2016). "Quantum algorithms: an overview". npj Quantum Information. 2: 15023. arXiv:1511.04206. Bibcode:2016npjQI...215023M. doi:10.1038/npjqi.2015.23. S2CID 2992738.
4. Ramana, Motakuri V. (1997). "An exact duality theory for semidefinite programming and its complexity implications". Mathematical Programming. 77: 129–162. doi:10.1007/BF02614433. S2CID 12886462.
5. Brandao, Fernando G. S. L.; Svore, Krysta (2016). "Quantum Speed-ups for Semidefinite Programming". arXiv:1609.05537 [quant-ph].
6. Farhi, Edward; Goldstone, Jeffrey; Gutmann, Sam (2014). "A Quantum Approximate Optimization Algorithm". arXiv:1411.4028 [quant-ph].
7. Farhi, Edward; Goldstone, Jeffrey; Gutmann, Sam (2014). "A Quantum Approximate Optimization Algorithm Applied to a Bounded Occurrence Constraint Problem". arXiv:1412.6062 [quant-ph].
8. Barak, Boaz; Moitra, Ankur; O'Donnell, Ryan; Raghavendra, Prasad; Regev, Oded; Steurer, David; Trevisan, Luca; Vijayaraghavan, Aravindan; Witmer, David; Wright, John (2015). "Beating the random assignment on constraint satisfaction problems of bounded degree". arXiv:1505.03424 [cs.CC].
9. Farhi, Edward; Goldstone, Jeffrey; Gutmann, Sam (2014). "A Quantum Approximate Optimization Algorithm". arXiv:1411.4028 [quant-ph].
10. Morales, M. E.; Biamonte, J. D.; Zimborás, Z. (2019-09-20). "On the universality of the quantum approximate optimization algorithm". Quantum Information Processing. 19: 291. arXiv:1909.03123. doi:10.1007/s11128-020-02748-9.
11. Akshay, V.; Philathong, H.; Morales, M. E. S.; Biamonte, J. D. (2020-03-05). "Reachability Deficits in Quantum Approximate Optimization". Physical Review Letters. 124 (9): 090504. arXiv:1906.11259. Bibcode:2020PhRvL.124i0504A. doi:10.1103/PhysRevLett.124.090504. PMID 32202873. S2CID 195699685.
12. Marsh, S.; Wang, J. B. (2020-06-08). "Combinatorial optimization via highly efficient quantum walks". Physical Review Research. 2 (2): 023302. arXiv:1912.07353. Bibcode:2020PhRvR...2b3302M. doi:10.1103/PhysRevResearch.2.023302. S2CID 216080740.
13. Dalzell, Alexander M.; Harrow, Aram W.; Koh, Dax Enshan; La Placa, Rolando L. (2020-05-11). "How many qubits are needed for quantum computational supremacy?". Quantum. 4: 264. arXiv:1805.05224. doi:10.22331/q-2020-05-11-264. ISSN 2521-327X.
14. Lykov, Danylo; Wurtz, Jonathan; Poole, Cody; Saffman, Mark; Noel, Tom; Alexeev, Yuri (2022). "Sampling Frequency Thresholds for Quantum Advantage of Quantum Approximate Optimization Algorithm". arXiv:2206.03579 [quant-ph].