Square-root sum problem

Unsolved problem in computer science:

What is the Turing run-time complexity of the square-root sum problem?


The square-root sum problem (SRS) is a computational decision problem from the field of numerical analysis, with applications to computational geometry.

Definitions

SRS is defined as follows: [1]

Given positive integers $a_1, \dots, a_n$ and an integer $t$, decide whether $\sum_{i=1}^n \sqrt{a_i} \le t$.

An alternative definition is:

Given positive integers $a_1, \dots, a_n$ and $b_1, \dots, b_n$, decide whether $\sum_{i=1}^n \sqrt{a_i} \le \sum_{i=1}^n \sqrt{b_i}$.

The problem was posed in 1981, [2] and likely earlier.
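As a concrete illustration (not taken from the sources above), the decision can be sketched in Python with fixed-precision decimal arithmetic. Its correctness hinges on choosing enough digits, which is exactly the difficulty discussed in the next section.

```python
from decimal import Decimal, getcontext

def srs_le(a, t, digits=50):
    """Heuristically decide whether sqrt(a_1)+...+sqrt(a_n) <= t by
    evaluating each square root to `digits` decimal digits.  The answer
    is only guaranteed if `digits` exceeds the (a priori unknown)
    accuracy needed to separate the sum from t."""
    getcontext().prec = digits
    return sum(Decimal(x).sqrt() for x in a) <= Decimal(t)

print(srs_le([2, 3, 5], 6))  # sqrt(2)+sqrt(3)+sqrt(5) ~ 5.382, so True
```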

Run-time complexity

SRS can be solved in polynomial time in the Real RAM model. [3] However, its run-time complexity in the Turing machine model is open, as of 1997. [1] The main difficulty is that, to solve the problem, the square roots may have to be computed to high accuracy, which may require a large number of bits. The problem is mentioned in the Open Problem Garden. [4]
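To see the difficulty, here is a small self-contained illustration (not from the cited sources): two sums of square roots that 64-bit floating point cannot reliably order, because their true gap is far below the rounding noise at this magnitude.

```python
# sqrt(N+1) + sqrt(N-1) vs 2*sqrt(N): by Taylor expansion the true
# difference is about -1/(4*N**1.5), roughly 2.5e-16 for N = 10**10,
# while each sum is about 2e5.  Double precision carries only ~16
# significant digits, so its rounding noise (~1e-11 in absolute terms
# here) completely swamps the true gap; high-precision decimal
# arithmetic resolves it.
from decimal import Decimal, getcontext

N = 10**10
getcontext().prec = 50
s1 = Decimal(N + 1).sqrt() + Decimal(N - 1).sqrt()
s2 = 2 * Decimal(N).sqrt()
print(s2 - s1)  # a tiny positive gap, about 2.5e-16
```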

Blömer [5] presents a polynomial-time Monte Carlo algorithm for deciding whether a sum of square roots equals zero.

Allender, Bürgisser, Kjeldgaard-Pedersen and Miltersen [6] prove that SRS lies in the counting hierarchy (which is contained in PSPACE).

Separation bounds

One way to solve SRS is to prove a lower bound on the absolute difference $\left|\sum_{i=1}^n \sqrt{a_i} - t\right|$ or $\left|\sum_{i=1}^n \sqrt{a_i} - \sum_{i=1}^n \sqrt{b_i}\right|$. Such a lower bound is called a "separation bound", since it separates the difference from 0. For example, if the absolute difference is at least $2^{-d}$, then all numbers can be rounded to $d$ bits of accuracy, and SRS can be solved in time polynomial in $d$.
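Given such a bound, the comparison becomes algorithmic. A minimal sketch, assuming (hypothetically) that the two sums are either exactly equal or differ by more than $10^{-d}$:

```python
from decimal import Decimal, getcontext

def compare_sums(a, b, d):
    """Compare sum(sqrt(a_i)) with sum(sqrt(b_i)), assuming a separation
    bound: the two sums are either equal or differ by more than 10**-d.
    Under that assumption, working with a few guard digits beyond d
    makes the comparison exact.  Returns -1, 0, or +1."""
    getcontext().prec = d + 10
    diff = sum(Decimal(x).sqrt() for x in a) - sum(Decimal(x).sqrt() for x in b)
    if abs(diff) <= Decimal(10) ** (-d):
        return 0  # within the separation bound: must be an exact tie
    return 1 if diff > 0 else -1

print(compare_sums([1, 8], [2, 2], 30))  # sqrt(1)+sqrt(8) > 2*sqrt(2): prints 1
```

Note that `compare_sums([8], [2, 2], 30)` returns 0, since $\sqrt{8} = 2\sqrt{2}$ exactly.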

This leads to the mathematical problem of proving bounds on this difference. Define $r(n,k)$ as the smallest positive value of the difference $\left|\sum_{i=1}^k \sqrt{a_i} - \sum_{i=1}^k \sqrt{b_i}\right|$, where $a_i$ and $b_i$ are integers between 1 and $n$; define $R(n,k) = -\log r(n,k)$, which is the number of accuracy digits required to solve SRS. Computing $r(n,k)$ is open problem 33 in The Open Problems Project. [7]

In particular, it is interesting whether $R(n,k)$ is in $O(\mathrm{poly}(k, \log n))$. A positive answer would imply that SRS can be solved in polynomial time in the Turing machine model. Several upper and lower bounds on $r(n,k)$ are known. [8] [9] [10] [11] [12]
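For small parameters, $r(n,k)$ can be computed by brute force. A sketch for illustration only (its running time is exponential in $k$):

```python
from decimal import Decimal, getcontext
from itertools import combinations, combinations_with_replacement

def r(n, k, digits=40):
    """Brute-force r(n, k): the smallest positive |sum sqrt(a_i) - sum sqrt(b_i)|
    over size-k multisets of integers in 1..n.  Exponential in k."""
    getcontext().prec = digits
    sums = [sum(Decimal(x).sqrt() for x in c)
            for c in combinations_with_replacement(range(1, n + 1), k)]
    # Differences at rounding-noise level are exact ties, not real gaps.
    eps = Decimal(10) ** (5 - digits)
    return min(d for d in (abs(s - t) for s, t in combinations(sums, 2))
               if d > eps)

print(r(5, 1))  # = sqrt(5) - 2 ~ 0.2360679...
```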

Applications

SRS is important in computational geometry, as Euclidean distances are given by square roots, and many geometric problems (e.g., the minimum spanning tree in the plane and the Euclidean traveling salesman problem) require computing sums of distances.

Etessami and Yannakakis [13] show a reduction from SRS to the problem of termination of recursive concurrent stochastic games.

Relation to semidefinite programming

SRS also has theoretical importance, as it is a simple special case of a semidefinite programming (SDP) feasibility problem. Consider the matrix $\begin{pmatrix} 1 & x \\ x & a \end{pmatrix}$. This matrix is positive semidefinite iff $x^2 \le a$, iff $|x| \le \sqrt{a}$. Therefore, to solve SRS, we can construct a feasibility problem with $n$ constraints of the form $\begin{pmatrix} 1 & x_i \\ x_i & a_i \end{pmatrix} \succeq 0$, and an additional linear constraint $x_1 + \cdots + x_n \ge t$. The resulting SDP is feasible if and only if $\sum_{i=1}^n \sqrt{a_i} \ge t$. As the run-time complexity of SRS in the Turing machine model is open, the same is true for SDP feasibility (as of 1997).
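The positive-semidefiniteness condition for this 2×2 matrix reduces to a single determinant inequality, which a one-line check makes concrete:

```python
# [[1, x], [x, a]] is positive semidefinite iff its determinant
# a - x**2 is nonnegative (the diagonal entry 1 is always nonnegative,
# and a - x**2 >= 0 forces a >= x**2 >= 0), i.e. iff |x| <= sqrt(a).
def psd(x, a):
    return a - x * x >= 0

print(psd(1.41, 2), psd(1.42, 2))  # True False  (sqrt(2) ~ 1.41421)
```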

Extensions

Kayal and Saha [14] extend the problem from integers to polynomials. Their results imply a solution to SRS for a special class of integers.

References

  1. Goemans, Michel X. (1997-10-01). "Semidefinite programming in combinatorial optimization". Mathematical Programming. 79 (1): 143–161. doi:10.1007/BF02614315. ISSN 1436-4646. S2CID 17221714.
  2. O'Rourke, Joseph (1981). "Advanced problem 6369". Amer. Math. Monthly. 88 (10): 769.
  3. Tiwari, Prasoon (1992-12-01). "A problem that is easier to solve on the unit-cost algebraic RAM". Journal of Complexity. 8 (4): 393–397. doi:10.1016/0885-064X(92)90003-T. ISSN 0885-064X.
  4. "Complexity of square-root sum | Open Problem Garden". garden.irmacs.sfu.ca. Retrieved 2024-01-01.
  5. "CSDL | IEEE Computer Society". www.computer.org. Retrieved 2024-01-01.
  6. Allender, Eric; Bürgisser, Peter; Kjeldgaard-Pedersen, Johan; Miltersen, Peter Bro (January 2009). "On the Complexity of Numerical Analysis". SIAM Journal on Computing. 38 (5): 1987–2006. doi:10.1137/070697926. ISSN 0097-5397.
  7. Demaine, Erik D.; Mitchell, Joseph; O'Rourke, Joseph. "TOPP: Problem 33: Sum of Square Roots". topp.openproblem.net. Retrieved 2024-01-01.
  8. Qian, Jianbo; Wang, Cao An (2006-12-16). "How much precision is needed to compare two sums of square roots of integers?". Information Processing Letters. 100 (5): 194–198. doi:10.1016/j.ipl.2006.05.002. ISSN 0020-0190.
  9. Burnikel, C.; Fleischer, R.; Mehlhorn, K.; Schirra, S. (2000-05-01). "A Strong and Easily Computable Separation Bound for Arithmetic Expressions Involving Radicals". Algorithmica. 27 (1): 87–99. doi:10.1007/s004530010005. ISSN 1432-0541. S2CID 34502818.
  10. Cheng, Qi; Meng, Xianmeng; Sun, Celi; Chen, Jiazhe (April 2010). "Bounding the sum of square roots via lattice reduction". Mathematics of Computation. 79 (270): 1109–1122. arXiv:0905.4487. Bibcode:2010MaCom..79.1109C. doi:10.1090/S0025-5718-09-02304-7. ISSN 0025-5718.
  11. Cheng, Qi; Li, Yu-Hsin (2011-09-09). "On the minimum gap between sums of square roots of small integers". Theoretical Computer Science. 412 (39): 5458–5465. doi:10.1016/j.tcs.2011.06.014. ISSN 0304-3975.
  12. Eisenbrand, Friedrich; Haeberle, Matthieu; Singer, Neta (2023). "An improved bound on sums of square roots via the subspace theorem". arXiv:2312.02057 [cs.CG].
  13. Etessami, Kousha; Yannakakis, Mihalis (2008-11-11). "Recursive Concurrent Stochastic Games". Logical Methods in Computer Science. 4 (4). arXiv:0810.3581. doi:10.2168/LMCS-4(4:7)2008. ISSN 1860-5974.
  14. Kayal, Neeraj; Saha, Chandan (2012-11-01). "On the Sum of Square Roots of Polynomials and Related Problems". ACM Transactions on Computation Theory. 4 (4): 9:1–9:15. doi:10.1145/2382559.2382560. ISSN 1942-3454. S2CID 7225729.
