Hilbert's seventeenth problem

Hilbert's seventeenth problem is one of the 23 Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert. It concerns the expression of positive definite rational functions as sums of quotients of squares. The original question may be reformulated as:

Given a multivariate polynomial that takes only non-negative values over the reals, can it be represented as a sum of squares of rational functions?

Hilbert's question can be restricted to homogeneous polynomials of even degree, since a polynomial of odd degree changes sign, and the homogenization of a polynomial takes only nonnegative values if and only if the same is true for the polynomial.
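Both observations can be checked concretely; a minimal sketch in Python, with example polynomials chosen purely for illustration:

```python
import random

# An odd-degree polynomial always changes sign somewhere, so it
# cannot be non-negative: e.g. q(x) = x^3 + x - 1.
def q(x):
    return x**3 + x - 1

# p(x) = x^2 - 2x + 2 = (x - 1)^2 + 1 takes only positive values;
# its homogenization P(x, z) = z^2 * p(x/z) = x^2 - 2xz + 2z^2
# equals (x - z)^2 + z^2, again non-negative everywhere.
def p(x):
    return x**2 - 2 * x + 2

def P(x, z):
    return x**2 - 2 * x * z + 2 * z**2

assert q(10) > 0 > q(-10)          # odd degree: both signs occur

random.seed(0)
for _ in range(10_000):
    x, z = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    assert P(x, z) >= -1e-9        # homogenization stays non-negative
    if abs(z) > 1e-6:              # de-homogenize where z != 0
        assert abs(P(x, z) - z**2 * p(x / z)) <= 1e-8 * max(1.0, abs(P(x, z)))
```

Setting z = 1 recovers p from P, which is why the homogeneous and inhomogeneous formulations of the question are equivalent.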

Motivation

The formulation of the question takes into account that there are non-negative polynomials, for example [1]

    M(x, y) = x⁴y² + x²y⁴ − 3x²y² + 1,

which cannot be represented as a sum of squares of other polynomials. In 1888, Hilbert showed that every non-negative homogeneous polynomial in n variables of degree 2d can be represented as a sum of squares of other polynomials if and only if either (a) n = 2, (b) 2d = 2, or (c) n = 3 and 2d = 4. [2] Hilbert's proof did not exhibit any explicit counterexample: the first one, the polynomial above, was constructed only in 1967 by Motzkin. [3] Furthermore, when the degree 2d is greater than two, there are significantly more non-negative polynomials that cannot be expressed as sums of squares. [4]
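Motzkin's counterexample M(x, y) = x⁴y² + x²y⁴ − 3x²y² + 1 is non-negative by the AM-GM inequality, is not a sum of squares of polynomials, yet it is a sum of four squares of rational functions. A numerical sketch of both facts, using the classical four-square identity:

```python
import random

# Motzkin polynomial: M >= 0 by AM-GM on x^4y^2, x^2y^4 and 1,
# yet M is not a sum of squares of polynomials.
def M(x, y):
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

# Classical representation of M as a sum of four squares of
# rational functions, with common denominator s = x^2 + y^2:
#   M = (x^2 y (s-2)/s)^2 + (x y^2 (s-2)/s)^2
#     + (x y (s-2)/s)^2   + ((x^2 - y^2)/s)^2
def M_as_squares(x, y):
    s = x * x + y * y
    terms = (x * x * y * (s - 2) / s,
             x * y * y * (s - 2) / s,
             x * y * (s - 2) / s,
             (x * x - y * y) / s)
    return sum(t * t for t in terms)

random.seed(1)
for _ in range(10_000):
    x, y = random.uniform(-3.0, 3.0), random.uniform(-3.0, 3.0)
    m = M(x, y)
    assert m >= -1e-9          # non-negativity (exact by AM-GM)
    assert abs(m - M_as_squares(x, y)) <= 1e-8 * max(1.0, abs(m))
```

Note that four rational squares suffice here, matching Pfister's bound of 2^n = 4 squares for n = 2 variables.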

The following tables summarize the cases in which every non-negative homogeneous polynomial, and every non-negative polynomial of even degree, can be represented as a sum of squares of polynomials:

Homogeneous polynomials of degree 2d in n variables:

                     2d = 2    2d = 4    2d ≥ 6
  n = 1              Yes       Yes       Yes
  n = 2              Yes       Yes       Yes
  n = 3              Yes       Yes       No
  n ≥ 4              Yes       No        No

Arbitrary polynomials of degree 2d in n variables:

                     2d = 2    2d = 4    2d ≥ 6
  n = 1              Yes       Yes       Yes
  n = 2              Yes       Yes       No
  n = 3              Yes       No        No
  n ≥ 4              Yes       No        No

Solution and generalizations

The particular case of n = 2 was already solved by Hilbert in 1893. [5] The general problem was solved in the affirmative, in 1927, by Emil Artin, [6] for positive semidefinite functions over the reals or, more generally, over real-closed fields. An algorithmic solution was found by Charles Delzell in 1984. [7] A result of Albrecht Pfister [8] shows that a positive semidefinite form in n variables can be expressed as a sum of 2^n squares. [9]

Dubois showed in 1967 that the answer is negative in general for ordered fields. [10] In this case one can say that a positive polynomial is a sum of weighted squares of rational functions with positive coefficients. [11] McKenna showed in 1975 that all positive semidefinite polynomials with coefficients in an ordered field are sums of weighted squares of rational functions with positive coefficients only if the field is dense in its real closure in the sense that any interval with endpoints in the real closure contains elements from the original field. [12]

A generalization to the matrix case (matrices with polynomial entries that are everywhere positive semidefinite can be expressed as a sum of squares of symmetric matrices with rational function entries) was given by Gondard and Ribenboim [13] and by Procesi and Schacher, [14] with an elementary proof later given by Hillar and Nie. [15]


Minimum number of square rational terms

It is an open question what is the smallest number v(n, d) such that any n-variate non-negative polynomial of degree d can be written as a sum of at most v(n, d) squares of rational functions over the reals.

The best known upper bound (as of 2008) is

    v(n, d) ≤ 2^n,

due to Pfister in 1967. [8]

On the other hand, an n-variable instance of 3-SAT can be realized as a positivity problem for a polynomial with n variables and d = 4. This proves that positivity testing is NP-hard. More precisely, assuming the exponential time hypothesis, v(n, d) grows exponentially in n.

In complex analysis the Hermitian analogue, requiring the squares to be squared norms of holomorphic mappings, is somewhat more complicated, but true for positive polynomials by a result of Quillen. [16] The result of Pfister, on the other hand, fails in the Hermitian case: there is no bound on the number of squares required; see D'Angelo–Lebl. [17]

Notes

  1. Marie-Françoise Roy. The role of Hilbert's problems in real algebraic geometry. Proceedings of the ninth EWM Meeting, Loccum, Germany 1999
  2. Hilbert, David (September 1888). "Ueber die Darstellung definiter Formen als Summe von Formenquadraten". Mathematische Annalen. 32 (3): 342–350. doi:10.1007/bf01443605. S2CID   177804714.
  3. Motzkin, T. S. (1967). "The arithmetic-geometric inequality". In Shisha, Oved (ed.). Inequalities. Academic Press. pp. 205–224.
  4. Blekherman, Grigoriy (2006). "There are significantly more nonnegative polynomials than sums of squares". Israel Journal of Mathematics. 153 (1): 355–380. doi:10.1007/BF02771790. ISSN 0021-2172.
  5. Hilbert, David (December 1893). "Über ternäre definite Formen". Acta Mathematica. 17 (1): 169–197. doi: 10.1007/bf02391990 .
  6. Artin, Emil (1927). "Über die Zerlegung definiter Funktionen in Quadrate". Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg. 5 (1): 100–115. doi:10.1007/BF02952513. S2CID   122607428.
  7. Delzell, C.N. (1984). "A continuous, constructive solution to Hilbert's 17th problem". Inventiones Mathematicae . 76 (3): 365–384. Bibcode:1984InMat..76..365D. doi:10.1007/BF01388465. S2CID   120884276. Zbl   0547.12017.
  8. Pfister, Albrecht (1967). "Zur Darstellung definiter Funktionen als Summe von Quadraten". Inventiones Mathematicae (in German). 4 (4): 229–237. Bibcode:1967InMat...4..229P. doi:10.1007/bf01425382. S2CID 122180608. Zbl 0222.10022.
  9. Lam (2005) p.391
  10. Dubois, D.W. (1967). "Note on Artin's solution of Hilbert's 17th problem". Bull. Am. Math. Soc. 73 (4): 540–541. doi: 10.1090/s0002-9904-1967-11736-1 . Zbl   0164.04502.
  11. Lorenz (2008) p.16
  12. McKenna, K. (1975). New facts about Hilbert's seventeenth problem. Model Theory and Algebra, Lecture Notes in Mathematics. Vol. 498. Springer, Berlin, Heidelberg. pp. 220–230.
  13. Gondard, Danielle; Ribenboim, Paulo (1974). "Le 17e problème de Hilbert pour les matrices". Bull. Sci. Math. (2). 98 (1): 49–56. MR   0432613. Zbl   0298.12104.
  14. Procesi, Claudio; Schacher, Murray (1976). "A non-commutative real Nullstellensatz and Hilbert's 17th problem". Ann. of Math. 2. 104 (3): 395–406. doi:10.2307/1970962. JSTOR   1970962. MR   0432612. Zbl   0347.16010.
  15. Hillar, Christopher J.; Nie, Jiawang (2008). "An elementary and constructive solution to Hilbert's 17th problem for matrices". Proc. Am. Math. Soc. 136 (1): 73–76. arXiv: math/0610388 . doi:10.1090/s0002-9939-07-09068-5. S2CID   119639574. Zbl   1126.12001.
  16. Quillen, Daniel G. (1968). "On the representation of hermitian forms as sums of squares". Invent. Math. 5 (4): 237–242. Bibcode:1968InMat...5..237Q. doi:10.1007/bf01389773. S2CID   119774934. Zbl   0198.35205.
  17. D'Angelo, John P.; Lebl, Jiri (2012). "Pfister's theorem fails in the Hermitian case". Proc. Am. Math. Soc. 140 (4): 1151–1157. arXiv: 1010.3215 . doi:10.1090/s0002-9939-2011-10841-4. S2CID   92993604. Zbl   1309.12001.

