Parity problem (sieve theory)

In number theory, the parity problem refers to a limitation in sieve theory that prevents sieves from giving good estimates in many kinds of prime-counting problems. The problem was identified and named by Atle Selberg in 1949. Beginning around 1996, John Friedlander and Henryk Iwaniec developed some parity-sensitive sieves that make the parity problem less of an obstacle.

Statement

Terence Tao gave this "rough" statement of the problem: [1]

Parity problem. If A is a set whose elements are all products of an odd number of primes (or are all products of an even number of primes), then (without injecting additional ingredients), sieve theory is unable to provide non-trivial lower bounds on the size of A. Also, any upper bounds must be off from the truth by a factor of 2 or more.

This problem is significant because it may explain why it is difficult for sieves to "detect primes," in other words to give a non-trivial lower bound for the number of primes with some property. For example, in a sense Chen's theorem is very close to a solution of the twin prime conjecture, since it says that there are infinitely many primes p such that p + 2 is either prime or the product of two primes (semiprime). The parity problem suggests that, because the case of interest has an odd number of prime factors (namely 1), it won't be possible to separate out the two cases using sieves.
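As an illustration (not part of any proof), the primes covered by Chen's theorem, i.e. primes p for which p + 2 is prime or semiprime, can be enumerated directly for small bounds; the helper names and the bound N = 1000 below are our own choices:

```python
def prime_omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

def is_prime(n):
    return n > 1 and prime_omega(n) == 1

N = 1000
# p qualifies when p + 2 has one prime factor (prime) or two (semiprime)
chen_primes = [p for p in range(2, N + 1)
               if is_prime(p) and prime_omega(p + 2) in (1, 2)]
print(len(chen_primes), chen_primes[:8])
```

Note that sieve methods cannot distinguish, within this list, the p for which p + 2 is actually prime: that is exactly the parity obstruction.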

Example

This example is due to Selberg and is given as an exercise with hints by Cojocaru & Murty. [2]: 133–134

The problem is to estimate separately the number of integers ≤ x, with no prime divisors ≤ x^(1/2), that have an even (respectively, odd) number of prime factors. It can be shown that, no matter what the choice of weights in a Brun- or Selberg-type sieve, the upper bound obtained will be at least (2 + o(1)) x / ln x for both problems. But in fact the set with an even number of factors is empty, and so has size 0: any such integer has all its prime factors greater than x^(1/2), and the product of two factors exceeding x^(1/2) already exceeds x. The set with an odd number of factors is therefore just the primes between x^(1/2) and x, so by the prime number theorem its size is (1 + o(1)) x / ln x. Thus these sieve methods are unable to give a useful upper bound for the first set, and overestimate the upper bound on the second set by a factor of 2.
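Selberg's example can be checked numerically. The sketch below, with the arbitrary choice x = 10,000 (all names are illustrative), sifts out the integers in (1, x] with no prime divisor ≤ x^(1/2) and splits them by the parity of their number of prime factors:

```python
import math

def big_omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

x = 10_000
root = math.isqrt(x)  # 100

# primes up to sqrt(x), by trial division
small_primes = [p for p in range(2, root + 1)
                if all(p % q for q in range(2, math.isqrt(p) + 1))]

# integers 1 < n <= x with no prime divisor <= sqrt(x)
sifted = [n for n in range(2, x + 1) if all(n % p for p in small_primes)]

even = [n for n in sifted if big_omega(n) % 2 == 0]
odd = [n for n in sifted if big_omega(n) % 2 == 1]
print(len(even), len(odd))  # 0 1204
```

The survivors are exactly the 1204 primes in (100, 10000], so the "even" class is empty, as the text asserts.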

Parity-sensitive sieves

Beginning around 1996 John Friedlander and Henryk Iwaniec developed some new sieve techniques to "break" the parity problem. [3] [4] One of the triumphs of these new methods is the Friedlander–Iwaniec theorem, which states that there are infinitely many primes of the form a² + b⁴.

Glyn Harman relates the parity problem to the distinction between Type I and Type II information in a sieve. [5]

Karatsuba phenomenon

In 2007 Anatolii Alexeevitch Karatsuba discovered an imbalance between the numbers in an arithmetic progression with given parities of the number of prime factors. His papers [6] [7] were published after his death.

Let ℕ be the set of natural numbers (positive integers), that is, the numbers 1, 2, 3, …. The set of primes, that is, the integers n > 1 that have exactly two distinct divisors (namely, 1 and n), is denoted by ℙ, ℙ ⊂ ℕ. Every natural number n > 1 can be represented as a product of primes (not necessarily distinct), that is, n = p₁ ⋯ p_k with p₁, …, p_k ∈ ℙ, and such a representation is unique up to the order of the factors.

If we form two sets, the first consisting of the positive integers having an even number of prime factors in their canonical representation and the second consisting of those having an odd number, then the two sets are approximately the same size.

If, however, we limit our two sets to those positive integers whose canonical representation contains no primes from a given arithmetic progression, for example no primes p ≡ 3 (mod 4), that is, no primes from the progression 3, 7, 11, 15, …, then among these positive integers those with an even number of prime factors tend to be fewer than those with an odd number. Karatsuba discovered this property. He also found a formula for the phenomenon: a formula for the difference between the cardinalities of the sets of natural numbers with an odd and an even number of prime factors, when those factors satisfy certain restrictions. In all cases, since the sets involved are infinite, by "larger" and "smaller" we mean the limiting behavior of the ratio of the counting functions of the sets as an upper bound on the numbers goes to infinity. In the case of primes restricted by an arithmetic progression, Karatsuba proved that this limit is infinite.

We restate the Karatsuba phenomenon using mathematical terminology.

Let 𝒩₀ and 𝒩₁ be the subsets of ℕ such that n ∈ 𝒩₀ if n contains an even number of prime factors, and n ∈ 𝒩₁ if n contains an odd number of prime factors. Intuitively, the sizes of the two sets 𝒩₀ and 𝒩₁ are approximately the same. More precisely, for all x ≥ 1 we define E(x) and O(x), where E(x) is the cardinality of the set of all numbers n ≤ x from 𝒩₀, and O(x) is the cardinality of the set of all numbers n ≤ x from 𝒩₁. The asymptotic behavior of E(x) and O(x) was derived by E. Landau: [8]

E(x) = x/2 + O(x·exp(−c·√(ln x))), O(x) = x/2 + O(x·exp(−c·√(ln x))),

where c is a positive absolute constant. This shows that

E(x) ∼ O(x) ∼ x/2, x → ∞,

that is, E(x) and O(x) are asymptotically equal.

Further,

E(x) − O(x) = O(x·exp(−c·√(ln x))),

so that the difference between the cardinalities of the two sets is small.
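This near-balance can be observed numerically. The sketch below (with the arbitrary cutoff x = 100,000 and function names of our own choosing) computes the two counting functions, counting n ≤ x with an even (E) and odd (O) number of prime factors, via a smallest-prime-factor sieve:

```python
def parity_counts(x):
    """Count n <= x with an even (E) and odd (O) number of prime
    factors, counted with multiplicity."""
    spf = list(range(x + 1))          # spf[n] = smallest prime factor of n
    for p in range(2, int(x**0.5) + 1):
        if spf[p] == p:               # p is prime
            for m in range(p * p, x + 1, p):
                if spf[m] == m:
                    spf[m] = p
    even = odd = 0
    for n in range(1, x + 1):
        k, m = 0, n
        while m > 1:                  # Omega(n): divide out spf repeatedly
            m //= spf[m]
            k += 1
        if k % 2 == 0:
            even += 1
        else:
            odd += 1
    return even, odd

E, O = parity_counts(100_000)
print(E, O, E - O)
```

The difference E(x) − O(x) is the summatory Liouville function, and at this range it is a tiny fraction of x, in line with Landau's estimate.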

On the other hand, let a > 2 be a natural number and let l₁, …, l_K be natural numbers such that 1 ≤ l_j < a; (l_j, a) = 1 for every j; and the l_j are pairwise distinct modulo a. Let ℙ_A be the set of primes belonging to the progressions l_j + am, m = 0, 1, 2, …; ℙ_A ⊂ ℙ′, where ℙ′ is the set of all primes not dividing a.

We denote by 𝒩′ the set of natural numbers that contain no prime factors from ℙ_A, by 𝒩′₀ the subset of numbers from 𝒩′ with an even number of prime factors, and by 𝒩′₁ the subset of numbers from 𝒩′ with an odd number of prime factors. We define the counting functions

E′(x) = #{n ≤ x : n ∈ 𝒩′₀}, O′(x) = #{n ≤ x : n ∈ 𝒩′₁}.

Karatsuba proved that for x → ∞, the asymptotic formula

O′(x) − E′(x) ∼ C·x / (ln x)^(2 − K/φ(a))

is valid, where C is a positive constant and φ is Euler's totient function.

He also showed that analogous theorems can be proved for other sets of natural numbers, for example for numbers representable as a sum of two squares, and that sets of natural numbers all of whose prime factors belong to the given progressions display analogous asymptotic behavior.

The Karatsuba theorem was later generalized to the case in which the excluded primes form a certain infinite set.

The Karatsuba phenomenon is illustrated by the following example. Consider the natural numbers whose canonical representation includes no primes belonging to the progression 4k + 3, k = 0, 1, 2, …. If E*(x) and O*(x) count the numbers n ≤ x in this set with an even and an odd number of prime factors, respectively, then the phenomenon is expressed by the formula

O*(x) − E*(x) ∼ C·x / (ln x)^(3/2),

where C is a positive constant.
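The imbalance in this example can be observed numerically. The following sketch (the cutoff x = 100,000 and all names are our own choices) counts the integers up to x that avoid prime factors ≡ 3 (mod 4), split by the parity of the number of prime factors:

```python
def restricted_parity_counts(x):
    """Among n <= x with no prime factor p = 3 (mod 4), count those with
    an even (E) and odd (O) number of prime factors (with multiplicity)."""
    spf = list(range(x + 1))          # smallest-prime-factor sieve
    for p in range(2, int(x**0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, x + 1, p):
                if spf[m] == m:
                    spf[m] = p
    even = odd = 0
    for n in range(1, x + 1):
        k, m, allowed = 0, n, True
        while m > 1:
            p = spf[m]
            if p % 4 == 3:            # excluded progression 3, 7, 11, 15, ...
                allowed = False
                break
            m //= p
            k += 1
        if allowed:
            if k % 2 == 0:
                even += 1
            else:
                odd += 1
    return even, odd

E, O = restricted_parity_counts(100_000)
print(E, O, O - E)
```

Already at this modest range the odd-factor count visibly exceeds the even-factor count, in contrast with the near-balance of the unrestricted case.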

Notes

  1. Tao, Terence (2007-06-05). "Open question: The parity problem in sieve theory". Retrieved 2008-08-11.
  2. Cojocaru, Alina Carmen; M. Ram Murty (2005). An introduction to sieve methods and their applications. London Mathematical Society Student Texts. Vol. 66. Cambridge University Press. ISBN 0-521-61275-6.
  3. Friedlander, John; Henryk Iwaniec (1997-02-18). "Using a parity-sensitive sieve to count prime values of a polynomial". Proceedings of the National Academy of Sciences. 94 (4): 1054–1058. Bibcode:1997PNAS...94.1054F. doi:10.1073/pnas.94.4.1054. PMC 19742. PMID 11038598.
  4. Friedlander, John; Henryk Iwaniec (1998). "Asymptotic sieve for primes". Annals of Mathematics. 148 (3): 1041–1065. arXiv:math/9811186. Bibcode:1998math.....11186F. doi:10.2307/121035. JSTOR 121035. S2CID 11574656.
  5. Harman, Glyn (2007). Prime-detecting sieves. London Mathematical Society Monographs. Vol. 33. Princeton University Press. pp. 45, 335. ISBN 978-0-691-12437-7. Zbl 1220.11118.
  6. Karatsuba, A. A. (2011). "A property of the set of prime numbers". Russian Mathematical Surveys. 66 (2): 209–220. Bibcode:2011RuMaS..66..209K. doi:10.1070/RM2011v066n02ABEH004739.
  7. Karatsuba, A. A. (2011). "A property of the set of primes as a multiplicative basis of natural numbers". Doklady Mathematics. 84 (1): 1–4.
  8. Landau, E. (1912). "Über die Anzahl der Gitterpunkte in gewissen Bereichen". Gött. Nachricht.: 687–771.
