In mathematics, the sieve of Sundaram is a variant of the sieve of Eratosthenes, a simple deterministic algorithm for finding all the prime numbers up to a specified integer. It was discovered by Indian student S. P. Sundaram in 1934. [1] [2]
The sieve starts with a list of the integers from 1 to n. From this list, all numbers of the form i + j + 2ij are removed, where i and j are positive integers such that 1 ≤ i ≤ j and i + j + 2ij ≤ n. The remaining numbers are doubled and incremented by one, giving a list of the odd prime numbers (that is, all primes except 2) below 2n + 2.
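For example, with n = 7 the numbers removed are 4 (= 1 + 1 + 2·1·1) and 7 (= 1 + 2 + 2·1·2); the remaining numbers 1, 2, 3, 5, 6 are doubled and incremented by one to give 3, 5, 7, 11, 13, which are exactly the odd primes below 16.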
The sieve of Sundaram sieves out the composite numbers just as the sieve of Eratosthenes does, but even numbers are not considered; the work of "crossing out" the multiples of 2 is done by the final double-and-increment step. Whenever Eratosthenes' method would cross out k different multiples of a prime 2i + 1, Sundaram's method crosses out i + j(2i + 1) for 1 ≤ j ≤ ⌊k/2⌋.
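For the prime 3 (i = 1), for instance, the crossed-out numbers are 1 + 3j = 4, 7, 10, ..., which after the double-and-increment step correspond to 9, 15, 21, ..., exactly the odd multiples of 3.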
If we start with integers from 1 to n, the final list contains only odd integers from 3 to 2n + 1. From this final list, some odd integers have been excluded; we must show these are precisely the composite odd integers less than 2n + 2.
Let q be an odd integer of the form 2k + 1. Then, q is excluded if and only if k is of the form i + j + 2ij, that is, q = 2(i + j + 2ij) + 1. Expanding, q = 4ij + 2i + 2j + 1 = (2i + 1)(2j + 1).
So, an odd integer is excluded from the final list if and only if it has a factorization of the form (2i + 1)(2j + 1) with i, j ≥ 1, which is to say, if it has a non-trivial odd factor. Therefore the list must be composed of exactly the set of odd prime numbers less than 2n + 2.
def sieve_of_Sundaram(n):
    """The sieve of Sundaram is a simple deterministic algorithm for finding
    all the prime numbers up to a specified integer."""
    k = (n - 2) // 2
    integers_list = [True] * (k + 1)               # index i represents the odd number 2*i + 1
    for i in range(1, k + 1):
        j = i
        while i + j + 2 * i * j <= k:
            integers_list[i + j + 2 * i * j] = False   # cull numbers of the form i + j + 2ij
            j += 1
    if n > 2:
        print(2, end=' ')
    for i in range(1, k + 1):
        if integers_list[i]:
            print(2 * i + 1, end=' ')              # remaining entries are doubled and incremented
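For example, a call such as sieve_of_Sundaram(30) prints 2 3 5 7 11 13 17 19 23 29.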
The above obscure-but-commonly-implemented Python version of the Sieve of Sundaram hides the true complexity of the algorithm for the following reasons:
1. The outer culling loop runs over the entire array (i from 1 to k), although base values for which even the first cull i + i + 2·i·i exceeds k can remove nothing; the loop only needs to run up to the index corresponding to the square root of n.
2. The starting point of each culling sequence, i + j + 2ij with j = i, is not recognized as the index representing the square of the odd base value 2i + 1.
3. The inner loop recomputes i + j + 2ij with multiplications at every step instead of simply advancing by the constant stride 2i + 1 from that starting index, which obscures how many operations each base value actually contributes.
The following Python code in the same style resolves the above three issues, as well as converting the code to a prime-counting function that also displays the total number of composite-culling operations:
from math import isqrt

def sieve_of_Sundaram(n):
    """The sieve of Sundaram is a simple deterministic algorithm for finding
    all the prime numbers up to a specified integer."""
    if n < 3:
        if n < 2:
            return 0
        else:
            return 1
    k = (n - 3) // 2 + 1
    integers_list = [True for i in range(k)]
    ops = 0
    for i in range((isqrt(n) - 3) // 2 + 1):
        # if integers_list[i]: # adding this condition turns it into a SoE!
        p = 2 * i + 3
        s = (p * p - 3) // 2  # compute cull start
        for j in range(s, k, p):
            integers_list[j] = False
            ops += 1
    print("Total operations: ", ops, ";", sep='')
    count = 1
    for i in range(k):
        if integers_list[i]:
            count += 1
    print("Found ", count, " primes to ", n, ".", sep='')
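For example, a call such as sieve_of_Sundaram(100) performs 30 culling operations and reports "Total operations: 30;" followed by "Found 25 primes to 100."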
The commented-out line is all that is necessary to convert the Sieve of Sundaram to the Odds-Only Sieve of Eratosthenes; this clarifies that the only difference between these two algorithms is that the Sieve of Sundaram culls composite numbers using all odd numbers as the base values, whereas the Odds-Only Sieve of Eratosthenes uses only the odd primes as base values, with both ranges of base values bounded to the square root of the range.
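As a self-contained sketch of that conversion (the function name here is chosen for illustration, and the counting output is kept so the culling operations of the two algorithms can be compared directly):

from math import isqrt

def odds_only_sieve_of_Eratosthenes(n):
    """Count the primes up to n, culling only on odd base values that are still prime."""
    if n < 3:
        return 0 if n < 2 else 1
    k = (n - 3) // 2 + 1
    integers_list = [True] * k
    ops = 0
    for i in range((isqrt(n) - 3) // 2 + 1):
        if integers_list[i]:          # the only difference from the Sieve of Sundaram version
            p = 2 * i + 3             # odd base value
            s = (p * p - 3) // 2      # culling starts at the index representing p squared
            for j in range(s, k, p):
                integers_list[j] = False
                ops += 1
    print("Total operations: ", ops, ";", sep='')
    count = 1 + sum(integers_list)    # the 1 accounts for the prime 2
    print("Found ", count, " primes to ", n, ".", sep='')
    return count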
When run for various ranges, it is immediately clear that while, of course, the resulting count of primes for a given range is identical between the two algorithms, the number of culling operations is much higher for the Sieve of Sundaram and also grows much more quickly with increasing range.
From the above implementation, it is clear that the amount of work done is given approximately by

(n/2) × (1/a + 1/(a + 2) + 1/(a + 4) + ... + 1/b),

where n is the range to be sieved and the interval [a, b] spans the odd base values from 3 up to √n, each term n/(2x) being roughly the number of culling operations performed for the odd base value x. (The culling for each base value actually starts at the index representing the square of that value, which slightly shortens each term, but this difference is negligible for large ranges.)
As the integral of the reciprocal of x is exactly log(x), as the lower value a is relatively very small (close to 1, whose logarithm is close to 0), and as the sum runs over only the odd values of x and is therefore about half of that integral evaluated up to √n, this is about

(n/2) × (1/2) × log(√n) = (n/8) × log(n).
Ignoring the constant factor of 1/8, the asymptotic complexity is clearly O(n log(n)).
In mathematics, integer factorization is the decomposition of a positive integer into a product of integers. Every positive integer greater than 1 is either the product of two or more integer factors greater than 1, in which case it is called a composite number, or it is not, in which case it is called a prime number. For example, 15 is a composite number because 15 = 3 · 5, but 7 is a prime number because it cannot be decomposed in this way. If one of the factors is composite, it can in turn be written as a product of smaller factors, for example 60 = 3 · 20 = 3 · (5 · 4). Continuing this process until every factor is prime is called prime factorization; the result is always unique up to the order of the factors by the prime factorization theorem.
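A minimal Python sketch of this process, using trial division (the function name is an illustrative choice):

def prime_factors(m):
    """Return the prime factorization of m as a list, e.g. 60 -> [2, 2, 3, 5]."""
    factors = []
    d = 2
    while d * d <= m:
        while m % d == 0:          # divide out each factor as often as it occurs
            factors.append(d)
            m //= d
        d += 1
    if m > 1:                      # whatever remains is itself prime
        factors.append(m)
    return factors

print(prime_factors(60))           # [2, 2, 3, 5]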
In mathematics, a Mersenne prime is a prime number that is one less than a power of two. That is, it is a prime number of the form M_n = 2^n − 1 for some integer n. They are named after Marin Mersenne, a French Minim friar, who studied them in the early 17th century. If n is a composite number then so is 2^n − 1. Therefore, an equivalent definition of the Mersenne primes is that they are the prime numbers of the form M_p = 2^p − 1 for some prime p.
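For example, 2^6 − 1 = 63 = 7 × 9 is composite, and it is divisible by both 2^2 − 1 = 3 and 2^3 − 1 = 7, reflecting the factorization 6 = 2 × 3 of the exponent.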
A prime number is a natural number greater than 1 that is not a product of two smaller natural numbers. A natural number greater than 1 that is not prime is called a composite number. For example, 5 is prime because the only ways of writing it as a product, 1 × 5 or 5 × 1, involve 5 itself. However, 4 is composite because it is a product (2 × 2) in which both numbers are smaller than 4. Primes are central in number theory because of the fundamental theorem of arithmetic: every natural number greater than 1 is either a prime itself or can be factorized as a product of primes that is unique up to their order.
In mathematics, the sieve of Eratosthenes is an ancient algorithm for finding all prime numbers up to any given limit.
A composite number is a positive integer that can be formed by multiplying two smaller positive integers. Equivalently, it is a positive integer that has at least one divisor other than 1 and itself. Every positive integer is composite, prime, or the unit 1, so the composite numbers are exactly the numbers that are not prime and not a unit.
A primality test is an algorithm for determining whether an input number is prime. Among other fields of mathematics, it is used for cryptography. Unlike integer factorization, primality tests do not generally give prime factors, only stating whether the input number is prime or not. Factorization is thought to be a computationally difficult problem, whereas primality testing is comparatively easy. Some primality tests prove that a number is prime, while others like Miller–Rabin prove that a number is composite. Therefore, the latter might more accurately be called compositeness tests instead of primality tests.
The Miller–Rabin primality test or Rabin–Miller primality test is a probabilistic primality test: an algorithm which determines whether a given number is likely to be prime, similar to the Fermat primality test and the Solovay–Strassen primality test.
In number theory, an integer q is called a quadratic residue modulo n if it is congruent to a perfect square modulo n; i.e., if there exists an integer x such that

x^2 ≡ q (mod n).
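For example, 2 is a quadratic residue modulo 7 because 3^2 = 9 ≡ 2 (mod 7), whereas 3 is not congruent to any square modulo 7.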
In number theory, a lucky number is a natural number in a set which is generated by a certain "sieve". This sieve is similar to the sieve of Eratosthenes that generates the primes, but it eliminates numbers based on their position in the remaining set, instead of their value.
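A minimal Python sketch of that sieve (the function name and the stopping condition are illustrative choices):

def lucky_numbers(limit):
    """Generate the lucky numbers up to limit by position-based sieving."""
    nums = list(range(1, limit + 1, 2))      # the first pass keeps only the odd numbers
    i = 1
    while i < len(nums) and nums[i] <= len(nums):
        step = nums[i]                       # the next surviving number gives the step size
        del nums[step - 1::step]             # delete every step-th remaining number by position
        i += 1
    return nums

print(lucky_numbers(50))   # [1, 3, 7, 9, 13, 15, 21, 25, 31, 33, 37, 43, 49]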
In mathematics, a Riesel number is an odd natural number k for which k · 2^n − 1 is composite for all natural numbers n. In other words, when k is a Riesel number, all members of the following set are composite:

{k · 2^n − 1 : n ∈ ℕ}.
The AKS primality test is a deterministic primality-proving algorithm created and published by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, computer scientists at the Indian Institute of Technology Kanpur, on August 6, 2002, in an article titled "PRIMES is in P". The algorithm was the first able to determine in polynomial time whether a given number is prime or composite, without relying on mathematical conjectures such as the generalized Riemann hypothesis. The proof is also notable for not relying on the field of analysis. In 2006 the authors received both the Gödel Prize and the Fulkerson Prize for their work.
In number theory, a branch of mathematics, the special number field sieve (SNFS) is a special-purpose integer factorization algorithm. The general number field sieve (GNFS) was derived from it.
In number theory, an n-smooth (or n-friable) number is an integer whose prime factors are all less than or equal to n. For example, a 7-smooth number is a number in which every prime factor is at most 7. Therefore, 49 = 7^2 and 15750 = 2 × 3^2 × 5^3 × 7 are both 7-smooth, while 11 and 702 = 2 × 3^3 × 13 are not 7-smooth. The term seems to have been coined by Leonard Adleman. Smooth numbers are especially important in cryptography, which relies on factorization of integers. 2-smooth numbers are simply the powers of 2, while 5-smooth numbers are also known as regular numbers.
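A minimal Python sketch of such a check (the helper name is an illustrative choice):

def is_smooth(m, n):
    """Return True if every prime factor of m is at most n."""
    for d in range(2, n + 1):
        while m % d == 0:          # divide out all factors up to n
            m //= d
    return m == 1                  # m is n-smooth exactly when nothing larger remains

print(is_smooth(15750, 7), is_smooth(702, 7))   # True False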
In mathematics, the sieve of Atkin is a modern algorithm for finding all prime numbers up to a specified integer. Compared with the ancient sieve of Eratosthenes, which marks off multiples of primes, the sieve of Atkin does some preliminary work and then marks off multiples of squares of primes, thus achieving a better theoretical asymptotic complexity. It was created in 2003 by A. O. L. Atkin and Daniel J. Bernstein.
In computational number theory, a variety of algorithms make it possible to generate prime numbers efficiently. These are used in various applications, for example hashing, public-key cryptography, and the search for prime factors of large numbers.
In number theory, a polite number is a positive integer that can be written as the sum of two or more consecutive positive integers. A positive integer which is not polite is called impolite. The impolite numbers are exactly the powers of two, and the polite numbers are the natural numbers that are not powers of two.
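For example, 9 is polite because 9 = 4 + 5 = 2 + 3 + 4, while 8, a power of two, cannot be written as a sum of two or more consecutive positive integers.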
Wheel factorization is a method for generating a sequence of natural numbers by repeated additions, as determined by a number of the first few primes, so that the generated numbers are coprime with these primes, by construction.
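A minimal Python sketch of a 2-3-5 wheel, which generates the numbers greater than 5 that are coprime to 2, 3, and 5 by cycling through a fixed list of increments (the generator name and starting point are illustrative choices):

from itertools import islice

def wheel235():
    """Yield the numbers greater than 5 that are coprime to 2, 3 and 5, by repeated additions."""
    increments = [4, 2, 4, 2, 4, 6, 2, 6]   # gaps between 7, 11, 13, 17, 19, 23, 29, 31, 37, ...
    n = 7
    while True:
        for gap in increments:
            yield n
            n += gap

print(list(islice(wheel235(), 12)))   # [7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]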
In mathematics, the Lucas–Lehmer–Riesel test is a primality test for numbers of the form N = k · 2^n − 1 with odd k < 2^n. The test was developed by Hans Riesel and is based on the Lucas–Lehmer primality test. It is the fastest deterministic algorithm known for numbers of that form. For numbers of the form N = k · 2^n + 1, either application of Proth's theorem or one of the deterministic proofs described in Brillhart–Lehmer–Selfridge 1975 is used.
In mathematics, the sieve of Pritchard is an algorithm for finding all prime numbers up to a specified bound. Like the ancient sieve of Eratosthenes, it has a simple conceptual basis in number theory. It is especially suited to quick hand computation for small bounds.