Eric Bach

Born: November
Alma mater: University of California, Berkeley; University of Michigan
Fields: Computer science
Institutions: University of Wisconsin–Madison
Doctoral advisor: Manuel Blum
Doctoral students: John Watrous, Victor Shoup

Eric Bach is an American computer scientist who has made contributions to computational number theory.

Bach completed his undergraduate studies at the University of Michigan, Ann Arbor, and received his Ph.D. in computer science from the University of California, Berkeley, in 1984 under the supervision of Manuel Blum.[1] He is currently a professor in the Computer Science Department of the University of Wisconsin–Madison.

Among other work, he gave explicit bounds for the Chebotarev density theorem, which imply that if one assumes the generalized Riemann hypothesis then the multiplicative group (ℤ/nℤ)* is generated by its elements smaller than 2(log n)².[2] This result shows that the generalized Riemann hypothesis implies tight bounds for the necessary run-time of the deterministic version of the Miller–Rabin primality test. Bach also did some of the first work on pinning down the actual expected run-time of Pollard's rho method, where earlier work had relied on heuristic estimates and empirical data.[3] He is the namesake of Bach's algorithm for generating random factored numbers.
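
For illustration, the following is a minimal Python sketch, assuming the generalized Riemann hypothesis, of the deterministic Miller–Rabin test restricted to bases below 2(ln n)²; the function names are illustrative and not taken from Bach's papers.

    import math

    def _is_strong_probable_prime(n, a):
        # Write n - 1 = 2^s * d with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            return True
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                return True
        return False

    def is_prime_grh(n):
        """Deterministic Miller-Rabin; correctness relies on GRH via Bach's bound."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7):
            if n % p == 0:
                return n == p
        # Under GRH it suffices to test every base a <= 2*(ln n)^2.
        limit = min(n - 1, int(2 * math.log(n) ** 2))
        return all(_is_strong_probable_prime(n, a) for a in range(2, limit + 1))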

Related Research Articles

In number theory, integer factorization is the decomposition of a positive integer into a product of integers. Every positive integer greater than 1 is either the product of two or more integer factors, in which case it is called a composite number, or it is not, in which case it is called a prime number. For example, 15 is a composite number because 15 = 3 · 5, but 7 is a prime number because it cannot be decomposed in this way. If one of the factors is composite, it can in turn be written as a product of smaller factors, for example 60 = 3 · 20 = 3 · (5 · 4). Continuing this process until every factor is prime is called prime factorization; the result is always unique up to the order of the factors by the prime factorization theorem.
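
As a concrete illustration of this repeated-factoring process, here is a short Python sketch of prime factorization by trial division (the helper name is illustrative only):

    def prime_factors(n):
        """Return the prime factorization of n >= 2 as a list, e.g. 60 -> [2, 2, 3, 5]."""
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:   # divide out each prime factor completely
                factors.append(d)
                n //= d
            d += 1
        if n > 1:               # whatever remains is itself prime
            factors.append(n)
        return factors

    assert prime_factors(60) == [2, 2, 3, 5]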

Prime number: number divisible only by 1 or itself

A prime number is a natural number greater than 1 that is not a product of two smaller natural numbers. A natural number greater than 1 that is not prime is called a composite number. For example, 5 is prime because the only ways of writing it as a product, 1 × 5 or 5 × 1, involve 5 itself. However, 4 is composite because it is a product (2 × 2) in which both numbers are smaller than 4. Primes are central in number theory because of the fundamental theorem of arithmetic: every natural number greater than 1 is either a prime itself or can be factorized as a product of primes that is unique up to their order.
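
This definition can be checked directly by searching for a smaller factor up to √n, as in the simple Python sketch below; practical systems use much faster tests such as Miller–Rabin, discussed further on.

    def is_prime_trial_division(n):
        """Check primality by looking for a divisor d with 1 < d*d <= n."""
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:      # found a smaller factor, so n is composite
                return False
            d += 1
        return True

    assert is_prime_trial_division(5) and not is_prime_trial_division(4)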

The Riemann hypothesis is one of the most important conjectures in mathematics. It is a statement about the zeros of the Riemann zeta function. Various geometrical and arithmetical objects can be described by so-called global L-functions, which are formally similar to the Riemann zeta function. One can then ask the same question about the zeros of these L-functions, yielding various generalizations of the Riemann hypothesis. Many mathematicians believe these generalizations to be true. The only cases of these conjectures that have been proven are those in the algebraic function field setting.

A primality test is an algorithm for determining whether an input number is prime. Among other fields of mathematics, it is used for cryptography. Unlike integer factorization, primality tests do not generally give prime factors, only stating whether the input number is prime or not. Factorization is thought to be a computationally difficult problem, whereas primality testing is comparatively easy. Some primality tests prove that a number is prime, while others, like Miller–Rabin, prove only that a number is composite. Therefore, the latter might more accurately be called compositeness tests instead of primality tests.

The Miller–Rabin primality test or Rabin–Miller primality test is a probabilistic primality test: an algorithm which determines whether a given number is likely to be prime, similar to the Fermat primality test and the Solovay–Strassen primality test.
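
A minimal Python sketch of the randomized test follows; a number reported "probably prime" after k rounds is composite with probability at most 4⁻ᵏ, and the function name is illustrative.

    import random

    def miller_rabin(n, k=20):
        """Probabilistic Miller-Rabin test with k random bases."""
        if n < 4:
            return n in (2, 3)
        if n % 2 == 0:
            return False
        d, s = n - 1, 0
        while d % 2 == 0:       # write n - 1 = 2^s * d with d odd
            d //= 2
            s += 1
        for _ in range(k):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break       # this base gave no evidence of compositeness
            else:
                return False    # a is a witness: n is definitely composite
        return True             # probably prime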

In machine learning, early stopping is a form of regularization used to avoid overfitting when training a learner with an iterative method, such as gradient descent. Such methods update the learner so as to make it better fit the training data with each iteration. Up to a point, this improves the learner's performance on data outside of the training set. Past that point, however, improving the learner's fit to the training data comes at the expense of increased generalization error. Early stopping rules provide guidance as to how many iterations can be run before the learner begins to overfit. Early stopping rules have been employed in many different machine learning methods, with varying amounts of theoretical foundation.
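
A schematic Python sketch of patience-based early stopping is shown below; the step and val_loss callables stand in for a real training step and a validation evaluation, and are assumptions rather than part of any specific library.

    def train_with_early_stopping(step, val_loss, max_epochs=1000, patience=5):
        """Stop when validation loss has not improved for `patience` epochs.

        `step()` runs one training epoch; `val_loss()` returns the current
        loss on held-out validation data.
        """
        best, best_epoch = float("inf"), 0
        for epoch in range(max_epochs):
            step()                      # one iteration of fitting the training data
            loss = val_loss()           # measure fit on data outside the training set
            if loss < best:
                best, best_epoch = loss, epoch
            elif epoch - best_epoch >= patience:
                break                   # no recent improvement: stop before overfitting worsens
        return best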

Analytic number theory: exploring properties of the integers with complex analysis

In mathematics, analytic number theory is a branch of number theory that uses methods from mathematical analysis to solve problems about the integers. It is often said to have begun with Peter Gustav Lejeune Dirichlet's 1837 introduction of Dirichlet L-functions to give the first proof of Dirichlet's theorem on arithmetic progressions. It is well known for its results on prime numbers and additive number theory.

Michael O. Rabin: Israeli mathematician and computer scientist

Michael Oser Rabin is an Israeli mathematician, computer scientist, and recipient of the Turing Award.

Prime-counting function: function representing the number of primes less than or equal to a given number

In mathematics, the prime-counting function is the function counting the number of prime numbers less than or equal to some real number x. It is denoted by π(x) (unrelated to the number π).
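
For small arguments, π(x) can be evaluated directly with a sieve of Eratosthenes, as in this illustrative Python sketch:

    def prime_pi(x):
        """Count the primes less than or equal to x using a sieve of Eratosthenes."""
        if x < 2:
            return 0
        n = int(x)
        sieve = [True] * (n + 1)
        sieve[0] = sieve[1] = False
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                # mark all proper multiples of p as composite
                sieve[p * p :: p] = [False] * len(range(p * p, n + 1, p))
        return sum(sieve)

    assert prime_pi(10) == 4   # the primes 2, 3, 5, 7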

The AKS primality test is a deterministic primality-proving algorithm created and published by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, computer scientists at the Indian Institute of Technology Kanpur, on August 6, 2002, in an article titled "PRIMES is in P". It was the first algorithm able to determine in polynomial time whether a given number is prime or composite without relying on mathematical conjectures such as the generalized Riemann hypothesis. The proof is also notable for not relying on methods from mathematical analysis. In 2006 the authors received both the Gödel Prize and the Fulkerson Prize for their work.

Mertens function: summatory function of the Möbius function

In number theory, the Mertens function is defined for all positive integers n as M(n) = μ(1) + μ(2) + ⋯ + μ(n), the sum of the Möbius function μ over the integers from 1 to n.
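
An illustrative Python sketch computes μ(k) by trial factorization and sums it to obtain M(n); this is suitable only for small n.

    def mobius(k):
        """Möbius function: 0 if k has a squared prime factor, else (-1)^(number of prime factors)."""
        result, d = 1, 2
        while d * d <= k:
            if k % d == 0:
                k //= d
                if k % d == 0:      # squared prime factor
                    return 0
                result = -result
            d += 1
        if k > 1:                   # one remaining prime factor
            result = -result
        return result

    def mertens(n):
        """M(n) = sum of mobius(k) for k = 1 .. n."""
        return sum(mobius(k) for k in range(1, n + 1))

    assert [mertens(n) for n in range(1, 6)] == [1, 0, -1, -1, -2]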

In mathematics and computer science, computational number theory, also known as algorithmic number theory, is the study of computational methods for investigating and solving problems in number theory and arithmetic geometry, including algorithms for primality testing and integer factorization, finding solutions to diophantine equations, and explicit methods in arithmetic geometry. Computational number theory has applications to cryptography, including RSA, elliptic curve cryptography and post-quantum cryptography, and is used to investigate conjectures and open problems in number theory, including the Riemann hypothesis, the Birch and Swinnerton-Dyer conjecture, the ABC conjecture, the modularity conjecture, the Sato-Tate conjecture, and explicit aspects of the Langlands program.

For supervised learning applications in machine learning and statistical learning theory, generalization error is a measure of how accurately an algorithm is able to predict outcome values for previously unseen data. Because learning algorithms are evaluated on finite samples, the evaluation of a learning algorithm may be sensitive to sampling error. As a result, measurements of prediction error on the current data may not provide much information about predictive ability on new data. Generalization error can be minimized by avoiding overfitting in the learning algorithm. The performance of a machine learning algorithm is visualized by plots that show values of estimates of the generalization error through the learning process, which are called learning curves.

In mathematics and computer science, a primality certificate or primality proof is a succinct, formal proof that a number is prime. Primality certificates allow the primality of a number to be rapidly checked without having to run an expensive or unreliable primality test. "Succinct" usually means that the proof should be at most polynomially larger than the number of digits in the number itself.
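
For example, a Pratt-style certificate for a prime n lists a primitive root a together with the prime factors of n − 1; the Python sketch below checks one level of such a certificate (recursively certifying that the listed factors are themselves prime is omitted for brevity, and the function name is illustrative).

    def verify_pratt_step(n, a, prime_factors_of_n_minus_1):
        """Check the Lucas/Pratt condition: a^(n-1) = 1 (mod n) and
        a^((n-1)/q) != 1 (mod n) for every prime q dividing n - 1."""
        if pow(a, n - 1, n) != 1:
            return False
        return all(pow(a, (n - 1) // q, n) != 1
                   for q in set(prime_factors_of_n_minus_1))

    # 7 is prime: 3 is a primitive root mod 7, and 6 = 2 * 3.
    assert verify_pratt_step(7, 3, [2, 3])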

Circuit complexity: model of computational complexity

In theoretical computer science, circuit complexity is a branch of computational complexity theory in which Boolean functions are classified according to the size or depth of the Boolean circuits that compute them. A related notion is the circuit complexity of a recursive language that is decided by a uniform family of circuits.

In applied mathematics, topological data analysis (TDA) is an approach to the analysis of datasets using techniques from topology. Extraction of information from datasets that are high-dimensional, incomplete and noisy is generally challenging. TDA provides a general framework to analyze such data in a manner that is insensitive to the particular metric chosen and provides dimensionality reduction and robustness to noise. Beyond this, it inherits functoriality, a fundamental concept of modern mathematics, from its topological nature, which allows it to adapt to new mathematical tools.

Gary Miller (computer scientist): American computer scientist

Gary Lee Miller is a professor of Computer Science at Carnegie Mellon University, Pittsburgh, United States. In 2003 he won the ACM Paris Kanellakis Award for the Miller–Rabin primality test. He was made an ACM Fellow in 2002 and won the Knuth Prize in 2013.

Riemann hypothesis: conjecture on zeros of the zeta function

In mathematics, the Riemann hypothesis is the conjecture that the Riemann zeta function has its zeros only at the negative even integers and complex numbers with real part 1/2. Many consider it to be the most important unsolved problem in pure mathematics. It is of great interest in number theory because it implies results about the distribution of prime numbers. It was proposed by Bernhard Riemann (1859), after whom it is named.

The Hurwitz quaternion order is a specific order in a quaternion algebra over a suitable number field. The order is of particular importance in Riemann surface theory, in connection with surfaces with maximal symmetry, namely the Hurwitz surfaces. The Hurwitz quaternion order was studied in 1967 by Goro Shimura, but first explicitly described by Noam Elkies in 1998. For an alternative use of the term, see Hurwitz quaternion.

References

  1. "Eric Bach". ACM SIGACT Theoretical Computer Science genealogy database. Archived from the original on November 27, 2005. Retrieved 2008-06-04.
  2. Bach, Eric (1990), "Explicit bounds for primality testing and related problems", Mathematics of Computation , 55 (191): 355–380, doi: 10.2307/2008811 , JSTOR   2008811
  3. Bach, Eric (1991). "Toward a theory of Pollard's rho method" (PDF). Information and Computation . 90 (2): 139–155. doi: 10.1016/0890-5401(91)90001-i . Retrieved March 4, 2015.