Pollard's rho algorithm for logarithms

Pollard's rho algorithm for logarithms is an algorithm introduced by John Pollard in 1978 to solve the discrete logarithm problem, analogous to Pollard's rho algorithm to solve the integer factorization problem.

The goal is to compute $\gamma$ such that $\alpha^{\gamma} = \beta$, where $\beta$ belongs to a cyclic group $G$ generated by $\alpha$. The algorithm computes integers $a$, $b$, $A$, and $B$ such that $\alpha^{a}\beta^{b} = \alpha^{A}\beta^{B}$. If the underlying group is cyclic of order $n$, then by substituting $\alpha^{\gamma}$ for $\beta$ and noting that two powers are equal if and only if the exponents are equivalent modulo the order of the base, in this case modulo $n$, we get that $\gamma$ is one of the solutions of the equation $(B - b)\gamma \equiv (a - A) \pmod{n}$. Solutions to this equation are easily obtained using the extended Euclidean algorithm.
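
A minimal C++ sketch of this last step is given below; the helper names ext_gcd and solve_congruence are chosen purely for illustration, and 64-bit arithmetic is assumed to suffice. It solves a congruence $r\gamma \equiv d \pmod{n}$ with the extended Euclidean algorithm and prints all solutions modulo $n$; the values in main correspond to the worked example further down.

#include <cstdio>

/* Extended Euclidean algorithm: returns g = gcd(a, b) and sets s, t so that a*s + b*t = g. */
long long ext_gcd(long long a, long long b, long long &s, long long &t) {
    if (b == 0) { s = 1; t = 0; return a; }
    long long s1, t1;
    long long g = ext_gcd(b, a % b, s1, t1);
    s = t1;
    t = s1 - (a / b) * t1;
    return g;
}

/* Solve r*gamma = d (mod n) and print every solution in [0, n). */
void solve_congruence(long long r, long long d, long long n) {
    long long s, t;
    long long g = ext_gcd(((r % n) + n) % n, n, s, t);
    if (d % g != 0) { printf("no solution\n"); return; }
    long long m = n / g;               /* solutions are unique modulo m = n/gcd(r,n) */
    long long inv = ((s % m) + m) % m; /* inverse of r/g modulo m                    */
    long long gamma0 = ((d / g) % m) * inv % m;
    for (long long k = 0; k < g; ++k)  /* g solutions, spaced m apart                */
        printf("gamma = %lld\n", (gamma0 + k * m) % n);
}

int main(void) {
    /* (416 - 378)*gamma = 681 - 301 (mod 1018), as in the worked example below */
    solve_congruence(38, 380, 1018);   /* prints gamma = 10 and gamma = 519          */
    return 0;
}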

To find the needed $a$, $b$, $A$, and $B$ the algorithm uses Floyd's cycle-finding algorithm to find a cycle in the sequence $x_i = \alpha^{a_i}\beta^{b_i}$, where the function $f: x_i \mapsto x_{i+1}$ is assumed to be random-looking and thus is likely to enter into a loop of approximate length $\sqrt{\pi n/8}$ after $\sqrt{\pi n/8}$ steps. One way to define such a function is to use the following rules: partition $G$ into three disjoint subsets $S_0$, $S_1$, and $S_2$ of approximately equal size using a hash function. If $x_i$ is in $S_0$ then double both $a$ and $b$; if $x_i \in S_1$ then increment $a$; if $x_i \in S_2$ then increment $b$.

Algorithm

Let $G$ be a cyclic group of order $n$, and given $\alpha, \beta \in G$ and a partition $G = S_0 \cup S_1 \cup S_2$, let $f: G \to G$ be the map

$$f(x) = \begin{cases} x^2 & x \in S_0 \\ \alpha x & x \in S_1 \\ \beta x & x \in S_2 \end{cases}$$

and define maps $g: G \times \mathbb{Z}_n \to \mathbb{Z}_n$ and $h: G \times \mathbb{Z}_n \to \mathbb{Z}_n$ by

$$g(x,k) = \begin{cases} 2k \bmod n & x \in S_0 \\ (k+1) \bmod n & x \in S_1 \\ k & x \in S_2 \end{cases}
\qquad
h(x,k) = \begin{cases} 2k \bmod n & x \in S_0 \\ k & x \in S_1 \\ (k+1) \bmod n & x \in S_2 \end{cases}$$

input: α, a generator of the cyclic group G; β, an element of G
output: an integer γ such that α^γ = β, or failure

    initialise i ← 0, a_0 ← 0, b_0 ← 0, x_0 ← 1 ∈ G
    loop
        i ← i + 1
        x_i ← f(x_{i−1}),  a_i ← g(x_{i−1}, a_{i−1}),  b_i ← h(x_{i−1}, b_{i−1})
        x_{2i−1} ← f(x_{2i−2}),  a_{2i−1} ← g(x_{2i−2}, a_{2i−2}),  b_{2i−1} ← h(x_{2i−2}, b_{2i−2})
        x_{2i} ← f(x_{2i−1}),  a_{2i} ← g(x_{2i−1}, a_{2i−1}),  b_{2i} ← h(x_{2i−1}, b_{2i−1})
    while x_i ≠ x_{2i}
    r ← b_i − b_{2i}
    if r = 0 then return failure
    return r^{−1}(a_{2i} − a_i) mod n

Example

Consider, for example, the group generated by 2 modulo the prime $N = 1019$ (the order of the group is $n = 1018$; 2 generates the group of units modulo 1019). The algorithm is implemented by the following C++ program:

#include <stdio.h>

const int n = 1018, N = n + 1;  /* N = 1019 -- prime     */
const int alpha = 2;            /* generator             */
const int beta  = 5;            /* 2^{10} = 1024 = 5 (N) */

/* One step of the walk: x <- f(x), a <- g(x,a), b <- h(x,b), with S_i = {x : x mod 3 = i}. */
void new_xab(int &x, int &a, int &b) {
    switch (x % 3) {
    case 0: x = x * x     % N;  a =  a*2  % n;  b =  b*2  % n;  break;
    case 1: x = x * alpha % N;  a = (a+1) % n;                  break;
    case 2: x = x * beta  % N;                  b = (b+1) % n;  break;
    }
}

int main(void) {
    int x = 1, a = 0, b = 0;    /* tortoise: one step per iteration  */
    int X = x, A = a, B = b;    /* hare: two steps per iteration     */
    for (int i = 1; i < n; ++i) {
        new_xab(x, a, b);
        new_xab(X, A, B);
        new_xab(X, A, B);
        printf("%3d  %4d %3d %3d  %4d %3d %3d\n", i, x, a, b, X, A, B);
        if (x == X) break;
    }
    return 0;
}

The results are as follows (edited):

  i     x   a   b     X   A   B
 ------------------------------
  1     2   1   0    10   1   1
  2    10   1   1   100   2   2
  3    20   2   1  1000   3   3
  4   100   2   2   425   8   6
  5   200   3   2   436  16  14
  6  1000   3   3   284  17  15
  7   981   4   3   986  17  17
  8   425   8   6   194  17  19
 ..............................
 48   224 680 376    86 299 412
 49   101 680 377   860 300 413
 50   505 680 378   101 300 415
 51  1010 681 378  1010 301 416

That is $2^{681} 5^{378} \equiv 1010 \equiv 2^{301} 5^{416} \pmod{1019}$, and so $(416 - 378)\gamma \equiv 681 - 301 \pmod{1018}$, for which $\gamma = 10$ is a solution as expected. As $n = 1018$ is not prime, there is another solution $\gamma = 519$, for which $2^{519} \equiv 1014 \equiv -5 \pmod{1019}$ holds.
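
Written out, the extended-Euclidean step for this instance is

$$38\gamma \equiv 380 \pmod{1018}, \qquad \gcd(38, 1018) = 2,$$

$$19\gamma \equiv 190 \pmod{509}, \qquad 19^{-1} \equiv 134 \pmod{509},$$

$$\gamma \equiv 190 \cdot 134 \equiv 10 \pmod{509},$$

so the solutions modulo $1018$ are $\gamma = 10$ and $\gamma = 10 + 509 = 519$.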

Complexity

The running time is approximately $\mathcal{O}(\sqrt{n})$. If used together with the Pohlig–Hellman algorithm, the running time of the combined algorithm is $\mathcal{O}(\sqrt{p})$, where $p$ is the largest prime factor of $n$.
