Pollard's rho algorithm for logarithms is an algorithm introduced by John Pollard in 1978 to solve the discrete logarithm problem, analogous to Pollard's rho algorithm to solve the integer factorization problem.
The goal is to compute $\gamma$ such that $\alpha^{\gamma} = \beta$, where $\beta$ belongs to a cyclic group $G$ generated by $\alpha$. The algorithm computes integers $a$, $b$, $A$, and $B$ such that $\alpha^{a}\beta^{b} = \alpha^{A}\beta^{B}$. If the underlying group is cyclic of order $n$, by substituting $\beta$ as $\alpha^{\gamma}$ and noting that two powers are equal if and only if the exponents are equivalent modulo the order of the base, in this case modulo $n$, we get that $\gamma$ is one of the solutions of the equation $(B - b)\gamma \equiv (a - A) \pmod{n}$. Solutions to this equation are easily obtained using the extended Euclidean algorithm.
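To make that last step concrete, here is a brief sketch (standard number theory, not spelled out in the text above) of how the extended Euclidean algorithm yields the solutions of $(B-b)\gamma \equiv (a-A) \pmod{n}$. With $d = \gcd(B-b,\, n)$, the congruence is solvable exactly when $d \mid (a-A)$; writing $(B-b)u + nv = d$ for the Bézout coefficient $u$ returned by the extended Euclidean algorithm, the solutions are

$$\gamma \equiv \frac{a-A}{d}\,u \pmod{\frac{n}{d}},$$

that is, $d$ distinct values modulo $n$, spaced $n/d$ apart.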
To find the needed $a$, $b$, $A$, and $B$ the algorithm uses Floyd's cycle-finding algorithm to find a cycle in the sequence $x_i = \alpha^{a_i}\beta^{b_i}$, where the function $f: x_i \mapsto x_{i+1}$ is assumed to be random-looking and thus is likely to enter into a loop of approximate length $\sqrt{\pi n/8}$ after $\sqrt{\pi n/8}$ steps. One way to define such a function is to use the following rules: partition $G$ into three disjoint subsets $S_0$, $S_1$, and $S_2$ of approximately equal size using a hash function. If $x_i$ is in $S_1$ then double both $a$ and $b$; if $x_i \in S_2$ then increment $a$; if $x_i \in S_0$ then increment $b$.
Let $G$ be a cyclic group of order $n$, and given $\alpha, \beta \in G$ and a partition $G = S_0 \cup S_1 \cup S_2$, let $f: G \to G$ be the map

$$f(x) = \begin{cases} \beta x, & x \in S_0 \\ x^2, & x \in S_1 \\ \alpha x, & x \in S_2 \end{cases}$$

and define maps $g: G \times \mathbb{Z} \to \mathbb{Z}$ and $h: G \times \mathbb{Z} \to \mathbb{Z}$ by

$$g(x,k) = \begin{cases} k, & x \in S_0 \\ 2k \pmod{n}, & x \in S_1 \\ k+1 \pmod{n}, & x \in S_2 \end{cases} \qquad h(x,k) = \begin{cases} k+1 \pmod{n}, & x \in S_0 \\ 2k \pmod{n}, & x \in S_1 \\ k, & x \in S_2 \end{cases}$$
input:  a: a generator of G
        b: an element of G
output: an integer x such that a^x = b, or failure

Initialise i ← 0, a0 ← 0, b0 ← 0, x0 ← 1 ∈ G

loop
    i ← i + 1
    xi ← f(xi−1),  ai ← g(xi−1, ai−1),  bi ← h(xi−1, bi−1)
    x2i−1 ← f(x2i−2),  a2i−1 ← g(x2i−2, a2i−2),  b2i−1 ← h(x2i−2, b2i−2)
    x2i ← f(x2i−1),  a2i ← g(x2i−1, a2i−1),  b2i ← h(x2i−1, b2i−1)
while xi ≠ x2i

r ← bi − b2i
if r = 0 return failure
return r^−1 (a2i − ai) mod n
Consider, for example, the group $G = \mathbb{Z}^{*}_{1019}$ generated by 2 modulo $N = 1019$ (the order of the group is $n = 1018$, and 2 generates the group of units modulo 1019). The algorithm is implemented by the following C++ program:
#include <stdio.h>

const int n = 1018, N = n + 1;  /* N = 1019 -- prime */
const int alpha = 2;            /* generator */
const int beta = 5;             /* 2^{10} = 1024 = 5 (N) */

/* one iteration step: update x together with its exponents a, b, where x = alpha^a * beta^b (mod N) */
void new_xab(int& x, int& a, int& b) {
    switch (x % 3) {
        case 0: x = x * x     % N;  a = a * 2   % n;  b = b * 2   % n;  break;
        case 1: x = x * alpha % N;  a = (a + 1) % n;                    break;
        case 2: x = x * beta  % N;                    b = (b + 1) % n;  break;
    }
}

int main(void) {
    int x = 1, a = 0, b = 0;   /* tortoise */
    int X = x, A = a, B = b;   /* hare: advances two steps per iteration (Floyd's cycle-finding) */
    for (int i = 1; i < n; ++i) {
        new_xab(x, a, b);
        new_xab(X, A, B);
        new_xab(X, A, B);
        printf("%3d %4d %3d %3d %4d %3d %3d\n", i, x, a, b, X, A, B);
        if (x == X) break;
    }
    return 0;
}
The results are as follows (edited):
  i     x    a    b      X    A    B
------------------------------------
  1     2    1    0     10    1    1
  2    10    1    1    100    2    2
  3    20    2    1   1000    3    3
  4   100    2    2    425    8    6
  5   200    3    2    436   16   14
  6  1000    3    3    284   17   15
  7   981    4    3    986   17   17
  8   425    8    6    194   17   19
....................................
 48   224  680  376     86  299  412
 49   101  680  377    860  300  413
 50   505  680  378    101  300  415
 51  1010  681  378   1010  301  416
That is $2^{681} 5^{378} = 2^{301} 5^{416} \pmod{1019}$ and so $(416 - 378)\gamma \equiv 681 - 301 \pmod{1018}$, for which $\gamma_1 = 10$ is a solution, as expected. As $n = 1018$ is not prime, there is another solution $\gamma_2 = 519$, for which $2^{519} = 1014 = -5 \pmod{1019}$ holds.
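As an illustration (a sketch, not part of the article's program; the helper names ext_gcd and mod_pow are our own), the following C++ fragment solves $38\gamma \equiv 380 \pmod{1018}$ with the extended Euclidean algorithm and checks both candidates against $2^{\gamma} \bmod 1019$:

#include <stdio.h>

/* returns g = gcd(a, b) and Bezout coefficients x, y with a*x + b*y = g */
long long ext_gcd(long long a, long long b, long long& x, long long& y) {
    if (b == 0) { x = 1; y = 0; return a; }
    long long x1, y1;
    long long g = ext_gcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - (a / b) * y1;
    return g;
}

/* modular exponentiation by repeated squaring */
long long mod_pow(long long base, long long exp, long long m) {
    long long result = 1 % m;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = result * base % m;
        base = base * base % m;
        exp >>= 1;
    }
    return result;
}

int main(void) {
    const long long n = 1018, N = 1019;
    long long r = 416 - 378;   /* B - b from the last table row */
    long long d = 681 - 301;   /* a - A from the last table row */
    long long x, y;
    long long g = ext_gcd(r, n, x, y);           /* g = gcd(38, 1018) = 2 */
    if (d % g != 0) { printf("no solution\n"); return 1; }
    long long gamma0 = ((d / g) % (n / g) * (x % (n / g)) % (n / g) + n / g) % (n / g);
    for (long long k = 0; k < g; ++k) {          /* g candidates, spaced n/g apart */
        long long gamma = gamma0 + k * (n / g);
        printf("candidate gamma = %lld, 2^gamma mod %lld = %lld\n",
               gamma, N, mod_pow(2, gamma, N));  /* only gamma = 10 yields 5 */
    }
    return 0;
}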
The running time is approximately $\mathcal{O}(\sqrt{n})$. If used together with the Pohlig–Hellman algorithm, the running time of the combined algorithm is $\mathcal{O}(\sqrt{p})$, where $p$ is the largest prime factor of $n$.
In probability theory and statistics, the gamma distribution is a versatile two-parameter family of continuous probability distributions. The exponential distribution, Erlang distribution, and chi-squared distribution are special cases of the gamma distribution. There are two equivalent parameterizations in common use: one with a shape parameter $k$ and a scale parameter $\theta$, and one with a shape parameter $\alpha = k$ and a rate parameter $\beta = 1/\theta$.
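For concreteness, the two parameterizations correspond to the following standard density functions (a standard fact, not stated in the excerpt above):

$$f(x; k, \theta) = \frac{x^{k-1} e^{-x/\theta}}{\Gamma(k)\,\theta^{k}}, \qquad f(x; \alpha, \beta) = \frac{\beta^{\alpha} x^{\alpha-1} e^{-\beta x}}{\Gamma(\alpha)}, \qquad x > 0,$$

with $\alpha = k$ and $\beta = 1/\theta$.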
In mathematics, the beta function, also called the Euler integral of the first kind, is a special function that is closely related to the gamma function and to binomial coefficients. It is defined by the integral $\mathrm{B}(z_1, z_2) = \int_0^1 t^{z_1-1}(1-t)^{z_2-1}\,dt$ for complex inputs $z_1, z_2$ with $\operatorname{Re}(z_1), \operatorname{Re}(z_2) > 0$.
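The relationship to the gamma function mentioned above is the standard identity (not quoted in the excerpt itself):

$$\mathrm{B}(z_1, z_2) = \frac{\Gamma(z_1)\,\Gamma(z_2)}{\Gamma(z_1 + z_2)}.$$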
Lagrange's four-square theorem, also known as Bachet's conjecture, states that every natural number can be represented as a sum of four non-negative integer squares. That is, the squares form an additive basis of order four: $p = a^2 + b^2 + c^2 + d^2$, where the four numbers $a, b, c, d$ are integers. For illustration, 3, 31, and 310 (the last in several ways) can be represented as the sum of four squares as follows: $3 = 1^2 + 1^2 + 1^2 + 0^2$, $31 = 5^2 + 2^2 + 1^2 + 1^2$, and $310 = 17^2 + 4^2 + 2^2 + 1^2$.
The Cayley–Purser algorithm was a public-key cryptography algorithm published in early 1999 by 16-year-old Irishwoman Sarah Flannery, based on an unpublished work by Michael Purser, founder of Baltimore Technologies, a Dublin data security company. Flannery named it for mathematician Arthur Cayley. It has since been found to be flawed as a public-key algorithm, but was the subject of considerable media attention.
In probability theory, a distribution is said to be stable if a linear combination of two independent random variables with this distribution has the same distribution, up to location and scale parameters. A random variable is said to be stable if its distribution is stable. The stable distribution family is also sometimes referred to as the Lévy alpha-stable distribution, after Paul Lévy, the first mathematician to have studied it.
In mathematics, Hensel's lemma, also known as Hensel's lifting lemma, named after Kurt Hensel, is a result in modular arithmetic, stating that if a univariate polynomial has a simple root modulo a prime number p, then this root can be lifted to a unique root modulo any higher power of p. More generally, if a polynomial factors modulo p into two coprime polynomials, this factorization can be lifted to a factorization modulo any higher power of p.
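As a small illustration (a sketch under our own assumptions, not drawn from the text above), the following C++ program lifts the simple root $3$ of $x^2 - 2$ modulo $7$ to roots modulo $7^2$ and then $7^4$ using the Newton-style Hensel step $r' = r - f(r)\,f'(r)^{-1}$:

#include <stdio.h>

/* modular exponentiation by repeated squaring */
long long mod_pow(long long base, long long exp, long long m) {
    long long result = 1 % m;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = result * base % m;
        base = base * base % m;
        exp >>= 1;
    }
    return result;
}

int main(void) {
    long long p = 7, r = 3;      /* 3^2 = 9 = 2 (mod 7), a simple root of x^2 - 2 */
    long long m = p;
    for (int step = 0; step < 2; ++step) {
        m = m * m;                           /* quadratic lifting: modulus 7 -> 49 -> 2401 */
        long long f  = (r * r - 2) % m;      /* f(r) mod m */
        long long df = 2 * r % m;            /* f'(r) = 2r, a unit mod m since the root is simple mod p */
        long long phi = m - m / p;           /* Euler's totient of the prime power m */
        long long inv = mod_pow(df, phi - 1, m);   /* f'(r)^{-1} mod m by Euler's theorem */
        r = ((r - f % m * inv) % m + m) % m;       /* Newton/Hensel step */
        printf("root of x^2 - 2 mod %lld: %lld\n", m, r);  /* prints 10 mod 49, then 2166 mod 2401 */
    }
    return 0;
}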
In mathematics, the Eisenstein integers, occasionally also known as Eulerian integers, are the complex numbers of the form $z = a + b\omega$, where $a$ and $b$ are integers and $\omega = \frac{-1 + i\sqrt{3}}{2} = e^{2\pi i/3}$ is a primitive (non-real) cube root of unity.
In general relativity, the Gibbons–Hawking–York boundary term is a term that needs to be added to the Einstein–Hilbert action when the underlying spacetime manifold has a boundary.
In probability theory and statistics, the beta prime distribution is an absolutely continuous probability distribution. If $X$ has a beta distribution, then the odds $\frac{X}{1-X}$ has a beta prime distribution.
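For reference (a standard formula, not given in the excerpt): if $X \sim \mathrm{Beta}(\alpha, \beta)$, the transformed variable $Y = X/(1-X)$ has the beta prime density

$$f(y) = \frac{y^{\alpha-1}\,(1+y)^{-\alpha-\beta}}{\mathrm{B}(\alpha, \beta)}, \qquad y > 0.$$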
Continuous wavelets of compact support can be built which are related to the beta distribution. The process is derived from probability distributions using the blur derivative. These new wavelets have just one cycle, so they are termed unicycle wavelets. They can be viewed as a soft variety of Haar wavelets whose shape is fine-tuned by two parameters $\alpha$ and $\beta$. Closed-form expressions for beta wavelets and scale functions as well as their spectra are derived. Their importance is due to the Central Limit Theorem by Gnedenko and Kolmogorov applied for compactly supported signals.
A ratio distribution is a probability distribution constructed as the distribution of the ratio of random variables having two other known distributions. Given two random variables X and Y, the distribution of the random variable Z that is formed as the ratio Z = X/Y is a ratio distribution.
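A classic worked example (standard, and not taken from the excerpt above): if $X$ and $Y$ are independent standard normal variables, the ratio $Z = X/Y$ follows the standard Cauchy distribution, with density

$$f_Z(z) = \frac{1}{\pi\,(1+z^2)}.$$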
In probability theory and statistics, the normal-inverse-gamma distribution is a four-parameter family of multivariate continuous probability distributions. It is the conjugate prior of a normal distribution with unknown mean and variance.
In continuum mechanics, a compatible deformation tensor field in a body is that unique tensor field that is obtained when the body is subjected to a continuous, single-valued, displacement field. Compatibility is the study of the conditions under which such a displacement field can be guaranteed. Compatibility conditions are particular cases of integrability conditions and were first derived for linear elasticity by Barré de Saint-Venant in 1864 and proved rigorously by Beltrami in 1886.
The multivariate stable distribution is a multivariate probability distribution that is a multivariate generalisation of the univariate stable distribution. The multivariate stable distribution defines linear relations between stable distribution marginals. In the same way as for the univariate case, the distribution is defined in terms of its characteristic function.
In mathematics, the Perrin numbers are a doubly infinite constant-recursive integer sequence with characteristic equation $x^3 = x + 1$. The Perrin numbers bear the same relationship to the Padovan sequence as the Lucas numbers do to the Fibonacci sequence.
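As a small sketch (assuming the conventional starting values $P(0)=3$, $P(1)=0$, $P(2)=2$, which are not stated in the excerpt), the characteristic equation $x^3 = x + 1$ translates into the recurrence $P(n) = P(n-2) + P(n-3)$, which the following C++ fragment iterates:

#include <stdio.h>

int main(void) {
    long long p0 = 3, p1 = 0, p2 = 2;   /* assumed initial values P(0), P(1), P(2) */
    printf("%lld %lld %lld", p0, p1, p2);
    for (int n = 3; n <= 15; ++n) {
        long long next = p0 + p1;       /* P(n) = P(n-3) + P(n-2) */
        printf(" %lld", next);
        p0 = p1; p1 = p2; p2 = next;
    }
    printf("\n");                       /* 3 0 2 3 2 5 5 7 10 12 17 22 ... */
    return 0;
}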
In mathematics, the Kodaira–Spencer map, introduced by Kunihiko Kodaira and Donald C. Spencer, is a map associated to a deformation of a scheme or complex manifold X, taking a tangent space of a point of the deformation space to the first cohomology group of the sheaf of vector fields on X.
The Einstein–Hilbert action for general relativity was first formulated purely in terms of the space-time metric. Taking the metric and affine connection as independent variables in the action principle was first considered by Palatini. It is called a first-order formulation because the variables to vary over involve only up to first derivatives in the action, and so it does not overcomplicate the Euler–Lagrange equations with higher-derivative terms. The tetradic Palatini action is another first-order formulation of the Einstein–Hilbert action in terms of a different pair of independent variables, known as frame fields and the spin connection. The use of frame fields and spin connections is essential in the formulation of a generally covariant fermionic action, which couples fermions to gravity when added to the tetradic Palatini action.
The cyclotomic fast Fourier transform is a type of fast Fourier transform algorithm over finite fields. This algorithm first decomposes a DFT into several circular convolutions, and then derives the DFT results from the circular convolution results. When applied to a DFT over $\mathrm{GF}(2^m)$, this algorithm has a very low multiplicative complexity. In practice, since there usually exist efficient algorithms for circular convolutions with specific lengths, this algorithm is very efficient.
In theoretical physics, the dual graviton is a hypothetical elementary particle that is a dual of the graviton under electric-magnetic duality, as an S-duality, predicted by some formulations of eleven-dimensional supergravity.
This article summarizes several identities in exterior calculus, a mathematical notation used in differential geometry.