In cryptography, the Rabin signature algorithm is a method of digital signature originally proposed by Michael O. Rabin in 1978.[1][2][3]
The Rabin signature algorithm was one of the first digital signature schemes proposed. By introducing the use of hashing as an essential step in signing, it was the first design to meet what is now the modern standard of security against forgery, existential unforgeability under chosen-message attack, assuming suitably scaled parameters.
Rabin signatures resemble RSA signatures with exponent $e = 2$, but this leads to qualitative differences that enable more efficient implementation[4] and a security guarantee relative to the difficulty of integer factorization,[2][3][5] which has not been proven for RSA. However, Rabin signatures have seen relatively little use or standardization outside IEEE P1363[6] in comparison to RSA signature schemes such as RSASSA-PKCS1-v1_5 and RSASSA-PSS.
The Rabin signature scheme is parametrized by a randomized hash function $H(m, u)$ of a message $m$ and a $k$-bit randomization string $u$.
Security against any adversary defined generically in terms of a hash function $H$ (i.e., security in the random oracle model) follows from the difficulty of factoring $n$: Any such adversary with high probability of success at forgery can, with nearly as high probability, find two distinct square roots $x_1$ and $x_2$ of a random integer $c$ modulo $n$. If $x_1 \not\equiv \pm x_2 \pmod{n}$, then $\gcd(x_1 - x_2, n)$ is a nontrivial factor of $n$, since $x_1^2 \equiv x_2^2 \equiv c \pmod{n}$ so $n \mid x_1^2 - x_2^2 = (x_1 + x_2)(x_1 - x_2)$ but $n \nmid x_1 + x_2$ and $n \nmid x_1 - x_2$.[3] Formalizing the security in modern terms requires filling in some additional details, such as the codomain of $H$; if we set a standard size $K$ for the prime factors, $2^{K-1} < p < q < 2^K$, then we might specify $H\colon \{0,1\}^* \times \{0,1\}^k \to \{0,1\}^{K-1}$.[5]
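The factoring step in this argument is mechanical, and a toy sketch makes it concrete. The following Python fragment (with the illustrative modulus $n = 7 \cdot 11$, an assumption chosen for readability, not a real parameter size) recovers a factor of $n$ from two essentially distinct square roots of the same residue:

```python
from math import gcd

# Toy sketch of the reduction: two square roots x1, x2 of the same residue
# c modulo n, with x1 != +/-x2 (mod n), expose a nontrivial factor of n.
n = 7 * 11                  # illustrative modulus; real moduli are enormous
x1, x2 = 2, 9               # 2^2 = 4 (mod 77) and 9^2 = 81 = 4 (mod 77)
assert pow(x1, 2, n) == pow(x2, 2, n)
assert (x1 - x2) % n != 0 and (x1 + x2) % n != 0
# n divides (x1 - x2)(x1 + x2) but neither factor alone, so the gcd splits n.
p = gcd(x1 - x2, n)
print(p, n // p)            # -> 7 11
```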
Randomization of the hash function was introduced to allow the signer to find a quadratic residue, but randomized hashing for signatures later became relevant in its own right for tighter security theorems[3] and resilience to collision attacks on fixed hash functions.[7][8][9]
The quantity $b$ in the public key adds no security, since any algorithm to solve congruences $x(x + b) \equiv c \pmod{n}$ for $x$ given $b$ and $c$ can be trivially used as a subroutine in an algorithm to compute square roots modulo $n$ and vice versa, so implementations can safely set $b = 0$ for simplicity; $b$ was discarded altogether in treatments after the initial proposal.[10][3][6][4] After removing $b$, the equations for $x_p$ and $x_q$ in the signing algorithm become:

$x_p := \pm c^{(p+1)/4} \bmod p$
$x_q := \pm c^{(q+1)/4} \bmod q$

where $c = H(m, u)$; the exponents $(p+1)/4$ and $(q+1)/4$ are integers because the primes are chosen with $p \equiv q \equiv 3 \pmod{4}$.
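As an illustration of this simplified scheme with $b = 0$, here is a minimal Python sketch. SHA-256 over the concatenation of $m$ and $u$ stands in for the randomized hash $H$, and the tiny primes are assumptions chosen for readability; this is a sketch of the signing and verification equations, not a vetted implementation:

```python
import hashlib
import secrets

p, q = 7027, 7043    # toy primes with p = q = 3 (mod 4); real keys are far larger
n = p * q

def H(m: bytes, u: bytes) -> int:
    # SHA-256 over m || u stands in for the randomized hash H(m, u).
    return int.from_bytes(hashlib.sha256(m + u).digest(), 'big') % n

def sign(m: bytes) -> tuple[bytes, int]:
    while True:
        u = secrets.token_bytes(16)      # fresh randomization string per trial
        c = H(m, u)
        # With p = 3 (mod 4), c^((p+1)/4) mod p is a square root of c
        # whenever c is a quadratic residue mod p; likewise for q.
        xp = pow(c, (p + 1) // 4, p)
        xq = pow(c, (q + 1) // 4, q)
        if (xp * xp - c) % p == 0 and (xq * xq - c) % q == 0:
            break                        # c is a square mod n; ~4 trials expected
    # Combine the roots mod p and mod q into a root mod n via the CRT.
    x = (xp * q * pow(q, -1, p) + xq * p * pow(p, -1, q)) % n
    return u, x

def verify(m: bytes, u: bytes, x: int) -> bool:
    return (x * x - H(m, u)) % n == 0

u, x = sign(b'example message')
assert verify(b'example message', u, x)
```

The retry loop is the cost of the plain square root: a random residue is a square modulo both primes only about a quarter of the time, which is exactly what the Rabin–Williams tweak below eliminates.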
The Rabin signature scheme was later tweaked by Williams in 1980[10] to choose $p \equiv 3 \pmod{8}$ and $q \equiv 7 \pmod{8}$, and to replace a square root $x$ by a tweaked square root $(e, f, x)$, with $e = \pm 1$ and $f \in \{1, 2\}$, so that a signature instead satisfies $e f x^2 \equiv H(m, u) \pmod{n}$, which allows the signer to create a signature in a single trial without sacrificing security. This variant is known as Rabin–Williams.[4][6]
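Under the same toy conventions, here is a sketch of the tweaked square root: the two Legendre symbols of $c = H(m, u)$ determine $e$ and $f$ outright, so the retry loop of the previous sketch disappears. The parameters below are again illustrative assumptions:

```python
p, q = 7019, 7207    # toy primes with p = 3 (mod 8) and q = 7 (mod 8)
n = p * q

def tweaked_sqrt(c: int) -> tuple[int, int, int]:
    """Return (e, f, x) with e*f*x^2 = c (mod n), assuming gcd(c, n) == 1."""
    ap = 1 if pow(c, (p - 1) // 2, p) == 1 else -1   # Legendre symbol of c mod p
    aq = 1 if pow(c, (q - 1) // 2, q) == 1 else -1   # Legendre symbol of c mod q
    # chi(-1) = -1 modulo both primes, while chi(2) = -1 mod p but +1 mod q,
    # so the four (e, f) pairs cover the four sign patterns exactly once.
    f = 1 if ap == aq else 2
    e = 1 if aq == 1 else -1
    cc = c * pow(e * f, -1, n) % n                   # now a square modulo n
    xp = pow(cc, (p + 1) // 4, p)
    xq = pow(cc, (q + 1) // 4, q)
    x = (xp * q * pow(q, -1, p) + xq * p * pow(p, -1, q)) % n
    return e, f, x

c = 1234567                      # stand-in for H(m, u)
e, f, x = tweaked_sqrt(c)
assert e * f * x * x % n == c    # single trial, no retries
```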
Further variants allow tradeoffs between signature size and verification speed, partial message recovery, signature compression (down to one-half size), and public key compression (down to one-third size), still without sacrificing security.[4]
Variants without the hash function have been published in textbooks,[11][12] crediting Rabin for exponent 2 but not for the use of a hash function. These variants are trivially broken; for example, the signature $x = 2$ can be forged by anyone as a valid signature on the message $m = 4$ if the signature verification equation is $x^2 \equiv m \pmod{n}$ instead of $x^2 \equiv H(m, u) \pmod{n}$.
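The forgery takes one line and no secrets, as a toy check shows (the modulus below reuses the illustrative primes from the earlier sketches; any public modulus would do):

```python
n = 7027 * 7043              # any public modulus; the forger knows no private key
m, x = 4, 2                  # forged "message" and "signature"
assert x * x % n == m % n    # accepted by the hashless check x^2 = m (mod n)
```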
In the original paper,[2] the hash function $H(m, u)$ was written with the notation $C(MU)$, with $C$ for compression, and using juxtaposition to denote concatenation of $M$ and $U$ as bit strings:
By convention, when wishing to sign a given message, $M$, [the signer] adds as suffix a word $U$ of an agreed upon length $k$. The choice of $U$ is randomized each time a message is to be signed. The signer now compresses $M_1 = MU$ by a hashing function to a word $C(M_1)$, so that as a binary number $C(M_1) \le n$ …
This notation has led to some confusion among later authors who ignored the $C$ part and misunderstood $MU$ to mean multiplication, giving the misapprehension of a trivially broken signature scheme.[13]