Accumulator (cryptography)

In cryptography, an accumulator is a one-way membership hash function. It allows users to certify that potential candidates are members of a certain set without revealing the individual members of the set. The concept was formally introduced by Josh Benaloh and Michael de Mare in 1993. [1] [2]

Formal definitions

Several formal definitions have been proposed in the literature; this section lists them by proposer, in roughly chronological order. [2]

Benaloh and de Mare (1993)

Benaloh and de Mare define a one-way hash function as a family of functions $h_\ell \colon X_\ell \times Y_\ell \to Z_\ell$, indexed by a size parameter $\ell$, which satisfy the following three properties: [1] [2]

  1. For all $\ell$, $x \in X_\ell$, and $y \in Y_\ell$, one can compute $h_\ell(x, y)$ in time $\operatorname{poly}(\ell, |x|, |y|)$. (Here the "poly" symbol refers to an unspecified, but fixed, polynomial.)
  2. No probabilistic polynomial-time algorithm will, for sufficiently large $\ell$, given the inputs $x \in X_\ell$ and $y, y' \in Y_\ell$, find a value $x' \in X_\ell$ such that $h_\ell(x, y) = h_\ell(x', y')$ with more than negligible probability.
  3. For all $\ell$, $x \in X_\ell$, and $y_1, y_2 \in Y_\ell$, one has $h_\ell(h_\ell(x, y_1), y_2) = h_\ell(h_\ell(x, y_2), y_1)$. (A function that satisfies this property is called quasi-commutative.)

(With the first two properties, one recovers the normal definition of a cryptographic hash function.)

From such a function, one defines the "accumulated hash" of a set $Y = \{y_1, \ldots, y_m\}$ with respect to a starting value $x$ to be $z = h(h(\cdots h(h(x, y_1), y_2) \cdots), y_m)$. The result does not depend on the order of the elements because $h$ is quasi-commutative. [1] [2]

If $y_1, \ldots, y_m$ belong to the users of a cryptosystem, then everyone can compute the accumulated value $z$. Also, the user of $y_i$ can compute the partial accumulated value $z_i$ of all $y_j$ with $j \neq i$. Then $h(z_i, y_i) = z$, so the user can provide the pair $(y_i, z_i)$ to any other party in order to authenticate $y_i$.
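
To make the mechanics concrete, here is a minimal Python sketch of accumulation and authentication, using the RSA-inspired quasi-commutative function $h(x, y) = x^y \bmod n$ discussed under Examples below. The tiny modulus and member values are toy choices for readability, not secure parameters.

```python
# Benaloh–de Mare accumulation with the quasi-commutative function
# h(x, y) = x^y mod n. A member authenticates with the pair (y, z_y),
# where z_y is the accumulation of everyone else's values.

n = 23 * 47            # toy composite modulus (insecure; illustration only)
x0 = 7                 # public starting value

def h(x: int, y: int) -> int:
    """Quasi-commutative: h(h(x, y1), y2) == h(h(x, y2), y1)."""
    return pow(x, y, n)

def accumulate(x: int, ys) -> int:
    """Fold h over a collection of values; order does not matter."""
    for y in ys:
        x = h(x, y)
    return x

members = {3, 5, 11}
z = accumulate(x0, members)                  # accumulated hash of the set

y = 5                                        # one member's value
z_y = accumulate(x0, members - {y})          # partial accumulated value
assert h(z_y, y) == z                        # anyone can check (y, z_y)

# Quasi-commutativity makes z independent of the accumulation order:
assert accumulate(x0, [3, 5, 11]) == accumulate(x0, [11, 5, 3])
```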

Barić and Pfitzmann (1997)

How to realize this basic functionality from a quasi-commutative hash function alone is not immediate from the definition. To fix this, Barić and Pfitzmann gave a slightly more general definition: the notion of an accumulator scheme, consisting of the following components: [2] [3]

  1. Gen: a probabilistic algorithm that takes in two parameters $\lambda$ and $N$ (the security parameter and the number of values that can be securely accumulated, respectively) and returns an appropriate key $k$.
  2. Eval: a probabilistic algorithm that takes in a key $k$ and an accumulation set $Y = \{y_1, \ldots, y_m\}$, where $m \leq N$, and returns an accumulated value $z$ and auxiliary information $\mathrm{aux}$. We insist that Eval must be deterministic for $z$.
  3. Wit: a probabilistic algorithm that takes in a key $k$, a value $y$, an accumulated value $z$ of some set $Y$, and some auxiliary information $\mathrm{aux}$, and returns either a witness $w$ or the special symbol $\bot$. We insist that, if $y \in Y$, Wit returns a witness, and that it otherwise returns $\bot$.
  4. Ver: a deterministic algorithm that takes in a key $k$, a value $y$, a witness $w$, and an accumulated value $z$, and returns a Yes/No value. We insist that if $w$ was generated by running Wit on a tuple $(k, y, z, \mathrm{aux})$, where $(z, \mathrm{aux})$ were generated by running Eval on $(k, Y)$ for some set $Y$ containing $y$, and where $k$ was generated by running Gen, then Ver always returns Yes.

It is relatively easy to see that one can define an accumulator scheme from any quasi-commutative hash function, using the technique shown above. [2]
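
For concreteness, here is a minimal Python sketch of the four algorithms instantiated from the modular-exponentiation hash $h(x, y) = x^y \bmod n$ used in the examples below. Gen is drastically simplified (a real generator would derive a large, properly formed modulus from the security parameter), and all names and parameters are illustrative assumptions.

```python
# Sketch: the four-algorithm interface of an accumulator scheme, built
# from the quasi-commutative function h(x, y) = x^y mod n.
import secrets

def Gen(security_parameter: int, N: int) -> dict:
    # Hypothetical simplification: a fixed toy modulus, random start value.
    n = 23 * 47
    return {"n": n, "x0": secrets.randbelow(n - 2) + 2}

def Eval(k: dict, Y: set):
    z = k["x0"]
    for y in sorted(Y):                  # sorted, so z is deterministic
        z = pow(z, y, k["n"])
    return z, {"Y": frozenset(Y)}        # accumulated value and aux info

def Wit(k: dict, y: int, z: int, aux: dict):
    if y not in aux["Y"]:
        return None                      # stands in for the symbol ⊥
    w, _ = Eval(k, set(aux["Y"]) - {y})  # partial accumulated value
    return w

def Ver(k: dict, y: int, w: int, z: int) -> bool:
    return pow(w, y, k["n"]) == z

k = Gen(128, 3)
z, aux = Eval(k, {3, 5, 11})
w = Wit(k, 5, z, aux)
assert Ver(k, 5, w, z)                   # honest witnesses always verify
```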

Camenisch and Lysyanskaya (2002)

One observes that, for many applications, the set of accumulated values will change many times. Naïvely, one could completely redo the accumulator calculation every time; however, this may be inefficient, especially if the set is very large and the change is very small. To formalize this intuition, Camenisch and Lysyanskaya defined a dynamic accumulator scheme to consist of the four components of an ordinary accumulator scheme, plus three more: [2] [4]

  1. Add: a (possibly probabilistic) algorithm that takes in a key $k$, an accumulated value $z$, and a value $y$ to accumulate, and returns a new accumulated value $z'$ and auxiliary information $\mathrm{aux}$. We insist that if $z$ was generated by accumulating some set $Y$, then $z'$ must be as if it were generated by accumulating the set $Y \cup \{y\}$.
  2. Del: a (possibly probabilistic) algorithm that takes in a key $k$, an accumulated value $z$, and a previously accumulated value $y$ to remove, and returns a new accumulated value $z'$ and auxiliary information $\mathrm{aux}$. We insist that if $z$ was generated by accumulating some set $Y$, then $z'$ must be as if it were generated by accumulating the set $Y \setminus \{y\}$.
  3. Upd: a deterministic algorithm that takes in the key $k$, a value $y$, a witness $w$, the new accumulated value $z'$, and the auxiliary information $\mathrm{aux}$, and returns a new witness $w'$. We insist that if $k$ was generated by Gen, $y$ is a member of a set $Y$, $w$ is a witness for $y$ being a member of $Y$, $z$ is an accumulated value for $Y$, and $z'$ was generated by running Add or Del, then $w'$ will be a witness for $y$ being a member of the new set.

Fazio and Nicolosi note that since Add, Del, and Upd can be simulated by rerunning Eval and Wit, this definition does not add any fundamentally new functionality. [2]
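
As an illustration, the Add and Upd operations are especially simple for the RSA-inspired accumulator described under Examples below: adding a value is one modular exponentiation, and every existing witness is updated the same way. The following minimal Python sketch uses toy parameters; Del is omitted because efficient deletion in that scheme generally requires the trapdoor (the factorization of the modulus).

```python
# Sketch of the dynamic operations for the RSA-inspired accumulator
# h(x, y) = x^y mod n. Add folds a new value into the accumulated value;
# Upd refreshes an existing member's witness after the addition.

n = 23 * 47                                  # toy modulus (insecure)

def Add(z: int, y_new: int) -> int:
    """New accumulated value, as if y_new had been accumulated all along."""
    return pow(z, y_new, n)

def Upd(w: int, y_new: int) -> int:
    """Update a member's witness after y_new was added."""
    return pow(w, y_new, n)

# The member y = 5 holds witness w with pow(w, y, n) == z over {3, 5, 11}.
x0, y, y_new = 7, 5, 13
w = pow(x0, 3 * 11, n)                       # partial accumulation without y
z = pow(w, y, n)                             # current accumulated value
z_new, w_new = Add(z, y_new), Upd(w, y_new)
assert pow(w_new, y, n) == z_new             # the witness still verifies
```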

Examples

One example is multiplication of large primes: the accumulated value is the product of the set's members. This is a cryptographic accumulator, since it takes superpolynomial time to factor a composite number (at least according to conjecture), but it takes only polynomial time to divide by a candidate prime to check whether it is one of the factors and, if so, to factor it out. New members may be added to or removed from the set of factors by multiplying by or dividing out the corresponding prime, respectively. In this system, two accumulators that have accumulated a single shared prime reveal it trivially via their greatest common divisor, even without prior knowledge of the prime (which would otherwise require factoring the accumulator to discover).
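
A toy illustration of this accumulator and of the shared-factor leak, in Python:

```python
# Multiplying primes as an accumulator: membership is checked by trial
# division, members are added/removed by multiplying/dividing, and a
# single gcd exposes a prime shared by two accumulators.
from math import gcd

acc1 = 3 * 7 * 19        # accumulates the set {3, 7, 19}
acc2 = 5 * 11 * 19       # accumulates the set {5, 11, 19}

assert acc1 % 7 == 0     # membership check: divide by the candidate prime
acc1 *= 23               # add a new member
acc1 //= 23              # remove it again by factoring it out

assert gcd(acc1, acc2) == 19   # the shared prime, found without factoring
```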

More practical accumulators use a quasi-commutative hash function, so that the size of the accumulator does not grow with the number of members. For example, Benaloh and de Mare propose a cryptographic accumulator inspired by RSA: the quasi-commutative function $h(x, y) := x^y \bmod n$ for some composite number $n$. They recommend choosing $n$ to be a rigid integer (i.e. the product of two safe primes). [1] Barić and Pfitzmann proposed a variant where $y$ is restricted to be a prime below a fixed bound (a bound close to $n$ that does not leak information about the prime factorization of $n$). [2] [3]
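
For illustration, a short sketch of checking the safe-prime condition, using the third-party sympy library for primality testing (an assumed dependency); real parameters would be thousands of bits long:

```python
# A "rigid" modulus n = p*q, where p and q are safe primes
# (p = 2p' + 1 with p' also prime). Toy sizes for illustration only.
from sympy import isprime

def is_safe_prime(p: int) -> bool:
    return isprime(p) and isprime((p - 1) // 2)

p, q = 23, 47            # 23 = 2*11 + 1 and 47 = 2*23 + 1 are both safe
assert is_safe_prime(p) and is_safe_prime(q)
n = p * q                # modulus for the accumulator h(x, y) = x^y mod n
```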

David Naccache observed in 1993 that a broader family of modular-exponentiation functions is also quasi-commutative, generalizing the RSA-inspired accumulator above. Naccache also noted that the Dickson polynomials are quasi-commutative in the degree, but it is unknown whether this family of functions is one-way. [1]
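
The degree-composition property of Dickson polynomials is easy to verify numerically. The sketch below fixes the Dickson parameter $\alpha = 1$ (a simplifying assumption under which $D_m(D_k(x)) = D_{mk}(x)$ holds exactly):

```python
# Dickson polynomials D_k(x) with parameter alpha = 1, evaluated via the
# recurrence D_0 = 2, D_1 = x, D_k = x*D_{k-1} - alpha*D_{k-2}. Composition
# multiplies degrees, so they are quasi-commutative in the degree.

def dickson(k: int, x: int, alpha: int = 1) -> int:
    prev, cur = 2, x                     # D_0 and D_1
    for _ in range(k - 1):
        prev, cur = cur, x * cur - alpha * prev
    return 2 if k == 0 else cur

x = 3
assert dickson(2, dickson(3, x)) == dickson(3, dickson(2, x)) == dickson(6, x)
```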

In 1996, Nyberg constructed an accumulator which is provably information-theoretically secure in the random oracle model. Choosing an upper limit $N$ on the number of items that can be securely accumulated and a security parameter $\lambda$, define the constant $\ell := N\lambda$, an integer multiple of $N$ (so that one can write $\ell = Nd$ with $d = \lambda$), and let $H \colon \{0, 1\}^* \to \{0, 1\}^\ell$ be some cryptographically secure hash function. Choose a key $k$ as a random $N$-bit bitstring. Then, to accumulate using Nyberg's scheme, use the quasi-commutative hash function $h(x, y) := x \wedge \varphi(H(y))$, where $\wedge$ is the bitwise AND operation and $\varphi$ is the function that interprets its input as a sequence of $N$ bitstrings of $d$ bits each, replaces every all-zero bitstring with a single 0 and every other bitstring with a 1, and outputs the result. [2] [5]
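
The following Python sketch illustrates the idea with SHAKE-128 standing in for the random oracle and deliberately tiny parameters; the sizes and helper names are illustrative assumptions, not Nyberg's original choices.

```python
# Nyberg-style accumulation: hash each item, collapse every d-bit block of
# the digest to one bit (0 only if the block is all-zero), and accumulate
# with bitwise AND. Membership of y holds iff accumulating y changes nothing.
import hashlib

N, d = 8, 4                                    # toy parameters

def phi_H(y: bytes) -> int:
    digest = hashlib.shake_128(y).digest(N * d // 8)
    bits = int.from_bytes(digest, "big")
    out = 0
    for i in range(N):                         # collapse each d-bit block
        block = (bits >> (d * i)) & ((1 << d) - 1)
        out |= (block != 0) << i               # 1 unless the block is zero
    return out

def accumulate(z: int, y: bytes) -> int:
    return z & phi_H(y)                        # quasi-commutative bitwise AND

z = (1 << N) - 1                               # start from the all-ones string
for item in (b"alice", b"bob"):
    z = accumulate(z, item)

def maybe_member(z: int, y: bytes) -> bool:    # may give false positives
    return z & phi_H(y) == z

assert maybe_member(z, b"alice")
```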

Applications

Haber and Stornetta showed in 1990 that accumulators can be used to timestamp documents through cryptographic chaining. (This concept anticipates the modern notion of a cryptographic blockchain.) [1] [2] [6] Benaloh and de Mare proposed an alternative scheme in 1991 based on discretizing time into rounds. [1] [7]

Benaloh and de Mare showed that accumulators can be used so that a large group of people can recognize each other at a later time (which Fazio and Nicolosi call an "ID Escrow" situation). Each person selects a value $y_i$ representing their identity, and the group collectively selects a public accumulator $h$ and a secret starting value $x$. Then, the group publishes or saves the hash function and the accumulated hash $z$ of all the group's identities with respect to $x$; simultaneously, each member of the group keeps both its identity value $y_i$ and the partial accumulated hash $z_i$ of all the group's identities except that of the member. (If the members of the group do not trust each other, or if the accumulator has a cryptographic trapdoor as in the case of the RSA-inspired accumulator, then they can compute the accumulated hashes by secure multiparty computation.) To verify that a claimed member did indeed belong to the group later, they present their identity value and personal partial hash (or a zero-knowledge proof thereof); by accumulating the claimed identity into the partial hash and checking the result against the accumulated hash of the entire group, anyone can verify membership. [1] [2] With a dynamic accumulator scheme, it is additionally easy to add or remove members afterward. [2] [4]

Cryptographic accumulators can also be used to construct other cryptographically secure data structures:

  * Goodrich, Tamassia, and Hasić constructed an efficient dynamic and distributed cryptographic accumulator, suitable for building authenticated dictionaries. [8]
  * Papamanthou, Tamassia, and Triandopoulos used cryptographic accumulators to build authenticated hash tables, whose contents can be verified even when stored by an untrusted party. [9]

The concept has received renewed interest due to the Zerocoin add-on to Bitcoin, which employs cryptographic accumulators to eliminate trackable linkage in the Bitcoin blockchain, making transactions anonymous and more private. [10] [11] [12] More concretely, to mint (create) a zerocoin, one publishes a coin $c$ given by a cryptographic commitment to a serial number $S$ under a secret random value $r$ (which all users will accept as long as it is correctly formed); to spend (reclaim) a zerocoin, one publishes the zerocoin's serial number along with a non-interactive zero-knowledge proof (NIZKP) that they know of some published commitment that relates to the claimed serial number, then claims the coin (which all users will accept as long as the NIZKP is valid and the serial number has not appeared before). [10] [11] Since the initial proposal of Zerocoin, it has been succeeded by the Zerocash protocol, which is being developed into Zcash, a fully fledged digital currency. [13] [14]
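
As a rough illustration of the mint step only, the following sketch commits to a serial number with a Pedersen-style commitment; the accumulator of coin commitments and the spend-time zero-knowledge proof are omitted entirely, and all parameters are toy values.

```python
# Minting a toy "zerocoin": publish c = g^S * h^r mod p, a commitment to
# the serial number S under secret randomness r. Spending later reveals S
# (plus a zero-knowledge proof, not shown) without linking back to c.
import secrets

p = 2039                         # toy safe prime, 2039 = 2*1019 + 1
g, h = 2, 3                      # toy generators (chosen independently)

S = secrets.randbelow(1019)      # coin serial number, revealed on spend
r = secrets.randbelow(1019)      # commitment randomness, never revealed
coin = (pow(g, S, p) * pow(h, r, p)) % p   # published when minting
```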

References

  1. Benaloh, Josh; de Mare, Michael (1994). "One-Way Accumulators: A Decentralized Alternative to Digital Signatures" (PDF). Advances in Cryptology — EUROCRYPT '93. Lecture Notes in Computer Science. Vol. 765. pp. 274–285. doi:10.1007/3-540-48285-7_24. ISBN 978-3-540-57600-6. Retrieved 3 May 2021.
  2. Fazio, Nelly; Nicolosi, Antonio (2002). "Cryptographic Accumulators: Definitions, Constructions and Applications" (PDF). Archived (PDF) from the original on 3 June 2006. Retrieved 30 January 2021.
  3. Barić, Niko; Pfitzmann, Birgit (1997). "Collision-Free Accumulators and Fail-Stop Signature Schemes Without Trees". In Fumy, Walter (ed.). Advances in Cryptology — EUROCRYPT '97. Lecture Notes in Computer Science. Vol. 1233. Berlin, Heidelberg: Springer. pp. 480–494. doi:10.1007/3-540-69053-0_33. ISBN 978-3-540-69053-5.
  4. Camenisch, Jan; Lysyanskaya, Anna (2002). "Dynamic Accumulators and Application to Efficient Revocation of Anonymous Credentials". In Yung, Moti (ed.). Advances in Cryptology — CRYPTO 2002. Lecture Notes in Computer Science. Vol. 2442. Berlin, Heidelberg: Springer. pp. 61–76. doi:10.1007/3-540-45708-9_5. ISBN 978-3-540-45708-4.
  5. Nyberg, Kaisa (1996). "Fast accumulated hashing". In Gollmann, Dieter (ed.). Fast Software Encryption. Lecture Notes in Computer Science. Vol. 1039. Berlin, Heidelberg: Springer. pp. 83–87. doi:10.1007/3-540-60865-6_45. ISBN 978-3-540-49652-6.
  6. Haber, Stuart; Stornetta, W. Scott (1991). "How to Time-Stamp a Digital Document". In Menezes, Alfred J.; Vanstone, Scott A. (eds.). Advances in Cryptology — CRYPTO '90. Lecture Notes in Computer Science. Vol. 537. Berlin, Heidelberg: Springer. pp. 437–455. doi:10.1007/3-540-38424-3_32. ISBN 978-3-540-38424-3.
  7. Benaloh, J.; de Mare, M. (August 1991). "Efficient Broadcast Time-Stamping". Microsoft. CiteSeerX 10.1.1.38.9199. MSR-TR 91-1.
  8. Goodrich, Michael T.; Tamassia, Roberto; Hasić, Jasminka (11 November 2001). "An Efficient Dynamic and Distributed Cryptographic Accumulator" (PDF). Information Security. Lecture Notes in Computer Science. Vol. 2433. pp. 372–388. doi:10.1007/3-540-45811-5_29. ISBN 978-3-540-44270-7. Archived from the original on 13 March 2003.
  9. Papamanthou, Charalampos; Tamassia, Roberto; Triandopoulos, Nikos (18 August 2009). "Cryptographic Accumulators for Authenticated Hash Tables". Cryptology ePrint Archive. CiteSeerX 10.1.1.214.7737.
  10. Miers, Ian; Garman, Christina; Green, Matthew; Rubin, Aviel D. (2013). "Zerocoin: Anonymous Distributed E-Cash from Bitcoin" (PDF). 2013 IEEE Symposium on Security and Privacy. pp. 397–411. doi:10.1109/SP.2013.34. ISBN 978-0-7695-4977-4. S2CID 9194314. Retrieved 3 May 2021.
  11. Green, Matthew (11 April 2013). "Zerocoin: making Bitcoin anonymous". A Few Thoughts on Cryptographic Engineering. Archived from the original on 21 May 2014. Retrieved 3 May 2021.
  12. "Zerocoin: Anonymous Distributed E-Cash from Bitcoin". Archived 8 February 2014 at the Wayback Machine.
  13. "Zerocoin Project". zerocoin.org. Retrieved 4 May 2021.
  14. "Privacy-protecting digital currency | Zcash". Zcash. Retrieved 4 May 2021.