Distributed key generation

Distributed key generation (DKG) is a cryptographic process in which multiple parties contribute to the calculation of a shared public and private key set. Unlike most public key encryption models, distributed key generation does not rely on trusted third parties. [1] Instead, the participation of a threshold of honest parties determines whether a key pair can be computed successfully. [2] Distributed key generation prevents any single party from having access to the private key. Because many parties are involved, distributed key generation must ensure secrecy even in the presence of malicious contributions to the key calculation. [1]

Distributed key generation is commonly used to decrypt shared ciphertexts or to create group digital signatures. [2]

History

A distributed key generation protocol was first specified by Torben Pedersen in 1991. This first model depended on the security of the Joint-Feldman protocol for verifiable secret sharing during the secret-sharing process. [3]

In 1999, Rosario Gennaro, Stanislaw Jarecki, Hugo Krawczyk, and Tal Rabin published a series of security proofs demonstrating that Feldman verifiable secret sharing left Pedersen's distributed key generator vulnerable to malicious contributions that could leak information about the shared private key. [4] [5] The same group also proposed an updated distributed key generation scheme that prevents malicious contributions from affecting the value of the private key.

Methods

The distributed key generation protocol specified by Gennaro, Jarecki, Krawczyk, and Rabin assumes that a group of players has already been established by an honest party prior to the key generation. It also assumes the communication between parties is synchronous. [5]

  1. All parties use Pedersen's verifiable secret sharing protocol to share the results of two random polynomial functions.
  2. Every party then verifies all the shares it received. If verification fails, the recipient broadcasts a complaint against the party whose share failed. Each accused party then broadcasts the shares in question. Each party then has the opportunity to verify the broadcast shares or to disqualify accused parties. All parties generate a common list of non-disqualified parties.
  3. Each non-disqualified party broadcasts a set of values constructed by raising a common generator to the power of each coefficient of its first polynomial from step 1.
  4. These broadcast values are verified by each party in the same way as in step 2. When a verification fails, the complaining party broadcasts both the values it received in step 1 and the values it received in step 3. For each party with verifiable complaints, all other parties reconstruct the accused party's values in order to eliminate disqualified contributions.
  5. The shared private key is the sum of every qualified contribution (each qualified party's first random polynomial evaluated at 0), and the corresponding public key is the product of the constant-term values broadcast in step 3; no individual party ever learns the private key itself. A simplified sketch of these steps appears below. [5]
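
The following is a minimal, insecure Python sketch of the sharing, verification, and key-derivation flow under simplifying assumptions: it uses a toy 11-bit group, Feldman-style commitments to a single polynomial rather than Pedersen commitments to two, and it omits the complaint and disqualification rounds. The group parameters, party count, and threshold are illustrative choices, not values taken from the sources.

```python
import random

# Toy group parameters (far too small for real use): p = 2q + 1 and g generates
# the order-q subgroup of Z_p^*.  Real deployments use large, standardised groups.
q = 1019          # prime order of the subgroup
p = 2 * q + 1     # 2039, also prime
g = 4             # a generator of the order-q subgroup

n, t = 5, 2       # five parties; polynomials of degree t, so t + 1 shares suffice

def deal(seed):
    """One party picks a random degree-t polynomial f_i, computes shares f_i(j)
    for every party j, and publishes Feldman commitments g^{coefficient}."""
    rng = random.Random(seed)
    coeffs = [rng.randrange(q) for _ in range(t + 1)]
    shares = {j: sum(c * pow(j, k, q) for k, c in enumerate(coeffs)) % q
              for j in range(1, n + 1)}
    commitments = [pow(g, c, p) for c in coeffs]
    return coeffs, shares, commitments

def verify(j, share_ij, commitments):
    """Party j checks that g^{f_i(j)} matches the published commitments."""
    expected = 1
    for k, C in enumerate(commitments):
        expected = expected * pow(C, pow(j, k, q), p) % p
    return pow(g, share_ij, p) == expected

contributions = [deal(seed) for seed in range(n)]

# Step 2: every party verifies every share it received (complaints omitted here).
assert all(verify(j, shares[j], comms)
           for _, shares, comms in contributions
           for j in range(1, n + 1))

# Step 5: the shared secret is the SUM of the constant terms; the public key is
# the PRODUCT of the constant-term commitments.  The secret is recomputed here
# only to check consistency -- in the protocol no single party ever learns it.
x = sum(coeffs[0] for coeffs, _, _ in contributions) % q
y = 1
for _, _, comms in contributions:
    y = y * comms[0] % p
assert y == pow(g, x, p)
print("joint public key:", y)
```

In the full protocol, each party would additionally commit to a second masking polynomial, handle complaints, and agree on the qualified set before the public key is fixed.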

Avoiding the synchrony assumption

In 2009, Aniket Kate and Ian Goldberg presented a distributed key generation protocol suitable for use over the Internet. [6] Unlike earlier constructions, this protocol does not require a broadcast channel or the synchronous communication assumption, and a ready-to-use library is available.

Robustness

In many circumstances, a robust distributed key generator is necessary. Robust generator protocols can reconstruct public keys in order to remove malicious shares, even if malicious parties remain in the qualified group during the reconstruction phase. [5] For example, robust multi-party digital signatures can tolerate a number of malicious users roughly proportional to the length of the modulus used during key generation. [7]

Sparse evaluated DKG

Distributed key generators can implement a sparse evaluation matrix in order to improve efficiency during the verification stages. With n parties and a threshold of t malicious users, sparse evaluation reduces the run time of verification well below that of checking every share. Instead of robust verification, sparse evaluation requires only that a small set of the parties verify a small, randomly picked set of shares. This leaves a small probability that key generation will fail if a large number of malicious shares happen not to be chosen for verification. [8]
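
As a rough illustration of the trade-off, the calculation below estimates the probability that every malicious share escapes a random spot-check. The sampling model (a verifier checks a uniformly random subset of shares) and the numbers used are simplifying assumptions, not the exact scheme of Canny and Sorkin.

```python
# Spot-checking a random subset of shares instead of all of them trades a small
# failure probability for much less verification work per party.

def miss_probability(n, bad, checks):
    """Probability that `checks` shares sampled uniformly without replacement
    from `n` shares include none of the `bad` (malicious) ones."""
    prob = 1.0
    for i in range(checks):
        prob *= (n - bad - i) / (n - i)
    return prob

n, bad = 10_000, 500   # 10,000 shares, 5% of them malicious (illustrative numbers)
for checks in (10, 50, 100, 200):
    print(f"{checks:>4} spot-checks miss every bad share "
          f"with probability {miss_probability(n, bad, checks):.2e}")

# Several verifiers sampling independently drive the failure probability down
# geometrically: with v verifiers it is roughly miss_probability(n, bad, checks) ** v.
```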

Applications

Distributed key generation and distributed key cryptography are rarely applied over the Internet because of the reliance on synchronous communication. [5]

Distributed key cryptography is useful in key escrow services, where a threshold of parties within a company must cooperate to decrypt a ciphertext version of a private key. In this way, a company can require multiple employees to participate in recovering a private key without giving the escrow service a plaintext copy. [1]
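
A minimal sketch of the threshold-recovery part of this idea, using Shamir secret sharing over a prime field: the field size, employee count, and threshold are illustrative assumptions, and a production escrow system would use a vetted library and a complete protocol rather than this toy code.

```python
import random

P = 2**127 - 1                      # a Mersenne prime, used here as the field modulus
_rng = random.SystemRandom()

def split(secret, n, t):
    """Split `secret` into n shares such that any t of them reconstruct it."""
    coeffs = [secret] + [_rng.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# The escrowed private key is split among five employees; any three can recover it,
# so the escrow service never needs to hold (or see) the key in plaintext.
key = _rng.randrange(P)
shares = split(key, n=5, t=3)
assert reconstruct(shares[:3]) == key                # three employees suffice
assert reconstruct(random.sample(shares, 3)) == key  # any three work
assert reconstruct(shares[:2]) != key                # two reveal (almost surely) nothing
```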

Distributed key generation is also useful in server-side password authentication. If password hashes are stored on a single server, a breach of that server would make all the password hashes available for attackers to analyze offline. Variations of distributed key generation can authenticate user passwords across multiple servers and eliminate single points of failure. [9] [10]

Distributed key generation is more commonly used for group digital signatures. This acts as a form of voting: a threshold of group members must participate in order for the group to digitally sign a document. [2]

References

  1. Kate, Aniket; Goldberg, Ian (2010). "Distributed Private-Key Generators for Identity-Based Cryptography". Security and Cryptography for Networks. Lecture Notes in Computer Science. Vol. 6280. pp. 436–453. CiteSeerX 10.1.1.389.4486. doi:10.1007/978-3-642-15317-4_27. ISBN 978-3-642-15316-7.
  2. Boldyreva, Alexandra (2003). "Threshold Signatures, Multisignatures and Blind Signatures Based on the Gap-Diffie-Hellman-Group Signature Scheme" (PDF). Public Key Cryptography — PKC 2003. Lecture Notes in Computer Science. Vol. 2567. pp. 31–46. doi:10.1007/3-540-36288-6_3. ISBN 978-3-540-00324-3.
  3. Pedersen, T. P. (1992). "Non-Interactive and Information-Theoretic Secure Verifiable Secret Sharing". Advances in Cryptology – CRYPTO '91. Lecture Notes in Computer Science. Vol. 576. pp. 129–140. doi:10.1007/3-540-46766-1_9. ISBN 978-3-540-55188-1.
  4. Gennaro, Rosario; Jarecki, Stanisław; Krawczyk, Hugo; Rabin, Tal (1999). "Secure Distributed Key Generation for Discrete-Log Based Cryptosystems". Proceedings of the 17th International Conference on Theory and Application of Cryptographic Techniques (EUROCRYPT '99). Berlin, Heidelberg: Springer-Verlag. pp. 295–310. ISBN 978-3-540-65889-4.
  5. Gennaro, Rosario; Jarecki, Stanislaw; Krawczyk, Hugo; Rabin, Tal (24 May 2006). "Secure Distributed Key Generation for Discrete-Log Based Cryptosystems". Journal of Cryptology. 20 (1): 51–83. CiteSeerX 10.1.1.134.6445. doi:10.1007/s00145-006-0347-3. S2CID 3331212.
  6. Kate, Aniket; Goldberg, Ian (2009). "Distributed Key Generation for the Internet". IEEE ICDCS 2009. doi:10.1109/ICDCS.2009.21.
  7. Castelluccia, Claude; Jarecki, Stanisław; Kim, Jihye; Tsudik, Gene (2006). "Secure acknowledgment aggregation and multisignatures with limited robustness". Computer Networks. 50 (10): 1639–1652. doi:10.1016/j.comnet.2005.09.021.
  8. Canny, John; Sorkin, Steve (2004). "Practical Large-Scale Distributed Key Generation" (PDF). Advances in Cryptology – EUROCRYPT 2004. Lecture Notes in Computer Science. Vol. 3027. pp. 138–152. CiteSeerX 10.1.1.69.6028. doi:10.1007/978-3-540-24676-3_9. ISBN 978-3-540-21935-4.
  9. MacKenzie, Philip; Shrimpton, Thomas; Jakobsson, Markus (2006). "Threshold Password-Authenticated Key Exchange". Journal of Cryptology. 19 (1): 27–66. CiteSeerX 10.1.1.101.6403. doi:10.1007/s00145-005-0232-5. S2CID 1732140.
  10. Jarecki, Stanislaw; Kiayias, Aggelos; Krawczyk, Hugo (2014). "Round-Optimal Password-Protected Secret Sharing and T-PAKE in the Password-Only Model" (PDF). Cryptology ePrint Archive. 650. Retrieved 5 November 2014.