Ring learning with errors

In post-quantum cryptography, ring learning with errors (RLWE) is a computational problem which serves as the foundation of new cryptographic algorithms, such as NewHope, designed to protect against cryptanalysis by quantum computers and also to provide the basis for homomorphic encryption. Public-key cryptography relies on the construction of mathematical problems that are believed to be hard to solve if no further information is available, but are easy to solve if some information used in the problem construction is known. Some problems of this sort that are currently used in cryptography are at risk of attack if sufficiently large quantum computers can ever be built, so resistant problems are sought. Homomorphic encryption is a form of encryption that allows computation on ciphertext, such as arithmetic on numeric values stored in an encrypted database.

RLWE is more properly called learning with errors over rings and is simply the larger learning with errors (LWE) problem specialized to polynomial rings over finite fields. [1] Because of the presumed difficulty of solving the RLWE problem even on a quantum computer, RLWE based cryptography may form the fundamental base for public-key cryptography in the future, just as the integer factorization and discrete logarithm problems have served as the base for public-key cryptography since the early 1980s. [2] An important feature of basing cryptography on the ring learning with errors problem is the fact that the solution to the RLWE problem can be used to solve a version of the shortest vector problem (SVP) in a lattice (a polynomial-time reduction from this SVP problem to the RLWE problem has been presented [1]).

Background

The security of modern cryptography, in particular public-key cryptography, is based on the assumed intractability of solving certain computational problems if the size of the problem is large enough and the instance of the problem to be solved is chosen randomly. The classic example that has been used since the 1970s is the integer factorization problem. It is believed that it is computationally intractable to factor the product of two prime numbers if those prime numbers are large enough and chosen at random. [3] As of 2015 research has led to the factorization of the product of two 384-bit primes but not the product of two 512-bit primes. Integer factorization forms the basis of the widely used RSA cryptographic algorithm.

The ring learning with errors (RLWE) problem is built on the arithmetic of polynomials with coefficients from a finite field. [1] A typical polynomial is expressed as:

$$a(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_{n-2} x^{n-2} + a_{n-1} x^{n-1}$$

Polynomials can be added and multiplied in the usual fashion. In the RLWE context the coefficients of the polynomials and all operations involving those coefficients will be done in a finite field, typically the field $\mathbb{F}_q = \mathbb{Z}/q\mathbb{Z}$ for a prime integer $q$. The set of polynomials over a finite field with the operations of addition and multiplication forms an infinite polynomial ring ($\mathbb{F}_q[x]$). The RLWE context works with a finite quotient ring of this infinite ring. The quotient ring is typically the finite quotient (factor) ring formed by reducing all of the polynomials in $\mathbb{F}_q[x]$ modulo an irreducible polynomial $\Phi(x)$. This finite quotient ring can be written as $\mathbb{F}_q[x]/\Phi(x)$, though many authors write $\mathbb{Z}_q[x]/\Phi(x)$. [1]

If the degree of the polynomial $\Phi(x)$ is $n$, the quotient ring becomes the ring of polynomials of degree less than $n$ modulo $\Phi(x)$ with coefficients from $\mathbb{F}_q$. The values $n$, $q$, together with the polynomial $\Phi(x)$, partially define the mathematical context for the RLWE problem.
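
To make this arithmetic concrete, the following minimal Python sketch implements addition and multiplication in the quotient ring, assuming the commonly used cyclotomic choice $\Phi(x) = x^n + 1$ with $n$ a power of two. The parameters $n = 8$ and $q = 17$ are purely illustrative and far too small for any real security.

```python
# Minimal sketch of arithmetic in F_q[x]/(x^n + 1).
# The parameters below are illustrative only; real RLWE schemes use much larger n and q.
n = 8    # degree of the quotient polynomial Phi(x) = x^n + 1
q = 17   # prime modulus for the coefficients

def poly_add(a, b):
    """Add two ring elements, given as coefficient lists of length n, modulo q."""
    return [(x + y) % q for x, y in zip(a, b)]

def poly_mul(a, b):
    """Multiply two ring elements modulo q and modulo x^n + 1."""
    prod = [0] * (2 * n)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    # Reduce modulo x^n + 1: since x^n = -1, coefficient k + n folds into k with a sign flip.
    return [(prod[k] - prod[k + n]) % q for k in range(n)]
```

The reduction step is what keeps the ring finite: every product is folded back to a polynomial of degree less than $n$.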

Another concept necessary for the RLWE problem is the idea of "small" polynomials with respect to some norm. The typical norm used in the RLWE problem is known as the infinity norm (also called the uniform norm). The infinity norm of a polynomial is simply the largest coefficient of the polynomial when these coefficients are viewed as integers. Hence, $\|a(x)\|_\infty = b$ states that the infinity norm of the polynomial $a(x)$ is $b$. Thus $b$ is the largest coefficient of $a(x)$.

The final concept necessary to understand the RLWE problem is the generation of random polynomials in $\mathbb{F}_q[x]/\Phi(x)$ and the generation of "small" polynomials. A random polynomial is easily generated by simply randomly sampling the $n$ coefficients of the polynomial from $\mathbb{F}_q$, where $\mathbb{F}_q$ is typically represented as the set of integers $\{-(q-1)/2, \ldots, -1, 0, 1, \ldots, (q-1)/2\}$.
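
Continuing the illustrative sketch above (and reusing its hypothetical $n$ and $q$), a uniformly random ring element can be sampled coefficient by coefficient, and its infinity norm read off after mapping each coefficient into the centered set just described:

```python
import random

def centered(c):
    """Map a coefficient in {0, ..., q-1} to the centered set {-(q-1)/2, ..., (q-1)/2}."""
    return c - q if c > (q - 1) // 2 else c

def random_poly():
    """Uniformly random element of F_q[x]/(x^n + 1), as a list of n coefficients."""
    return [random.randrange(q) for _ in range(n)]

def infinity_norm(a):
    """Infinity norm: the largest coefficient in absolute value, in centered form."""
    return max(abs(centered(c)) for c in a)
```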

Randomly generating a "small" polynomial is done by generating the coefficients of the polynomial from $\mathbb{F}_q$ in a way that either guarantees or makes very likely small coefficients. When $q$ is a prime integer, there are two common ways to do this (a short sampling sketch follows the list):

  1. Using uniform sampling – The coefficients of the small polynomial are uniformly sampled from a set of small coefficients. Let $b$ be an integer that is much less than $q$. If we randomly choose coefficients from the set $\{-b, -b+1, \ldots, -1, 0, 1, \ldots, b-1, b\}$, the polynomial will be small with respect to the bound ($b$).
  2. Using discrete Gaussian sampling – For an odd value of $q$, the coefficients of the polynomial are randomly chosen by sampling from the set $\{-(q-1)/2, \ldots, (q-1)/2\}$ according to a discrete Gaussian distribution with mean $0$ and distribution parameter $\sigma$. The references describe in full detail how this can be accomplished. It is more complicated than uniform sampling but it allows for a proof of security of the algorithm. The paper "Sampling from Discrete Gaussians for Lattice-Based Cryptography on a Constrained Device" by Dwarakanath and Galbraith provides an overview of this problem. [4]
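
Both sampling methods can be sketched as follows, again reusing the illustrative $n$ and $q$ from the earlier snippets. A production implementation would use a carefully analyzed, constant-time sampler of the kind surveyed in [4]; the rejection sampler below is only a toy.

```python
import math
import random

def small_poly_uniform(bound):
    """Method 1: coefficients drawn uniformly from {-bound, ..., bound}, stored mod q."""
    return [random.randint(-bound, bound) % q for _ in range(n)]

def sample_discrete_gaussian(sigma, tail=10):
    """Toy rejection sampler for a discrete Gaussian with mean 0 and parameter sigma,
    truncated to the range [-tail*sigma, tail*sigma]."""
    cut = tail * int(math.ceil(sigma))
    while True:
        x = random.randint(-cut, cut)
        # Accept x with probability proportional to exp(-x^2 / (2 sigma^2)).
        if random.random() < math.exp(-x * x / (2.0 * sigma * sigma)):
            return x

def small_poly_gaussian(sigma):
    """Method 2: coefficients drawn from the discrete Gaussian, stored mod q."""
    return [sample_discrete_gaussian(sigma) % q for _ in range(n)]
```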

The RLWE Problem

The RLWE problem can be stated in two different ways: a "search" version and a "decision" version. Both begin with the same construction. Let $a_i(x)$ be a set of random but known polynomials from $\mathbb{F}_q[x]/\Phi(x)$ with coefficients from all of $\mathbb{F}_q$; let $e_i(x)$ be a set of small random and unknown polynomials relative to a bound $b$ in the ring $\mathbb{F}_q[x]/\Phi(x)$; let $s(x)$ be a small unknown polynomial relative to a bound $b$ in the ring $\mathbb{F}_q[x]/\Phi(x)$; and let $b_i(x) = (a_i(x) \cdot s(x)) + e_i(x)$.

The Search version entails finding the unknown polynomial $s(x)$ given the list of polynomial pairs $(a_i(x), b_i(x))$.

The Decision version of the problem can be stated as follows. Given a list of polynomial pairs $(a_i(x), b_i(x))$, determine whether the $b_i(x)$ polynomials were constructed as $b_i(x) = (a_i(x) \cdot s(x)) + e_i(x)$ or were generated randomly from $\mathbb{F}_q[x]/\Phi(x)$ with coefficients from all of $\mathbb{F}_q$.

The difficulty of this problem is parameterized by the choice of the quotient polynomial ($\Phi(x)$), its degree ($n$), the field ($\mathbb{F}_q$), and the smallness bound ($b$). In many RLWE based public key algorithms the private key will be a pair of small polynomials $s(x)$ and $e(x)$. The corresponding public key will be a pair of polynomials $a(x)$, selected randomly from $\mathbb{F}_q[x]/\Phi(x)$, and the polynomial $t(x) = (a(x) \cdot s(x)) + e(x)$. Given $a(x)$ and $t(x)$, it should be computationally infeasible to recover the polynomial $s(x)$.
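
Putting the helpers from the sketches above together, a toy key pair of this form can be generated as follows. The smallness bound and the reuse of the uniform sampler are illustrative choices; real schemes fix much larger, carefully chosen parameters and often use Gaussian or binomial error distributions.

```python
def rlwe_sample(s, bound):
    """Produce one RLWE pair (a, b) with b = a*s + e for a fresh small error e."""
    a = random_poly()                  # uniformly random, can be published
    e = small_poly_uniform(bound)      # small error, discarded after use
    b = poly_add(poly_mul(a, s), e)
    return a, b

# Toy key generation: the private key is the small polynomial s (together with e);
# the public key is the pair (a, t) with t = a*s + e.
bound = 2                              # illustrative smallness bound b
s = small_poly_uniform(bound)
a, t = rlwe_sample(s, bound)
```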

Security Reduction

In cases where the polynomial $\Phi(x)$ is a cyclotomic polynomial, the difficulty of solving the search version of the RLWE problem is equivalent to finding a short vector (but not necessarily the shortest vector) in an ideal lattice formed from elements of $\mathbb{Z}[x]/\Phi(x)$ represented as integer vectors. [1] This problem is commonly known as the approximate shortest vector problem (α-SVP), and it is the problem of finding a vector shorter than α times the shortest vector. The authors of the proof of this equivalence write:

"... we give a quantum reduction from approximate SVP (in the worst case) on ideal lattices in $R$ to the search version of ring-LWE, where the goal is to recover the secret $s \in R_q$ (with high probability, for any $s$) from arbitrarily many noisy products." [1]

In that quote, the ring $R$ is $\mathbb{Z}[x]/\Phi(x)$ and the ring $R_q$ is $\mathbb{Z}_q[x]/\Phi(x)$.
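
The "integer vectors" in this statement come from the coefficient embedding: a ring element corresponds to its length-$n$ vector of integer coefficients, and an ideal of the ring maps to a lattice of such vectors that is closed under addition and under multiplication by $x$. A minimal sketch of that correspondence for the illustrative choice $\Phi(x) = x^n + 1$ (helper names are hypothetical):

```python
def to_vector(a):
    """Coefficient embedding: a_0 + a_1*x + ... + a_{n-1}*x^(n-1) maps to (a_0, ..., a_{n-1})."""
    return list(a)

def times_x(a):
    """Multiplication by x in Z[x]/(x^n + 1): the coefficient vector is rotated,
    with the top coefficient wrapping around with a sign flip (since x^n = -1)."""
    return [-a[-1]] + a[:-1]
```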

The α-SVP in regular lattices is known to be NP-hard due to work by Daniele Micciancio in 2001, although not for the values of α required for a reduction to the general learning with errors problem. [5] However, there is not yet a proof to show that the difficulty of the α-SVP for ideal lattices is equivalent to the average-case α-SVP. Rather, we have a proof that if there are any α-SVP instances that are hard to solve in ideal lattices, then the RLWE problem will be hard in random instances. [1]

Regarding the difficulty of Shortest Vector Problems in Ideal Lattices, researcher Michael Schneider writes, "So far there is no SVP algorithm making use of the special structure of ideal lattices. It is widely believed that solving SVP (and all other lattice problems) in ideal lattices is as hard as in regular lattices." [6] The difficulty of these problems on regular lattices is provably NP-hard. [5] There are, however, a minority of researchers who do not believe that ideal lattices share the same security properties as regular lattices. [7]

Peikert believes that these security equivalences make the RLWE problem a good basis for future cryptography. He writes: "There is a mathematical proof that the only way to break the cryptosystem (within some formal attack model) on its random instances is by being able to solve the underlying lattice problem in the worst case" (emphasis in the original). [8]

RLWE Cryptography

A major advantage that RLWE based cryptography has over the original learning with errors (LWE) based cryptography is found in the size of the public and private keys. RLWE keys are roughly the square root of the size of LWE keys. [1] For 128 bits of security an RLWE cryptographic algorithm would use public keys around 7000 bits in length. [9] The corresponding LWE scheme would require public keys of 49 million bits for the same level of security. [1] [failed verification] On the other hand, RLWE keys are larger than the key sizes for currently used public key algorithms like RSA and Elliptic Curve Diffie-Hellman, which require public key sizes of 3072 bits and 256 bits, respectively, to achieve a 128-bit level of security. From a computational standpoint, however, RLWE algorithms have been shown to be the equal of or better than existing public key systems. [10]

Three groups of RLWE cryptographic algorithms exist:

Ring learning with errors key exchanges (RLWE-KEX)

The fundamental idea of using LWE and Ring LWE for key exchange was proposed and filed at the University of Cincinnati in 2011 by Jintai Ding. The basic idea comes from the associativity of matrix multiplications, and the errors are used to provide the security. The paper [11] appeared in 2012 after a provisional patent application was filed in 2012.

In 2014, Peikert [12] presented a key-transport scheme following the same basic idea as Ding's, in which Ding's new idea of sending an additional one-bit signal for rounding is also utilized. An RLWE version of the classic MQV variant of a Diffie-Hellman key exchange was later published by Zhang et al. [13] The security of both key exchanges is directly related to the problem of finding approximate short vectors in an ideal lattice.

Ring learning with errors signature (RLWE-SIG)

An RLWE version of the classic Feige–Fiat–Shamir identification protocol was created and converted to a digital signature in 2011 by Lyubashevsky. [14] The details of this signature were extended in 2012 by Güneysu, Lyubashevsky, and Pöppelmann and published in their paper "Practical Lattice-Based Cryptography: A Signature Scheme for Embedded Systems." [15] These papers laid the groundwork for a variety of recent signature algorithms, some based directly on the ring learning with errors problem and some which are not tied to the same hard RLWE problems. [16]

Ring learning with errors homomorphic encryption (RLWE-HOM)

The purpose of homomorphic encryption is to allow computations on sensitive data to occur on computing devices that should not be trusted with the data. These computing devices are allowed to process the ciphertext that is output from a homomorphic encryption scheme. In 2011, Brakerski and Vaikuntanathan published "Fully Homomorphic Encryption from Ring-LWE and Security for Key Dependent Messages", which builds a homomorphic encryption scheme directly on the RLWE problem. [17]

References

  1. Lyubashevsky, Vadim; Peikert, Chris; Regev, Oded (2012). "On Ideal Lattices and Learning with Errors Over Rings". Cryptology ePrint Archive.
  2. Peikert, Chris (2014). "Lattice Cryptography for the Internet". In Mosca, Michele (ed.). Post-Quantum Cryptography. Lecture Notes in Computer Science. Vol. 8772. Springer International Publishing. pp. 197–219. CiteSeerX   10.1.1.800.4743 . doi:10.1007/978-3-319-11659-4_12. ISBN   978-3-319-11658-7. S2CID   8123895.
  3. Shor, Peter (20 November 1994). Algorithms for quantum computation: discrete logarithms and factoring. 35th Annual Symposium on Foundations of Computer Science. Santa Fe: IEEE. doi:10.1109/SFCS.1994.365700. ISBN   0-8186-6580-7. This paper gives Las Vegas algorithms for finding discrete logarithms and factoring integers on a quantum computer that take a number of steps which is polynomial in the input size, e.g., the number of digits of the integer to be factored. These two problems are generally considered hard on a classical computer and have been used as the basis of several proposed cryptosystems.
  4. Dwarakanath, Nagarjun C.; Galbraith, Steven D. (2014-03-18). "Sampling from discrete Gaussians for lattice-based cryptography on a constrained device". Applicable Algebra in Engineering, Communication and Computing. 25 (3): 159–180. CiteSeerX   10.1.1.716.376 . doi:10.1007/s00200-014-0218-3. ISSN   0938-1279. S2CID   13718364.
  5. Micciancio, D. (January 1, 2001). "The Shortest Vector in a Lattice is Hard to Approximate to within Some Constant". SIAM Journal on Computing. 30 (6): 2008–2035. CiteSeerX   10.1.1.93.6646 . doi:10.1137/S0097539700373039. ISSN   0097-5397.
  6. Schneider, Michael (2011). "Sieving for Shortest Vectors in Ideal Lattices". Cryptology ePrint Archive.
  7. "cr.yp.to: 2014.02.13: A subfield-logarithm attack against ideal lattices". blog.cr.yp.to. Retrieved 2015-07-03.
  8. "What does GCHQ's "cautionary tale" mean for lattice cryptography?". www.eecs.umich.edu. Archived from the original on 2016-03-17. Retrieved 2016-01-05.
  9. Singh, Vikram (2015). "A Practical Key Exchange for the Internet using Lattice Cryptography". Cryptology ePrint Archive.
  10. de Clercq, Ruan; Roy, Sujoy Sinha; Vercauteren, Frederik; Verbauwhede, Ingrid (2014). "Efficient Software Implementation of Ring-LWE Encryption". Cryptology ePrint Archive.
  11. Ding, Jintai; Xie, Xiang; Lin, Xiaodong (2012-01-01). "A Simple Provably Secure Key Exchange Scheme Based on the Learning with Errors Problem". Cryptology ePrint Archive.
  12. Peikert, Chris (2014-01-01). "Lattice Cryptography for the Internet". Cryptology ePrint Archive.
  13. Zhang, Jiang; Zhang, Zhenfeng; Ding, Jintai; Snook, Michael; Dagdelen, Özgür (2014). "Authenticated Key Exchange from Ideal Lattices". Cryptology ePrint Archive.
  14. Lyubashevsky, Vadim (2011). "Lattice Signatures Without Trapdoors". Cryptology ePrint Archive.
  15. Güneysu, Tim; Lyubashevsky, Vadim; Pöppelmann, Thomas (2012). Prouff, Emmanuel; Schaumont, Patrick (eds.). Practical Lattice-Based Cryptography: A Signature Scheme for Embedded Systems. Lecture Notes in Computer Science. Springer Berlin Heidelberg. pp. 530–547. doi:10.1007/978-3-642-33027-8_31. ISBN   978-3-642-33026-1.
  16. "BLISS Signature Scheme". bliss.di.ens.fr. Retrieved 2015-07-04.
  17. Brakerski, Zvika; Vaikuntanathan, Vinod (2011). Rogaway, Phillip (ed.). Fully Homomorphic Encryption from Ring-LWE and Security for Key Dependent Messages. Lecture Notes in Computer Science. Springer Berlin Heidelberg. pp. 505–524. doi:10.1007/978-3-642-22792-9_29. ISBN   978-3-642-22791-2.