Learning with errors

In cryptography, learning with errors (LWE) is a mathematical problem that is widely used to create secure encryption algorithms. [1] It is based on the idea of representing secret information as a set of equations with errors. In other words, LWE is a way to hide the value of a secret by introducing noise to it. [2] In more technical terms, it refers to the computational problem of inferring a linear $n$-ary function $f$ over a finite ring from given samples $y_i = f(\mathbf{x}_i)$, some of which may be erroneous. The LWE problem is conjectured to be hard to solve, [1] and thus to be useful in cryptography.

More precisely, the LWE problem is defined as follows. Let $\mathbb{Z}_q$ denote the ring of integers modulo $q$ and let $\mathbb{Z}_q^n$ denote the set of $n$-vectors over $\mathbb{Z}_q$. There exists a certain unknown linear function $f: \mathbb{Z}_q^n \rightarrow \mathbb{Z}_q$, and the input to the LWE problem is a sample of pairs $(\mathbf{x}, y)$, where $\mathbf{x} \in \mathbb{Z}_q^n$ and $y \in \mathbb{Z}_q$, so that with high probability $y = f(\mathbf{x})$. Furthermore, the deviation from the equality is according to some known noise model. The problem calls for finding the function $f$, or some close approximation thereof, with high probability.

The LWE problem was introduced by Oded Regev in 2005 [3] (who won the 2018 Gödel Prize for this work); it is a generalization of the parity learning problem. Regev showed that the LWE problem is as hard to solve as several worst-case lattice problems. Subsequently, the LWE problem has been used as a hardness assumption to create public-key cryptosystems, [3] [4] such as the ring learning with errors key exchange by Peikert. [5]

Definition

Denote by $\mathbb{T} = \mathbb{R}/\mathbb{Z}$ the additive group on reals modulo one. Let $\mathbf{s} \in \mathbb{Z}_q^n$ be a fixed vector. Let $\phi$ be a fixed probability distribution over $\mathbb{T}$. Denote by $A_{\mathbf{s},\phi}$ the distribution on $\mathbb{Z}_q^n \times \mathbb{T}$ obtained as follows.

  1. Pick a vector $\mathbf{a} \in \mathbb{Z}_q^n$ from the uniform distribution over $\mathbb{Z}_q^n$,
  2. Pick a number $e \in \mathbb{T}$ from the distribution $\phi$,
  3. Evaluate $t = \langle \mathbf{a}, \mathbf{s} \rangle / q + e$, where $\langle \mathbf{a}, \mathbf{s} \rangle = \sum_{i=1}^n a_i s_i$ is the standard inner product in $\mathbb{Z}_q^n$, the division is done in the field of reals (or more formally, this "division by $q$" is notation for the group homomorphism $\mathbb{Z}_q \rightarrow \mathbb{T}$ mapping $k + q\mathbb{Z}$ to $k/q + \mathbb{Z}$), and the final addition is in $\mathbb{T}$.
  4. Output the pair $(\mathbf{a}, t)$.

The learning with errors problem $\mathrm{LWE}_{q,\phi}$ is to find $\mathbf{s} \in \mathbb{Z}_q^n$, given access to polynomially many samples of choice from $A_{\mathbf{s},\phi}$.

For every $\alpha > 0$, denote by $D_\alpha$ the one-dimensional Gaussian with zero mean and variance $\alpha^2 / (2\pi)$, that is, the density function is $D_\alpha(x) = \rho_\alpha(x) / \alpha$ where $\rho_\alpha(x) = e^{-\pi (x/\alpha)^2}$, and let $\Psi_\alpha$ be the distribution on $\mathbb{T}$ obtained by considering $D_\alpha$ modulo one. The version of LWE considered in most of the results is $\mathrm{LWE}_{q,\Psi_\alpha}$.
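The following is a minimal Python sketch of generating LWE samples. It uses a common discretized variant in which the error is a rounded Gaussian over $\mathbb{Z}_q$ rather than the continuous distribution $\Psi_\alpha$ over $\mathbb{T}$ used above; the parameters are illustrative toy values, far smaller than anything secure.

    import random

    n, q = 8, 97             # toy dimension and prime modulus (illustrative only)
    alpha = 0.01             # relative error width

    def lwe_sample(s):
        """Return one sample (a, b) with b = <a, s> + e (mod q)."""
        a = [random.randrange(q) for _ in range(n)]
        e = round(random.gauss(0, alpha * q))   # small rounded-Gaussian error
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        return a, b

    secret = [random.randrange(q) for _ in range(n)]
    samples = [lwe_sample(secret) for _ in range(20)]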

Decision version

The LWE problem described above is the search version of the problem. In the decision version (DLWE), the goal is to distinguish between noisy inner products and uniformly random samples from $\mathbb{Z}_q^n \times \mathbb{T}$ (practically, some discretized version of it). Regev [3] showed that the decision and search versions are equivalent when $q$ is a prime bounded by some polynomial in $n$.

Intuitively, if we have a procedure for the search problem, the decision version can be solved easily: just feed the input samples for the decision problem to the solver for the search problem. Denote the given samples by $\{(\mathbf{a}_i, b_i)\} \subset \mathbb{Z}_q^n \times \mathbb{T}$. If the solver returns a candidate $\mathbf{s}$, then for all $i$, calculate $\{\langle \mathbf{a}_i, \mathbf{s} \rangle / q - b_i\}$. If the samples are from an LWE distribution, then the results of this calculation will be distributed according to $\phi$; but if the samples are uniformly random, these quantities will be distributed uniformly as well.
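In the discretized toy model from the sketch above, this check amounts to testing whether the residuals $b_i - \langle \mathbf{a}_i, \mathbf{s} \rangle \bmod q$ concentrate near zero. A hedged, self-contained Python sketch; the threshold $q/8$ is an illustrative choice, not from the source:

    n, q = 8, 97

    def residual(a, b, s):
        """Distance of b - <a, s> (mod q) from 0."""
        r = (b - sum(ai * si for ai, si in zip(a, s))) % q
        return min(r, q - r)

    def looks_like_lwe(samples, s, threshold=None):
        """Small average residual suggests LWE samples; uniformly random
        samples give an average residual around q/4."""
        threshold = q / 8 if threshold is None else threshold
        avg = sum(residual(a, b, s) for a, b in samples) / len(samples)
        return avg < threshold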

Solving search assuming decision

For the other direction, given a solver for the decision problem, the search version can be solved as follows: recover $\mathbf{s}$ one coordinate at a time. To obtain the first coordinate, $s_1$, make a guess $k \in \mathbb{Z}_q$, and do the following. Choose a number $r \in \mathbb{Z}_q$ uniformly at random. Transform the given samples $\{(\mathbf{a}_i, b_i)\}$ as follows. Calculate $\{(\mathbf{a}_i + (r, 0, \ldots, 0), b_i + (rk)/q)\}$. Send the transformed samples to the decision solver.

If the guess $k$ was correct, the transformation takes the distribution $A_{\mathbf{s},\phi}$ to itself; otherwise, since $q$ is prime, it takes it to the uniform distribution. So, given a polynomial-time solver for the decision problem that errs with very small probability, since $q$ is bounded by some polynomial in $n$, it only takes polynomial time to guess every possible value for $k$ and use the solver to see which one is correct.

After obtaining $s_1$, we follow an analogous procedure for each other coordinate $s_j$. Namely, we transform our $b_i$ samples the same way, and transform our $\mathbf{a}_i$ samples by calculating $\mathbf{a}_i + (0, \ldots, r, \ldots, 0)$, where the $r$ is in the $j$-th coordinate. [3]
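A hedged Python sketch of the reduction just described, in the discretized toy model. The decision solver is abstracted as a callable `is_lwe` (a hypothetical oracle, assumed here); everything else follows the transformation above, with $q$ prime:

    import random

    n, q = 8, 97    # q must be prime so that wrong guesses randomize

    def recover_coordinate(samples, j, is_lwe):
        """Recover s_j by guessing k in Z_q; only the correct guess keeps
        the transformed samples LWE-distributed."""
        for k in range(q):
            transformed = []
            for a, b in samples:
                r = random.randrange(q)
                a2 = list(a)
                a2[j] = (a2[j] + r) % q     # add r in coordinate j
                b2 = (b + r * k) % q        # add r*k on the b side
                transformed.append((a2, b2))
            if is_lwe(transformed):         # stays LWE iff k == s_j
                return k
        return None

    def recover_secret(samples, is_lwe):
        return [recover_coordinate(samples, j, is_lwe) for j in range(n)]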

Peikert [4] showed that this reduction, with a small modification, works for any $q$ that is a product of distinct, small (polynomial in $n$) primes. The main idea is that if $q = q_1 q_2 \cdots q_t$, then for each $q_\ell$, guess and check whether $s_j$ is congruent to $0 \bmod q_\ell$, and then use the Chinese remainder theorem to recover $s_j$.

Average-case hardness

Regev [3] showed the random self-reducibility of the LWE and DLWE problems for arbitrary $q$ and $\phi$. Given samples $\{(\mathbf{a}_i, b_i)\}$ from $A_{\mathbf{s},\phi}$, it is easy to see that $\{(\mathbf{a}_i, b_i + \langle \mathbf{a}_i, \mathbf{t} \rangle / q)\}$ are samples from $A_{\mathbf{s}+\mathbf{t},\phi}$.
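A minimal Python sketch of this re-randomization step, in the discretized toy model (parameters illustrative): shifting each sample by a random $\mathbf{t}$ turns samples for the secret $\mathbf{s}$ into samples for $\mathbf{s} + \mathbf{t}$, without knowing $\mathbf{s}$.

    import random

    n, q = 8, 97

    def shift_secret(samples, t):
        """Map (a, b) -> (a, b + <a, t>), sending A_s to A_{s+t}."""
        return [(a, (b + sum(ai * ti for ai, ti in zip(a, t))) % q)
                for a, b in samples]

    t = [random.randrange(q) for _ in range(n)]   # uniformly random shift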

So, suppose there was some set $\mathcal{S} \subset \mathbb{Z}_q^n$ such that $|\mathcal{S}| / |\mathbb{Z}_q^n| = 1/\mathrm{poly}(n)$, and for distributions $A_{\mathbf{s}',\phi}$ with $\mathbf{s}' \in \mathcal{S}$, DLWE was easy.

Then there would be some distinguisher $\mathcal{A}$ who, given samples $\{(\mathbf{a}_i, b_i)\}$, could tell whether they were uniformly random or from $A_{\mathbf{s}',\phi}$. If we need to distinguish uniformly random samples from $A_{\mathbf{s},\phi}$, where $\mathbf{s}$ is chosen uniformly at random from $\mathbb{Z}_q^n$, we could simply try different values $\mathbf{t}$ sampled uniformly at random from $\mathbb{Z}_q^n$, calculate $\{(\mathbf{a}_i, b_i + \langle \mathbf{a}_i, \mathbf{t} \rangle / q)\}$, and feed these samples to $\mathcal{A}$. Since $\mathcal{S}$ comprises a non-negligible fraction of $\mathbb{Z}_q^n$, with high probability, if we choose a polynomial number of values for $\mathbf{t}$, we will find one such that $\mathbf{s} + \mathbf{t} \in \mathcal{S}$, and $\mathcal{A}$ will successfully distinguish the samples.

Thus, no such $\mathcal{S}$ can exist, meaning LWE and DLWE are (up to a polynomial factor) as hard in the average case as they are in the worst case.

Hardness results

Regev's result

For an $n$-dimensional lattice $L$, let the smoothing parameter $\eta_\varepsilon(L)$ denote the smallest $s$ such that $\rho_{1/s}(L^* \setminus \{0\}) \le \varepsilon$, where $L^*$ is the dual of $L$ and $\rho_\alpha(x) = e^{-\pi (x/\alpha)^2}$ is extended to sets by summing over function values at each element in the set. Let $D_{L,r}$ denote the discrete Gaussian distribution on $L$ of width $r$ for a lattice $L$ and real $r > 0$. The probability of each $x \in L$ is proportional to $\rho_r(x)$.

The discrete Gaussian sampling problem (DGS) is defined as follows: an instance of $\mathrm{DGS}_\varphi$ is given by an $n$-dimensional lattice $L$ and a number $r \ge \varphi(L)$. The goal is to output a sample from $D_{L,r}$. Regev shows that there is a reduction from $\mathrm{GapSVP}_{100\sqrt{n}\,\varphi(n)}$ to $\mathrm{DGS}_{\sqrt{n}\,\varphi(n)/\lambda(L^*)}$ for any function $\varphi(n)$.

Regev then shows that there exists an efficient quantum algorithm for $\mathrm{DGS}_{\sqrt{2n}\,\eta_\varepsilon(L)/\alpha}$ given access to an oracle for $\mathrm{LWE}_{q,\Psi_\alpha}$ for integer $q$ and $\alpha \in (0, 1)$ such that $\alpha q > 2\sqrt{n}$. This implies the hardness of LWE. Although the proof of this assertion works for any $q$, for creating a cryptosystem the modulus $q$ has to be polynomial in $n$.

Peikert's result

Peikert proves [4] that there is a probabilistic polynomial time reduction from the $\mathrm{GapSVP}_{\zeta,\gamma}$ problem in the worst case to solving $\mathrm{LWE}_{q,\Psi_\alpha}$ using $\mathrm{poly}(n)$ samples, for parameters $\alpha \in (0, 1)$, $\gamma(n) \ge n/(\alpha\sqrt{\log n})$, $\zeta(n) \ge \gamma(n)$, and $q \ge (\zeta(n)/\sqrt{n})\,\omega(\sqrt{\log n})$.

Use in cryptography

The LWE problem serves as a versatile problem used in the construction of several [3] [4] [6] [7] cryptosystems. In 2005, Regev [3] showed that the decision version of LWE is hard assuming quantum hardness of the lattice problems $\mathrm{GapSVP}_\gamma$ (for $\gamma$ as above) and $\mathrm{SIVP}_t$ with $t = \tilde{O}(n/\alpha)$. In 2009, Peikert [4] proved a similar result assuming only the classical hardness of the related problem $\mathrm{GapSVP}_{\zeta,\gamma}$. The disadvantage of Peikert's result is that it bases itself on a non-standard version of an easier (when compared to SIVP) problem, GapSVP.

Public-key cryptosystem

Regev [3] proposed a public-key cryptosystem based on the hardness of the LWE problem. The cryptosystem as well as the proof of security and correctness are completely classical. The system is characterized by $m$, $q$, and a probability distribution $\chi$ on $\mathbb{T}$. The setting of the parameters used in proofs of correctness and security is:

  - $q \ge 2$, usually a prime number between $n^2$ and $2n^2$,
  - $m = (1 + \varepsilon)(n + 1)\log q$ for some arbitrary constant $\varepsilon > 0$,
  - $\chi = \Psi_{\alpha(n)}$ for $\alpha(n) \in o(1/(\sqrt{n}\log n))$.

The cryptosystem is then defined by:

  - Private key: a vector $\mathbf{s} \in \mathbb{Z}_q^n$ chosen uniformly at random.
  - Public key: $m$ samples $(\mathbf{a}_i, b_i)$ chosen from $A_{\mathbf{s},\chi}$.
  - Encryption: the encryption of a bit $x \in \{0, 1\}$ is done by choosing a random subset $S$ of $[m]$ and then defining $\mathrm{Enc}(x)$ as $\left(\sum_{i \in S} \mathbf{a}_i,\; x/2 + \sum_{i \in S} b_i\right)$.
  - Decryption: the decryption of $(\mathbf{a}, b)$ is $0$ if $b - \langle \mathbf{a}, \mathbf{s} \rangle / q$ is closer to $0$ than to $1/2$ modulo one, and $1$ otherwise.

The proof of correctness follows from the choice of parameters and some probability analysis. The proof of security is by reduction to the decision version of LWE: an algorithm for distinguishing between encryptions (with the above parameters) of $0$ and $1$ can be used to distinguish between $A_{\mathbf{s},\chi}$ and the uniform distribution over $\mathbb{Z}_q^n \times \mathbb{T}$.
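A hedged Python sketch of the scheme in the discretized toy model: the bit is encoded at $\lfloor q/2 \rfloor$ rather than via the $1/2$ shift over $\mathbb{T}$ used in the formal description, and the parameters are illustrative toy values far below what the security proof requires.

    import random

    n, q, m, alpha = 8, 97, 40, 0.01     # toy parameters (illustrative only)

    def keygen():
        s = [random.randrange(q) for _ in range(n)]        # private key
        pub = []
        for _ in range(m):                                 # m LWE samples
            a = [random.randrange(q) for _ in range(n)]
            e = round(random.gauss(0, alpha * q))
            pub.append((a, (sum(x * y for x, y in zip(a, s)) + e) % q))
        return s, pub

    def encrypt(pub, bit):
        S = []
        while not S:                                       # random nonempty subset
            S = [p for p in pub if random.random() < 0.5]
        a = [sum(col) % q for col in zip(*(p[0] for p in S))]
        b = (sum(p[1] for p in S) + bit * (q // 2)) % q
        return a, b

    def decrypt(s, a, b):
        d = (b - sum(x * y for x, y in zip(a, s))) % q
        return 0 if min(d, q - d) < q // 4 else 1          # nearer to 0 or to q/2?

    s, pub = keygen()
    ct = encrypt(pub, 1)
    assert decrypt(s, *ct) == 1

Decryption succeeds with high probability because the accumulated error, a sum of at most $m$ small errors, stays well below $q/4$ for these parameter choices.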

CCA-secure cryptosystem

Peikert [4] proposed a system that is secure even against any chosen-ciphertext attack.

Key exchange

The idea of using LWE and Ring LWE for key exchange was proposed and filed at the University of Cincinnati in 2011 by Jintai Ding. The idea comes from the associativity of matrix multiplication, and the errors are used to provide the security. The paper [8] appeared in 2012, after a provisional patent application was filed the same year.

The security of the protocol is proven based on the hardness of solving the LWE problem. In 2014, Peikert presented a key-transport scheme [9] following the same basic idea as Ding's, which also uses Ding's idea of sending an additional 1-bit signal for rounding. The "New Hope" implementation, [10] selected for Google's post-quantum experiment, [11] uses Peikert's scheme with a variation in the error distribution.
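A heavily hedged Python sketch of the associativity idea only, not Ding's or Peikert's actual protocol: the reconciliation/rounding signal and all Ring-LWE structure are omitted, and all names and parameters are illustrative. Both parties derive noisy versions of $\mathbf{s}_A^T M \mathbf{s}_B$ that agree up to a small error, from which a rounding step would extract identical key bits.

    import random

    n, q = 8, 12289
    sigma = 3.0                     # width of "small" values (illustrative)

    def small():
        return round(random.gauss(0, sigma))

    # Shared public matrix M; secrets and errors are small.
    M = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
    sA = [small() for _ in range(n)]
    sB = [small() for _ in range(n)]

    # Alice publishes pA ~ sA^T M + eA; Bob publishes pB ~ M sB + eB.
    pA = [(sum(sA[i] * M[i][j] for i in range(n)) + small()) % q for j in range(n)]
    pB = [(sum(M[i][j] * sB[j] for j in range(n)) + small()) % q for i in range(n)]

    kA = sum(sA[i] * pB[i] for i in range(n)) % q   # sA^T (M sB + eB)
    kB = sum(pA[j] * sB[j] for j in range(n)) % q   # (sA^T M + eA) sB
    # kA - kB = sA^T eB - eA^T sB, which is small relative to q, so a
    # reconciliation step (the 1-bit signal mentioned above) can agree on bits.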

Ring learning with errors signature (RLWE-SIG)

Main article: Ring learning with errors signature

An RLWE version of the classic Feige–Fiat–Shamir identification protocol was created and converted to a digital signature in 2011 by Lyubashevsky. The details of this signature were extended in 2012 by Güneysu, Lyubashevsky, and Pöppelmann and published in their paper "Practical Lattice Based Cryptography – A Signature Scheme for Embedded Systems." These papers laid the groundwork for a variety of recent signature algorithms, some based directly on the ring learning with errors problem and some not tied to the same hard RLWE problems.


References

  1. Regev, Oded (2009). "On lattices, learning with errors, random linear codes, and cryptography". Journal of the ACM. 56 (6): 1–40. arXiv:2401.03703. doi:10.1145/1568318.1568324. S2CID 207156623.
  2. Lyubashevsky, Vadim; Peikert, Chris; Regev, Oded (November 2013). "On Ideal Lattices and Learning with Errors over Rings". Journal of the ACM. 60 (6): 1–35. doi:10.1145/2535925. ISSN 0004-5411. S2CID 1606347.
  3. Regev, Oded (2005). "On lattices, learning with errors, random linear codes, and cryptography". Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing. Baltimore, MD, USA: ACM. pp. 84–93. http://portal.acm.org/citation.cfm?id=1060590.1060603.
  4. Peikert, Chris (2009). "Public-key cryptosystems from the worst-case shortest vector problem: extended abstract". Proceedings of the 41st Annual ACM Symposium on Theory of Computing. Bethesda, MD, USA: ACM. pp. 333–342. http://portal.acm.org/citation.cfm?id=1536414.1536461.
  5. Peikert, Chris (2014). "Lattice Cryptography for the Internet". In Mosca, Michele (ed.). Post-Quantum Cryptography. Lecture Notes in Computer Science. Vol. 8772. Springer International Publishing. pp. 197–219. CiteSeerX 10.1.1.800.4743. doi:10.1007/978-3-319-11659-4_12. ISBN 978-3-319-11658-7. S2CID 8123895.
  6. Peikert, Chris; Waters, Brent (2008). "Lossy trapdoor functions and their applications". Proceedings of the 40th Annual ACM Symposium on Theory of Computing. Victoria, British Columbia, Canada: ACM. pp. 187–196. http://portal.acm.org/citation.cfm?id=1374406.
  7. Gentry, Craig; Peikert, Chris; Vaikuntanathan, Vinod (2008). "Trapdoors for hard lattices and new cryptographic constructions". Proceedings of the 40th Annual ACM Symposium on Theory of Computing. Victoria, British Columbia, Canada: ACM. pp. 197–206. http://portal.acm.org/citation.cfm?id=1374407.
  8. Ding, Jintai; Xie, Xiang; Lin, Xiaodong (2012). "A Simple Provably Secure Key Exchange Scheme Based on the Learning with Errors Problem". Cryptology ePrint Archive.
  9. Peikert, Chris (2014). "Lattice Cryptography for the Internet". Cryptology ePrint Archive.
  10. Alkim, Erdem; Ducas, Léo; Pöppelmann, Thomas; Schwabe, Peter (2015). "Post-quantum key exchange – a new hope". Cryptology ePrint Archive.
  11. "Experimenting with Post-Quantum Cryptography". Google Online Security Blog. Retrieved 2017-02-08.