Lattice problem

In computer science, lattice problems are a class of optimization problems related to mathematical objects called lattices. The conjectured intractability of such problems is central to the construction of secure lattice-based cryptosystems: lattice problems are an example of NP-hard problems which have been shown to be average-case hard, providing a test case for the security of cryptographic algorithms. In addition, some lattice problems which are worst-case hard can be used as a basis for extremely secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers. For applications in such cryptosystems, lattices over vector spaces (often ℚⁿ) or free modules (often ℤⁿ) are generally considered.

For all the problems below, assume that we are given (in addition to other more specific inputs) a basis for the vector space V and a norm N. The norm usually considered is the Euclidean norm L2. However, other norms (such as Lp) are also considered and show up in a variety of results. [1]

Throughout this article, let λ(L) denote the length of the shortest non-zero vector in the lattice L: that is,

λ(L) = min { ‖v‖_N : v ∈ L, v ≠ 0 }.

Shortest vector problem (SVP)

This is an illustration of the shortest vector problem (basis vectors in blue, shortest vector in red).

In the SVP, a basis of a vector space V and a norm N (often L2) are given for a lattice L and one must find the shortest non-zero vector in V, as measured by N, in L. In other words, the algorithm should output a non-zero vector v such that ‖v‖_N = λ(L).

In the γ-approximation version SVPγ, one must find a non-zero lattice vector of length at most γ·λ(L) for given γ ≥ 1.
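To make the problem statement concrete, the following sketch (our own illustration, not a standard algorithm) solves exact SVP by brute force in tiny dimensions, enumerating integer coefficient vectors inside a fixed box. In general the coefficients of the shortest vector need not lie in any small box, so the bound is an assumption that only holds for small, well-conditioned examples.

```python
from itertools import product

def shortest_vector(basis, bound=3):
    """Exact SVP by exhaustive search: try every integer combination
    sum_i c_i * b_i with |c_i| <= bound and keep the shortest non-zero
    result under the Euclidean norm.  Exponential in the dimension and
    only correct when the true shortest vector has coefficients within
    the bound, so this is strictly a toy illustration."""
    dim = len(basis[0])
    best, best_norm2 = None, float("inf")
    for coeffs in product(range(-bound, bound + 1), repeat=len(basis)):
        if all(c == 0 for c in coeffs):
            continue  # SVP asks for a *non-zero* lattice vector
        v = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(dim)]
        norm2 = sum(x * x for x in v)
        if norm2 < best_norm2:
            best, best_norm2 = v, norm2
    return best, best_norm2 ** 0.5
```

For the basis {(2, 0), (1, 2)}, for instance, the shortest non-zero lattice vector is (±2, 0), of length 2.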

Hardness results

The exact version of the problem is only known to be NP-hard under randomized reductions. [2] [3] By contrast, the corresponding problem with respect to the uniform (L∞) norm is known to be NP-hard. [4]

Algorithms for the Euclidean norm

To solve the exact version of the SVP under the Euclidean norm, several different approaches are known, which can be split into two classes: algorithms requiring superexponential time (2^{ω(n)}) and poly(n) memory, and algorithms requiring both exponential time and space (2^{Θ(n)}) in the lattice dimension n. The former class of algorithms most notably includes lattice enumeration [5] [6] [7] and random sampling reduction, [8] [9] while the latter includes lattice sieving, [10] [11] [12] computing the Voronoi cell of the lattice, [13] [14] and discrete Gaussian sampling. [15] An open problem is whether algorithms for solving exact SVP exist running in single exponential time (2^{O(n)}) and requiring memory scaling polynomially in the lattice dimension. [16]

To solve the γ-approximation version SVPγ for γ > 1 for the Euclidean norm, the best known approaches are based on lattice basis reduction. For large γ = 2^{Ω(n)}, the Lenstra–Lenstra–Lovász (LLL) algorithm can find a solution in time polynomial in the lattice dimension. For smaller values of γ, the Block Korkine–Zolotarev algorithm (BKZ) [17] [18] [19] is commonly used, where the input to the algorithm (the block size β) determines the time complexity and output quality: for large approximation factors γ, a small block size β suffices, and the algorithm terminates quickly. For small γ, larger β are needed to find sufficiently short lattice vectors, and the algorithm takes longer to find a solution. The BKZ algorithm internally uses an exact SVP algorithm as a subroutine (running in lattices of dimension at most β), and its overall complexity is closely related to the costs of these SVP calls in dimension β.
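As a concrete illustration of lattice basis reduction, here is a minimal textbook-style LLL sketch (δ = 0.75, floating-point Gram–Schmidt recomputed from scratch after every update, so it is far slower than production implementations and is intended only to show the structure of the algorithm):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(B):
    """Return the Gram-Schmidt orthogonalization B* of B and the
    projection coefficients mu[i][j] = <b_i, b*_j> / <b*_j, b*_j>."""
    n = len(B)
    Bs = [list(b) for b in B]
    mu = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            Bs[i] = [x - mu[i][j] * y for x, y in zip(Bs[i], Bs[j])]
    return Bs, mu

def lll(B, delta=0.75):
    """Minimal textbook LLL: size-reduce b_k against earlier vectors,
    then either advance k (Lovasz condition holds) or swap b_k with
    b_{k-1} and backtrack."""
    B = [list(b) for b in B]
    n = len(B)
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size reduction
            q = round(mu[k][j])
            if q != 0:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                              # Lovasz condition holds
        else:
            B[k], B[k - 1] = B[k - 1], B[k]     # swap and backtrack
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B
```

On the basis {(4, 1), (9, 2)}, whose determinant is −1 (so the lattice is all of ℤ²), the sketch returns the reduced basis {(1, 0), (0, 1)}.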

GapSVP

The problem GapSVPβ consists of distinguishing between the instances of SVP in which the length of the shortest vector is at most 1 or larger than β, where β can be a fixed function of the dimension n of the lattice. Given a basis for the lattice, the algorithm must decide whether λ(L) ≤ 1 or λ(L) > β. Like other promise problems, the algorithm is allowed to err on all other cases.

Yet another version of the problem is GapSVPζ,γ for some functions ζ and γ. The input to the algorithm is a basis B and a number d. It is assured that all the vectors in the Gram–Schmidt orthogonalization are of length at least 1, that λ(L(B)) ≤ ζ(n), and that 1 ≤ d ≤ ζ(n)/γ(n), where n is the dimension. The algorithm must accept if λ(L(B)) ≤ d, and reject if λ(L(B)) ≥ γ(n)·d. For large ζ (i.e. ζ(n) > 2^{n/2}), the problem is equivalent to GapSVPγ because [20] a preprocessing done using the LLL algorithm makes the second condition (and hence, ζ) redundant.

Closest vector problem (CVP)

This is an illustration of the closest vector problem (basis vectors in blue, external vector in green, closest vector in red).

In CVP, a basis of a vector space V and a metric M (often L2) are given for a lattice L, as well as a vector v in V but not necessarily in L. It is desired to find the vector in L closest to v (as measured by M). In the γ-approximation version CVPγ, one must find a lattice vector at distance at most γ·d(v, L), where d(v, L) denotes the distance from v to the lattice.
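A simple heuristic that conveys the flavor of approximate CVP is Babai's rounding: express the target in lattice-basis coordinates, round each coordinate to the nearest integer, and map back. The sketch below (plain Python, no external libraries; the helper names are ours) gives good answers only when the basis is nearly orthogonal, which is one reason reduced bases matter.

```python
def solve_linear(A, t):
    """Solve A x = t by Gaussian elimination with partial pivoting
    (A is a small square matrix given as a list of rows)."""
    n = len(A)
    M = [row[:] + [t[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def babai_rounding(basis, target):
    """Approximate CVP: write target = sum_i c_i b_i over the reals,
    round each c_i to the nearest integer, and return the resulting
    lattice vector.  Reliable only for nearly orthogonal bases."""
    n = len(basis)
    # Matrix whose columns are the basis vectors.
    A = [[basis[j][i] for j in range(n)] for i in range(n)]
    c = [round(x) for x in solve_linear(A, list(target))]
    return [sum(c[j] * basis[j][i] for j in range(n)) for i in range(n)]
```

With the orthogonal basis {(2, 0), (0, 3)} and target (3.2, 7.0), the coordinates are (1.6, 2.33…), which round to (2, 2), giving the closest lattice vector (4, 6).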

Relationship with SVP

The closest vector problem is a generalization of the shortest vector problem. It is easy to show that given an oracle for CVPγ, one can solve SVPγ by making a polynomial number of queries to the oracle. [21] The naive method of finding the shortest vector by calling the CVPγ oracle to find the closest vector to 0 does not work because 0 is itself a lattice vector and the algorithm could potentially output 0.

The reduction from SVPγ to CVPγ is as follows: Suppose that the input to the SVPγ problem is the basis B = [b1, …, bn] for the lattice L. Consider the basis B^i = [b1, …, 2bi, …, bn], in which bi has been doubled, and let xi be the vector returned by CVPγ(B^i, bi). The claim is that the shortest vector in the set {x1 − b1, …, xn − bn} is a solution to SVPγ on the given lattice.
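Assuming a brute-force exact CVP oracle (only viable in toy dimensions, and only correct while the relevant coefficients stay within its search bound), the reduction above can be sketched as:

```python
from itertools import product

def cvp_oracle(basis, target, bound=4):
    """Brute-force exact CVP for toy inputs: enumerate integer
    combinations with coefficients in [-bound, bound] and return the
    lattice vector closest to the target."""
    dim = len(target)
    best, best_d2 = None, float("inf")
    for coeffs in product(range(-bound, bound + 1), repeat=len(basis)):
        v = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(dim)]
        d2 = sum((x - t) ** 2 for x, t in zip(v, target))
        if d2 < best_d2:
            best, best_d2 = v, d2
    return best

def svp_via_cvp(basis):
    """SVP -> CVP reduction: for each i, double b_i to obtain B^i, ask
    the CVP oracle for the vector x_i of L(B^i) closest to b_i, and
    take the shortest difference x_i - b_i.  Each difference has an
    odd coefficient on b_i, so it is a non-zero lattice vector."""
    n = len(basis)
    candidates = []
    for i in range(n):
        Bi = [[2 * x for x in b] if j == i else list(b)
              for j, b in enumerate(basis)]
        xi = cvp_oracle(Bi, basis[i])
        candidates.append([a - b for a, b in zip(xi, basis[i])])
    return min(candidates, key=lambda v: sum(x * x for x in v))
```

On the basis {(2, 0), (1, 2)} this returns a vector of squared length 4, matching the lattice's true shortest non-zero vector (±2, 0).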

Hardness results

Goldreich et al. showed that any hardness of SVP implies the same hardness for CVP. [22] Using PCP tools, Arora et al. showed that CVP is hard to approximate within factor 2^{log^{1−ε} n} unless NP is contained in quasi-polynomial time. [23] Dinur et al. strengthened this by giving an NP-hardness result with factor n^{c/log log n} for some constant c > 0. [24]

Sphere decoding

Algorithms for CVP, especially the Fincke and Pohst variant, [6] have been used for data detection in multiple-input multiple-output (MIMO) wireless communication systems (for coded and uncoded signals). [25] [13] In this context it is called sphere decoding, due to the sphere radius used internally in many CVP solutions. [26]

It has also been applied to integer ambiguity resolution of carrier-phase GNSS (GPS), [27] where it is called the LAMBDA method. In the same field, the general CVP problem is referred to as Integer Least Squares.

GapCVP

This problem is similar to the GapSVP problem. For GapCVPβ, the input consists of a lattice basis B and a vector v, and the algorithm must answer whether there is a lattice vector at distance at most 1 from v. The opposite condition is that the closest lattice vector is at a distance greater than β from v, hence the name GapCVP.

Known results

The problem is trivially contained in NP for any approximation factor.

Schnorr, in 1987, showed that deterministic polynomial time algorithms can solve the problem for approximation factors 2^{O(n (log log n)² / log n)}. [28] Ajtai et al. showed that probabilistic algorithms can achieve a slightly better approximation factor of 2^{O(n log log n / log n)}. [10]

In 1993, Banaszczyk showed that GapCVPn is in NP ∩ coNP. [29] In 2000, Goldreich and Goldwasser showed that β = √(n/log n) puts the problem in both NP and coAM. [30] In 2005, Aharonov and Regev showed that for some constant c, the problem with β = c√n is in NP ∩ coNP. [31]

For lower bounds, Dinur et al. showed in 1998 that the problem is NP-hard for β = n^{c/log log n}. [32]

Shortest independent vectors problem (SIVP)

Given a lattice L of dimension n, the algorithm must output n linearly independent vectors v1, …, vn so that max_i ‖vi‖ ≤ min_B max_i ‖bi‖, where the right-hand side considers all bases B = [b1, …, bn] of the lattice.

In the γ-approximate version, given a lattice L with dimension n, one must find n linearly independent vectors v1, …, vn of length max_i ‖vi‖ ≤ γ(n)·λn(L), where λn(L) is the nth successive minimum of L.

Bounded distance decoding

This problem is similar to CVP. Given a vector such that its distance from the lattice is at most λ(L)/2, the algorithm must output the closest lattice vector to it.

Covering radius problem

Given a basis for the lattice, the algorithm must find the largest distance (or in some versions, its approximation) from any vector to the lattice.

Shortest basis problem

Many problems become easier if the input basis consists of short vectors. An algorithm that solves the Shortest Basis Problem (SBP) must, given a lattice basis B, output an equivalent basis B′ such that the length of the longest vector in B′ is as short as possible.

The approximation version SBPγ consists of finding a basis whose longest vector is at most γ times longer than the longest vector in the shortest basis.

Use in cryptography

Average-case hardness of problems forms a basis for proofs-of-security for most cryptographic schemes. However, experimental evidence suggests that most NP-hard problems lack this property: they are probably only worst-case hard. Many lattice problems have been conjectured or proven to be average-case hard, making them an attractive class of problems to base cryptographic schemes on. Moreover, the worst-case hardness of some lattice problems has been used to create secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers.

The above lattice problems are easy to solve if the algorithm is provided with a "good" basis. Lattice reduction algorithms aim, given a basis for a lattice, to output a new basis consisting of relatively short, nearly orthogonal vectors. The Lenstra–Lenstra–Lovász lattice basis reduction algorithm (LLL) was an early efficient algorithm for this problem which could output an almost reduced lattice basis in polynomial time. [33] This algorithm and its further refinements were used to break several cryptographic schemes, establishing its status as a very important tool in cryptanalysis. The success of LLL on experimental data led to a belief that lattice reduction might be an easy problem in practice; however, this belief was challenged in the late 1990s, when several new results on the hardness of lattice problems were obtained, starting with the result of Ajtai. [2]

In his seminal papers, Ajtai showed that the SVP problem is NP-hard under randomized reductions and discovered some connections between the worst-case complexity and average-case complexity of some lattice problems. [2] [3] Building on these results, Ajtai and Dwork created a public-key cryptosystem whose security could be proven using only the worst-case hardness of a certain version of SVP, [34] thus making it the first result to have used worst-case hardness to create secure systems. [35]

References

  1. Khot, Subhash (2005). "Hardness of approximating the shortest vector problem in lattices". J. ACM. 52 (5): 789–808. doi:10.1145/1089023.1089027. S2CID   13438130.
  2. Ajtai, M. (1996). "Generating hard instances of lattice problems". Proceedings of the Twenty-Eighth annual ACM symposium on Theory of computing. Philadelphia, Pennsylvania, United States: ACM. pp. 99–108. doi:10.1145/237814.237838. ISBN 978-0-89791-785-8. S2CID 6864824.
  3. Ajtai, Miklós (1998). "The shortest vector problem in L2 is NP-hard for randomized reductions". Proceedings of the thirtieth annual ACM symposium on Theory of computing. Dallas, Texas, United States: ACM. pp. 10–19. doi:10.1145/276698.276705. ISBN 978-0-89791-962-3. S2CID 4503998.
  4. van Emde Boas, Peter (1981). "Another NP-complete problem and the complexity of computing short vectors in a lattice". Technical Report 8104. University of Amsterdam, Department of Mathematics, Netherlands.
  5. Kannan, Ravi (1983). "Improved algorithms for integer programming and related lattice problems". Proceedings of the fifteenth annual ACM symposium on Theory of computing - STOC '83. New York, NY, USA: ACM. pp. 193–206. doi:10.1145/800061.808749. ISBN   978-0-89791-099-6. S2CID   18181112.
  6. Fincke, U.; Pohst, M. (1985). "Improved Methods for Calculating Vectors of Short Length in a Lattice, Including a Complexity Analysis". Math. Comp. 44 (170): 463–471. doi:10.1090/S0025-5718-1985-0777278-8.
  7. Gama, Nicolas; Nguyen, Phong Q.; Regev, Oded (2010-05-30). "Lattice Enumeration Using Extreme Pruning". Advances in Cryptology – EUROCRYPT 2010. Lecture Notes in Computer Science. Vol. 6110. Springer, Berlin, Heidelberg. pp. 257–278. doi:10.1007/978-3-642-13190-5_13. ISBN   978-3-642-13189-9. S2CID   1938519.
  8. Schnorr, Claus Peter (2003-02-27). "Lattice Reduction by Random Sampling and Birthday Methods". Stacs 2003. Lecture Notes in Computer Science. Vol. 2607. Springer, Berlin, Heidelberg. pp. 145–156. CiteSeerX   10.1.1.137.4293 . doi:10.1007/3-540-36494-3_14. ISBN   978-3-540-36494-8.
  9. Aono, Yoshinori; Nguyen, Phong Q. (2017-04-30). "Random Sampling Revisited: Lattice Enumeration with Discrete Pruning". Advances in Cryptology – EUROCRYPT 2017 (PDF). Lecture Notes in Computer Science. Vol. 10211. Springer, Cham. pp. 65–102. doi:10.1007/978-3-319-56614-6_3. ISBN   978-3-319-56613-9. S2CID   39082279.
  10. Ajtai, Miklós; Kumar, Ravi; Sivakumar, D. (2001). "A sieve algorithm for the shortest lattice vector problem". Proceedings of the thirty-third annual ACM symposium on Theory of computing. Hersonissos, Greece: ACM. pp. 601–610. doi:10.1145/380752.380857. ISBN 1-58113-349-9. S2CID 14982298.
  11. Micciancio, Daniele; Voulgaris, Panagiotis (2010). "Faster Exponential Time Algorithms for the Shortest Vector Problem". Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms. SODA '10. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics. pp. 1468–1480. doi: 10.1137/1.9781611973075.119 . ISBN   978-0-89871-698-6. S2CID   90084.
  12. Becker, A.; Ducas, L.; Gama, N.; Laarhoven, T. (2015-12-21). "New directions in nearest neighbor searching with applications to lattice sieving". Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics. pp. 10–24. doi:10.1137/1.9781611974331.ch2. ISBN   978-1-61197-433-1.
  13. Agrell, E.; Eriksson, T.; Vardy, A.; Zeger, K. (2002). "Closest Point Search in Lattices" (PDF). IEEE Trans. Inf. Theory. 48 (8): 2201–2214. doi:10.1109/TIT.2002.800499.
  14. Micciancio, Daniele; Voulgaris, Panagiotis (2010). "A deterministic single exponential time algorithm for most lattice problems based on voronoi cell computations". Proceedings of the forty-second ACM symposium on Theory of computing. STOC '10. New York, NY, USA: ACM. pp. 351–358. CiteSeerX   10.1.1.705.3304 . doi:10.1145/1806689.1806739. ISBN   978-1-4503-0050-6. S2CID   2449948.
  15. Aggarwal, Divesh; Dadush, Daniel; Regev, Oded; Stephens-Davidowitz, Noah (2015). "Solving the Shortest Vector Problem in 2 n Time Using Discrete Gaussian Sampling". Proceedings of the forty-seventh annual ACM symposium on Theory of Computing. STOC '15. New York, NY, USA: ACM. pp. 733–742. doi:10.1145/2746539.2746606. ISBN   978-1-4503-3536-2. S2CID   10214330.
  16. Micciancio, Daniele (2017-07-01). "Lattice Cryptography – Shortest Vector Problem".
  17. Schnorr, C. P. (1987-01-01). "A hierarchy of polynomial time lattice basis reduction algorithms". Theoretical Computer Science. 53 (2): 201–224. doi: 10.1016/0304-3975(87)90064-8 .
  18. Schnorr, C. P.; Euchner, M. (1994-08-01). "Lattice basis reduction: Improved practical algorithms and solving subset sum problems" (PDF). Mathematical Programming. 66 (1–3): 181–199. doi:10.1007/bf01581144. ISSN   0025-5610. S2CID   15386054.
  19. Chen, Yuanmi; Nguyen, Phong Q. (2011-12-04). "BKZ 2.0: Better Lattice Security Estimates". Advances in Cryptology – ASIACRYPT 2011. Lecture Notes in Computer Science. Vol. 7073. Springer, Berlin, Heidelberg. pp. 1–20. doi:10.1007/978-3-642-25385-0_1. ISBN   978-3-642-25384-3.
  20. Peikert, Chris (2009). "Public-key cryptosystems from the worst-case shortest vector problem: extended abstract". Proceedings of the 41st annual ACM symposium on Theory of Computing. Bethesda, MD, USA: ACM. pp. 333–342. doi:10.1145/1536414.1536461. ISBN   978-1-60558-506-2. S2CID   1864880.
  21. Micciancio, Daniele; Goldwasser, Shafi (2002). Complexity of Lattice Problems. Springer.
  22. Goldreich, O.; et al. (1999). "Approximating shortest lattice vectors is not harder than approximating closest lattice vectors". Inf. Process. Lett. 71 (2): 55–61. doi:10.1016/S0020-0190(99)00083-6.
  23. Arora, Sanjeev; et al. (1993). "The hardness of approximate optima in lattices, codes, and systems of linear equations". Proceedings of 1993 IEEE 34th Annual Foundations of Computer Science. pp. 317–331. doi:10.1109/SFCS.1993.366815. ISBN 978-0-8186-4370-5. S2CID 44988406.
  24. Dinur, I.; et al. (2003). "Approximating CVP to Within Almost-Polynomial Factors is NP-Hard". Combinatorica. 23 (2): 205–243. doi:10.1007/s00493-003-0019-y. S2CID   45754954.
  25. Biglieri, E.; Calderbank, R.; Constantinides, Anthony G.; Goldsmith, A.; Paulraj, A.; Poor, H. V. (2007). MIMO Wireless Communications. Cambridge: Cambridge U. P.
  26. Wang, Ping; Le-Ngoc, Tho (2011). "A List Sphere Decoding Algorithm with Improved Radius Setting Strategies". Wireless Personal Communications. 61 (1): 189–200. doi:10.1007/s11277-010-0018-4. S2CID   30919872.
  27. Hassibi, A.; Boyd, S. (1998). "Integer Parameter Estimation in Linear Models with Applications to GPS". IEEE Trans. Sig. Proc. 46 (11): 2938–2952. Bibcode:1998ITSP...46.2938H. CiteSeerX   10.1.1.114.7246 . doi:10.1109/78.726808.
  28. Schnorr, C. P. "Factoring integers and computing discrete logarithms via diophantine approximation". Advances in Cryptology – Proceedings of Eurocrypt '91.
  29. Banaszczyk, W. (1993). "New bounds in some transference theorems in the geometry of numbers". Math. Ann. 296 (1): 625–635. doi:10.1007/BF01445125. S2CID   13921988.
  30. Goldreich, Oded; Goldwasser, Shafi (1998). "On the limits of non-approximability of lattice problems". Proceedings of the thirtieth annual ACM symposium on Theory of computing. Dallas, Texas, United States: ACM. pp. 1–9. doi:10.1145/276698.276704. ISBN   0-89791-962-9. S2CID   3051993.
  31. Aharonov, Dorit; Oded Regev (2005). "Lattice problems in NP coNP". J. ACM. 52 (5): 749–765. CiteSeerX   10.1.1.205.3730 . doi:10.1145/1089023.1089025. S2CID   1669286.
  32. Dinur, I.; Kindler, G.; Safra, S. (1998). "Approximating-CVP to within Almost-Polynomial Factors is NP-Hard". Proceedings of the 39th Annual Symposium on Foundations of Computer Science. IEEE Computer Society. p. 99. ISBN   978-0-8186-9172-0.
  33. Lenstra, A. K.; Lenstra, H. W. Jr.; Lovász, L. (1982). "Factoring polynomials with rational coefficients" (PDF). Math. Ann. 261 (4): 515–534. doi:10.1007/BF01457454. S2CID   5701340. Archived from the original (PDF) on 2011-07-17.
  34. Ajtai, Miklós; Dwork, Cynthia (1997). "A public-key cryptosystem with worst-case/average-case equivalence". Proceedings of the Twenty-Ninth annual ACM symposium on Theory of computing. El Paso, Texas, United States: ACM. pp. 284–293. doi:10.1145/258533.258604. ISBN   0-89791-888-6. S2CID   9918417.
  35. Cai, Jin-Yi (2000). "The Complexity of Some Lattice Problems". Algorithmic Number Theory. Lecture Notes in Computer Science. Vol. 1838. pp. 1–32. doi:10.1007/10722028_1. ISBN   978-3-540-67695-9.
