In computer science, lattice problems are a class of optimization problems related to mathematical objects called lattices. The conjectured intractability of such problems is central to the construction of secure lattice-based cryptosystems: lattice problems are an example of NP-hard problems which have been shown to be average-case hard, providing a test case for the security of cryptographic algorithms. In addition, some lattice problems which are worst-case hard can be used as a basis for extremely secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers. For applications in such cryptosystems, lattices over vector spaces (often $\mathbb{R}^n$) or free modules (often $\mathbb{Z}^n$) are generally considered.
For all the problems below, assume that we are given (in addition to other more specific inputs) a basis for the vector space V and a norm N. The norm usually considered is the Euclidean norm $L^2$. However, other norms (such as $L^p$) are also considered and show up in a variety of results. [1]
Throughout this article, let $\lambda(L)$ denote the length of the shortest non-zero vector in the lattice L: that is, $\lambda(L) = \min_{v \in L \setminus \{0\}} \|v\|_N$.
In the SVP, a basis of a vector space V and a norm N (often $L^2$) are given for a lattice L, and one must find the shortest non-zero vector in L, as measured by N. In other words, the algorithm should output a non-zero vector $v \in L$ such that $\|v\|_N = \lambda(L)$.
In the γ-approximation version SVPγ, one must find a non-zero lattice vector of length at most $\gamma \cdot \lambda(L)$ for a given $\gamma \geq 1$.
The exact version of the problem is only known to be NP-hard under randomized reductions. [2] [3] By contrast, the corresponding problem with respect to the uniform norm ($L^\infty$) is known to be NP-hard. [4]
To solve the exact version of the SVP under the Euclidean norm, several different approaches are known, which can be split into two classes: algorithms requiring superexponential time ($2^{\omega(n)}$) and $\mathrm{poly}(n)$ memory, and algorithms requiring both exponential time and space ($2^{\Theta(n)}$) in the lattice dimension $n$. The former class of algorithms most notably includes lattice enumeration [5] [6] [7] and random sampling reduction, [8] [9] while the latter includes lattice sieving, [10] [11] [12] computing the Voronoi cell of the lattice, [13] [14] and discrete Gaussian sampling. [15] An open problem is whether algorithms for solving exact SVP exist that run in single exponential time ($2^{O(n)}$) and require memory scaling polynomially in the lattice dimension. [16]
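As a point of reference for what the exact problem asks (and not one of the algorithms cited above), the sketch below brute-forces SVP by enumerating every integer coefficient vector within a fixed bound and keeping the shortest non-zero lattice vector. The function name and the cutoff `coeff_bound` are choices made for this illustration; the bound must be taken large enough to cover a shortest vector, and the running time is exponential in the dimension.

```python
import itertools
import numpy as np

def shortest_vector_bruteforce(B, coeff_bound=3):
    """Naive exact-SVP search: try all integer coefficient vectors with
    entries in [-coeff_bound, coeff_bound] and keep the shortest non-zero
    lattice vector (rows of B are the basis vectors).  Illustration only."""
    n = B.shape[0]
    best, best_len = None, np.inf
    for coeffs in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
        if not any(coeffs):
            continue                      # skip the zero vector
        v = np.array(coeffs) @ B
        length = np.linalg.norm(v)
        if length < best_len:
            best, best_len = v, length
    return best, best_len

# Toy 2-dimensional lattice: the shortest non-zero vector here is ±(1, 1).
B = np.array([[1, 1], [0, 3]])
print(shortest_vector_bruteforce(B))
```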
To solve the γ-approximation version SVPγ for $\gamma > 1$ under the Euclidean norm, the best known approaches are based on lattice basis reduction. For large $\gamma$, the Lenstra–Lenstra–Lovász (LLL) algorithm can find a solution in time polynomial in the lattice dimension. For smaller values of $\gamma$, the Block Korkine–Zolotarev (BKZ) algorithm [17] [18] [19] is commonly used, where the input to the algorithm (the block size $\beta$) determines the time complexity and output quality: for large approximation factors $\gamma$, a small block size $\beta$ suffices, and the algorithm terminates quickly. For small $\gamma$, larger $\beta$ are needed to find sufficiently short lattice vectors, and the algorithm takes longer to find a solution. The BKZ algorithm internally uses an exact SVP algorithm as a subroutine (running in lattices of dimension at most $\beta$), and its overall complexity is closely related to the cost of these SVP calls in dimension $\beta$.
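To make the basis-reduction approach concrete, here is a minimal, unoptimized LLL sketch (it recomputes the Gram–Schmidt data from scratch after every change, which practical implementations avoid). With the classical parameter δ = 3/4, the first vector of the output basis is within a factor $2^{(n-1)/2}$ of the shortest vector; production work would instead rely on optimized libraries such as fpylll or NTL.

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """Textbook LLL reduction; rows of B are the basis vectors."""
    B = B.astype(float).copy()
    n = B.shape[0]

    def gram_schmidt(B):
        Bstar = np.zeros_like(B)
        mu = np.zeros((n, n))
        for i in range(n):
            Bstar[i] = B[i]
            for j in range(i):
                mu[i, j] = B[i] @ Bstar[j] / (Bstar[j] @ Bstar[j])
                Bstar[i] -= mu[i, j] * Bstar[j]
        return Bstar, mu

    Bstar, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):           # size-reduce b_k
            q = int(round(mu[k, j]))
            if q != 0:
                B[k] -= q * B[j]
                Bstar, mu = gram_schmidt(B)      # inefficient but simple
        if Bstar[k] @ Bstar[k] >= (delta - mu[k, k - 1] ** 2) * (Bstar[k - 1] @ Bstar[k - 1]):
            k += 1                               # Lovász condition holds
        else:
            B[[k, k - 1]] = B[[k - 1, k]]        # swap b_k and b_{k-1}
            Bstar, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

B = np.array([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
print(lll_reduce(B))    # the first row is a short lattice vector
```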
The problem GapSVPβ consists of distinguishing between the instances of SVP in which the length of the shortest vector is at most $1$ or larger than $\beta$, where $\beta$ can be a fixed function of the dimension $n$ of the lattice. Given a basis for the lattice, the algorithm must decide whether $\lambda(L) \leq 1$ or $\lambda(L) > \beta$. Like other promise problems, the algorithm is allowed to err on all other cases.
Yet another version of the problem is GapSVPζ,γ for some functions ζ and γ. The input to the algorithm is a basis $B$ and a number $d$. It is assured that all the vectors in the Gram–Schmidt orthogonalization are of length at least 1, that $\lambda(L(B)) \leq \zeta(n)$, and that $1 \leq d \leq \zeta(n)/\gamma(n)$, where $n$ is the dimension. The algorithm must accept if $\lambda(L(B)) \leq d$, and reject if $\lambda(L(B)) > \gamma(n) \cdot d$. For large $\zeta$ (i.e. $\zeta(n) > 2^{n/2}$), the problem is equivalent to GapSVPγ because [20] a preprocessing done using the LLL algorithm makes the second condition (and hence, $\zeta$) redundant.
In CVP, a basis of a vector space V and a metric M (often $L^2$) are given for a lattice L, as well as a vector v in V but not necessarily in L. It is desired to find the vector in L closest to v (as measured by M). In the γ-approximation version CVPγ, one must find a lattice vector at distance at most $\gamma \cdot \mathrm{dist}(v, L)$.
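As a concrete (heuristic) illustration of approximating CVP, the following sketch applies Babai's round-off method: write the target in basis coordinates, round each coordinate to the nearest integer, and map back into the lattice. This is a toy, not an exact CVP solver, and the quality of the answer depends strongly on how reduced the basis is.

```python
import numpy as np

def babai_round_off(B, t):
    """Babai's round-off heuristic for approximate CVP.
    Rows of B are the basis vectors; t is the target point."""
    coords = np.linalg.solve(B.T, t)   # real coordinates of t in the basis
    return np.rint(coords) @ B         # round to integers, map back to the lattice

B = np.array([[1.0, 1.0], [0.0, 3.0]])
t = np.array([2.4, 3.3])
v = babai_round_off(B, t)
print(v, np.linalg.norm(t - v))        # a nearby (not necessarily closest) lattice vector
```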
The closest vector problem is a generalization of the shortest vector problem. It is easy to show that given an oracle for CVPγ (defined above), one can solve SVPγ by making some queries to the oracle. [21] The naive method of finding the shortest vector by calling the CVPγ oracle on the target 0 does not work, because 0 is itself a lattice vector and the oracle could simply output 0.
The reduction from SVPγ to CVPγ is as follows: Suppose that the input to SVPγ is the basis $B = [b_1, b_2, \ldots, b_n]$ of the lattice $L$. For each $i$, consider the basis $B^i = [b_1, \ldots, 2b_i, \ldots, b_n]$ and let $x_i$ be the vector returned by CVPγ($B^i$, $b_i$). The claim is that the shortest vector in the set $\{x_i - b_i : 1 \leq i \leq n\}$ solves SVPγ on the given lattice. The reason is that $b_i \notin L(B^i)$, so each $x_i - b_i$ is a non-zero vector of $L$, while for at least one index $i$ the shortest vector of $L$ has an odd $i$-th coefficient, which guarantees that the distance from $b_i$ to $L(B^i)$ is at most $\lambda(L)$.
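The reduction can be spelled out in a few lines; the CVP oracle below is a toy bounded enumeration standing in for the CVPγ oracle of the statement (the function names and the enumeration bound are choices made only for this sketch, and the oracle is exponential-time).

```python
import itertools
import numpy as np

def cvp_oracle(B, t, coeff_bound=4):
    """Toy exact-CVP oracle by bounded enumeration (illustration only)."""
    n = B.shape[0]
    best, best_dist = None, np.inf
    for c in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
        v = np.array(c) @ B
        d = np.linalg.norm(v - t)
        if d < best_dist:
            best, best_dist = v, d
    return best

def svp_via_cvp(B):
    """SVP from a CVP oracle: double b_i, ask for the vector of L(B^i)
    closest to b_i, and return the shortest difference x_i - b_i."""
    candidates = []
    for i in range(B.shape[0]):
        Bi = B.copy()
        Bi[i] = 2 * B[i]               # b_i is no longer in L(B^i)
        xi = cvp_oracle(Bi, B[i])
        candidates.append(xi - B[i])   # a non-zero vector of L(B)
    return min(candidates, key=np.linalg.norm)

B = np.array([[1, 1], [0, 3]])
print(svp_via_cvp(B))                  # a shortest vector, here ±(1, 1)
```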
Goldreich et al. showed that any hardness of SVP implies the same hardness for CVP. [22] Using PCP tools, Arora et al. showed that CVP is hard to approximate within factor $2^{\log^{1-\epsilon} n}$ unless $\mathrm{NP} \subseteq \mathrm{QP}$ (quasi-polynomial time). [23] Dinur et al. strengthened this by giving an NP-hardness result for approximation factors up to $n^{c/\log\log n}$ for some constant $c > 0$. [24]
Algorithms for CVP, especially the Fincke and Pohst variant, [6] have been used for data detection in multiple-input multiple-output (MIMO) wireless communication systems (for coded and uncoded signals). [25] [13] In this context it is called sphere decoding, due to the sphere radius used internally in many CVP solvers. [26]
CVP has also been applied to integer ambiguity resolution of carrier-phase GNSS (GPS), [27] where it is called the LAMBDA method. In the same field, the general CVP problem is referred to as Integer Least Squares.
This problem is similar to the GapSVP problem. For GapCVPβ, the input consists of a lattice basis and a vector v, and the algorithm must answer whether one of the following holds: there is a lattice vector at distance at most $1$ from v, or every lattice vector is at distance greater than $\beta$ from v. In the remaining case, the closest lattice vector is at a distance between $1$ and $\beta$; this is the gap on which the algorithm may err, hence the name GapCVP.
The problem is trivially contained in NP for any approximation factor.
Schnorr, in 1987, showed that deterministic polynomial time algorithms can solve the problem for approximation factors $\gamma = 2^{O(n (\log\log n)^2 / \log n)}$. [28] Ajtai et al. showed that probabilistic algorithms can achieve a slightly better approximation factor of $\gamma = 2^{O(n \log\log n / \log n)}$. [10]
In 1993, Banaszczyk showed that GapCVPn is in $\mathrm{NP} \cap \mathrm{coNP}$. [29] In 2000, Goldreich and Goldwasser showed that $\gamma = \sqrt{n/\log n}$ puts the problem in both NP and coAM. [30] In 2005, Aharonov and Regev showed that for some constant $c > 0$, the problem with $\gamma = c\sqrt{n}$ is in $\mathrm{NP} \cap \mathrm{coNP}$. [31]
For lower bounds, Dinur et al. showed in 1998 that the problem is NP-hard for $\gamma = n^{c/\log\log n}$. [32]
Given a lattice L of dimension n, the algorithm must output n linearly independent vectors $v_1, v_2, \ldots, v_n$ so that $\max_i \|v_i\| \leq \max_i \|b_i\|$, where the right-hand side considers all bases $(b_1, \ldots, b_n)$ of the lattice.
In the γ-approximate version, given a lattice L with dimension n, one must find n linearly independent vectors $v_1, \ldots, v_n$ of length $\max_i \|v_i\| \leq \gamma \cdot \lambda_n(L)$, where $\lambda_n(L)$ is the $n$-th successive minimum of $L$.
This problem is similar to CVP. Given a vector such that its distance from the lattice is at most $\lambda(L)/2$, the algorithm must output the closest lattice vector to it.
Given a basis for the lattice, the algorithm must find the largest distance (or in some versions, its approximation) from any vector to the lattice.
Many problems become easier if the input basis consists of short vectors. An algorithm that solves the Shortest Basis Problem (SBP) must, given a lattice basis $B$, output an equivalent basis $B'$ such that the length of the longest vector in $B'$ is as short as possible.
The approximation version SBPγ consists of finding a basis whose longest vector is at most $\gamma$ times longer than the longest vector in the shortest basis.
Average-case hardness of problems forms a basis for proofs of security for most cryptographic schemes. However, experimental evidence suggests that most NP-hard problems lack this property: they are probably only worst-case hard. Many lattice problems have been conjectured or proven to be average-case hard, making them an attractive class of problems on which to base cryptographic schemes. Moreover, the worst-case hardness of some lattice problems has been used to create secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers.
The above lattice problems are easy to solve if the algorithm is provided with a "good" basis. Lattice reduction algorithms aim, given a basis for a lattice, to output a new basis consisting of relatively short, nearly orthogonal vectors. The Lenstra–Lenstra–Lovász lattice basis reduction algorithm (LLL) was an early efficient algorithm for this problem which could output an almost reduced lattice basis in polynomial time. [33] This algorithm and its further refinements were used to break several cryptographic schemes, establishing its status as a very important tool in cryptanalysis. The success of LLL on experimental data led to a belief that lattice reduction might be an easy problem in practice; however, this belief was challenged in the late 1990s, when several new results on the hardness of lattice problems were obtained, starting with the result of Ajtai. [2]
In his seminal papers, Ajtai showed that the SVP problem was NP-hard and discovered some connections between the worst-case complexity and average-case complexity of some lattice problems. [2] [3] Building on these results, Ajtai and Dwork created a public-key cryptosystem whose security could be proven using only the worst-case hardness of a certain version of SVP, [34] thus making it the first result to have used worst-case hardness to create secure systems. [35]
Geometry of numbers is the part of number theory which uses geometry for the study of algebraic numbers. Typically, a ring of algebraic integers is viewed as a lattice in $\mathbb{R}^n$, and the study of these lattices provides fundamental information on algebraic numbers. The geometry of numbers was initiated by Hermann Minkowski.
In geometry and group theory, a lattice in the real coordinate space $\mathbb{R}^n$ is an infinite set of points in this space with the properties that coordinate-wise addition or subtraction of two points in the lattice produces another lattice point, that the lattice points are all separated by some minimum distance, and that every point in the space is within some maximum distance of a lattice point. Closure under addition and subtraction means that a lattice must be a subgroup of the additive group of the points in the space, and the requirements of minimum and maximum distance can be summarized by saying that a lattice is a Delone set. More abstractly, a lattice can be described as a free abelian group of dimension $n$ which spans the vector space $\mathbb{R}^n$. For any basis of $\mathbb{R}^n$, the subgroup of all linear combinations with integer coefficients of the basis vectors forms a lattice, and every lattice can be formed from a basis in this way. A lattice may be viewed as a regular tiling of a space by a primitive cell.
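A minimal sketch of the "integer combinations of a basis" description above: the points of the lattice spanned by the rows of a basis matrix are exactly the integer linear combinations of those rows (only a small window of coefficients is enumerated here).

```python
import itertools
import numpy as np

B = np.array([[2, 1], [1, 3]])         # rows are the basis vectors
window = range(-2, 3)                  # small window of integer coefficients
points = sorted({tuple(np.array(c) @ B) for c in itertools.product(window, repeat=2)})
print(points)                          # a finite patch of the (infinite) lattice
```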
In mathematics, the goal of lattice basis reduction is to find a basis with short, nearly orthogonal vectors when given an integer lattice basis as input. This is realized using different algorithms; finding strongly reduced bases takes time at least exponential in the dimension of the lattice, while weaker but still useful reductions, such as LLL, can be computed in polynomial time.
In computational complexity theory, a computational hardness assumption is the hypothesis that a particular problem cannot be solved efficiently. It is not known how to prove (unconditional) hardness for essentially any useful problem. Instead, computer scientists rely on reductions to formally relate the hardness of a new or complicated problem to a computational hardness assumption about a problem that is better-understood.
Lattice-based cryptography is the generic term for constructions of cryptographic primitives that involve lattices, either in the construction itself or in the security proof. Lattice-based constructions support important standards of post-quantum cryptography. Unlike more widely used and known public-key schemes such as the RSA, Diffie-Hellman or elliptic-curve cryptosystems — which could, theoretically, be defeated using Shor's algorithm on a quantum computer — some lattice-based constructions appear to be resistant to attack by both classical and quantum computers. Furthermore, many lattice-based constructions are considered to be secure under the assumption that certain well-studied computational lattice problems cannot be solved efficiently.
In cryptography, learning with errors (LWE) is a mathematical problem that is widely used to create secure encryption algorithms. It is based on the idea of representing secret information as a set of equations with errors. In other words, LWE is a way to hide the value of a secret by introducing noise to it. In more technical terms, it refers to the computational problem of inferring a linear $n$-ary function $f$ over a finite ring from given samples $y_i = f(x_i)$, some of which may be erroneous. The LWE problem is conjectured to be hard to solve, and thus to be useful in cryptography.
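The sketch below generates a toy LWE instance to make the "noisy linear equations" description concrete. The parameters n, m, q and the error range are illustrative assumptions only and are far too small to be secure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy LWE instance: samples (a_i, b_i) with b_i = <a_i, s> + e_i (mod q).
n, m, q = 8, 16, 97                    # secret dimension, number of samples, modulus
s = rng.integers(0, q, size=n)         # the secret vector
A = rng.integers(0, q, size=(m, n))    # uniformly random sample vectors a_i (rows)
e = rng.integers(-2, 3, size=m)        # small error terms
b = (A @ s + e) % q                    # noisy inner products

# Search-LWE asks to recover s from (A, b) alone.  Without the errors e this
# would be plain linear algebra mod q; the noise is what makes it hard.
print(A[:2], b[:2])
```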
In discrete mathematics, ideal lattices are a special class of lattices and a generalization of cyclic lattices. Ideal lattices naturally occur in many parts of number theory, but also in other areas. In particular, they have a significant place in cryptography. Micciancio defined a generalization of cyclic lattices as ideal lattices. They can be used in cryptosystems to decrease by a square root the number of parameters necessary to describe a lattice, making them more efficient. Ideal lattices are a new concept, but similar lattice classes have been used for a long time. For example, cyclic lattices, a special case of ideal lattices, are used in NTRUEncrypt and NTRUSign.
Short integer solution (SIS) and ring-SIS problems are two average-case problems that are used in lattice-based cryptography constructions. Lattice-based cryptography began in 1996 with a seminal work by Miklós Ajtai, who presented a family of one-way functions based on the SIS problem. He showed that it is secure in an average case if the shortest vector problem SVPγ (where $\gamma = n^c$ for some constant $c > 0$) is hard in a worst-case scenario.
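To illustrate what an SIS instance looks like, the sketch below builds a toy instance and brute-forces a short solution over {-1, 0, 1} coefficient vectors. The parameters are illustrative assumptions chosen so small that a solution is guaranteed and easy to find; realistic instances are believed to be hard.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Toy SIS instance: given a uniformly random matrix A mod q, find a short
# non-zero integer vector z with A z = 0 (mod q).
n, m, q = 2, 8, 7
A = rng.integers(0, q, size=(n, m))

# Brute-force search over {-1, 0, 1}-vectors -- exponential in m.
for z in itertools.product((-1, 0, 1), repeat=m):
    z = np.array(z)
    if z.any() and not ((A @ z) % q).any():
        print("short solution:", z, "norm:", np.linalg.norm(z))
        break
```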