Lattice reduction

[Figure: Lattice reduction in two dimensions. The black vectors are the given basis for the lattice (represented by blue dots); the red vectors are the reduced basis.]

In mathematics, the goal of lattice basis reduction is to find a basis with short, nearly orthogonal vectors when given an integer lattice basis as input. This is realized using different algorithms, whose running time is usually at least exponential in the dimension of the lattice.


Nearly orthogonal

One measure of near-orthogonality is the orthogonality defect. This compares the product of the lengths of the basis vectors with the volume of the parallelepiped they define. For perfectly orthogonal basis vectors, these quantities would be the same.

Any particular basis of $n$ vectors may be represented by a matrix $B$, whose columns are the basis vectors $b_i$. In the fully dimensional case, where the number of basis vectors is equal to the dimension of the space they occupy, this matrix is square, and the volume of the fundamental parallelepiped is simply the absolute value of the determinant of this matrix, $|\det B|$. If the number of vectors is less than the dimension of the underlying space, then the volume is $\sqrt{\det(B^{\mathsf T} B)}$. For a given lattice $\Lambda$, this volume is the same (up to sign) for any basis, and hence is referred to as the determinant of the lattice, $\det(\Lambda)$, or the lattice constant $d(\Lambda)$.

The orthogonality defect is the product of the basis vector lengths divided by the parallelepiped volume:

$$\delta(B) = \frac{\prod_{i=1}^{n} \|b_i\|}{\sqrt{\det(B^{\mathsf T} B)}} = \frac{\prod_{i=1}^{n} \|b_i\|}{d(\Lambda)}.$$

From the geometric definition it may be appreciated that $\delta(B) \geq 1$, with equality if and only if the basis is orthogonal.
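
As a concrete illustration, the defect can be computed directly from this definition. The following is a minimal sketch using NumPy; the helper name orthogonality_defect is ours:

    # Sketch: computing the orthogonality defect directly from the definition.
    import numpy as np

    def orthogonality_defect(B):
        """B: matrix whose columns are the basis vectors."""
        lengths = np.linalg.norm(B, axis=0)        # ||b_i|| for each column
        volume = np.sqrt(np.linalg.det(B.T @ B))   # parallelepiped volume
        return np.prod(lengths) / volume

    B_orth = np.array([[2.0, 0.0],
                       [0.0, 3.0]])                # orthogonal basis
    B_skew = np.array([[1.0, 4.0],
                       [0.0, 1.0]])                # skewed basis
    print(orthogonality_defect(B_orth))            # 1.0
    print(orthogonality_defect(B_skew))            # sqrt(17), about 4.12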

If the lattice reduction problem is defined as finding the basis with the smallest possible defect, then the problem is NP-hard[citation needed]. However, there exist polynomial-time algorithms to find a basis with defect $\delta(B) \le c$, where $c$ is some constant depending only on the number of basis vectors and the dimension of the underlying space (if different)[citation needed]. This is a good enough solution in many practical applications[citation needed].

In two dimensions

For a basis consisting of just two vectors, there is a simple and efficient method of reduction closely analogous to the Euclidean algorithm for the greatest common divisor of two integers. As with the Euclidean algorithm, the method is iterative; at each step the larger of the two vectors is reduced by adding or subtracting an integer multiple of the smaller vector.

The pseudocode of the algorithm, often known as Lagrange's algorithm or the Lagrange–Gauss algorithm, is as follows:

    Input:  a basis u, v for the lattice L. Assume that ‖v‖ ≤ ‖u‖, otherwise swap them.
    Output: a basis u, v with ‖u‖ ≤ ‖v‖.
    While ‖v‖ < ‖u‖:
        swap u and v
        q ← ⌊⟨u, v⟩ / ⟨u, u⟩⌉    # Round to nearest integer
        v ← v − q·u

See the section on Lagrange's algorithm in [1] for further details.
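
A minimal Python sketch of this procedure, following the pseudocode above (the function name lagrange_gauss is ours; for very large entries the floating-point quotient should be replaced by exact rational rounding):

    def lagrange_gauss(u, v):
        """Lagrange-Gauss reduction of a 2D lattice basis given as integer tuples."""
        def dot(a, b):
            return a[0] * b[0] + a[1] * b[1]
        if dot(u, u) < dot(v, v):                 # ensure ||v|| <= ||u||, as assumed
            u, v = v, u
        while dot(v, v) < dot(u, u):
            u, v = v, u                           # u becomes the shorter vector
            q = round(dot(u, v) / dot(u, u))      # round to nearest integer
            v = (v[0] - q * u[0], v[1] - q * u[1])
        return u, v

    # Example: reducing a skewed basis of Z^2
    print(lagrange_gauss((1, 1), (3, 4)))         # ((-1, 0), (0, 1)), up to signs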

Applications

Lattice reduction algorithms are used in a number of modern number-theoretic applications, including in the discovery of a spigot algorithm for $\pi$. Although determining the shortest basis is possibly an NP-complete problem, algorithms such as the LLL algorithm [2] can find a short (not necessarily shortest) basis in polynomial time with guaranteed worst-case performance. LLL is widely used in the cryptanalysis of public key cryptosystems.
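
To make the flavor of such algorithms concrete, the following is a minimal, self-contained Python sketch of LLL reduction with the standard parameter δ = 3/4, using exact rational arithmetic. The names lll and gram_schmidt are ours, and the Gram–Schmidt data is recomputed wholesale for clarity; a production implementation such as fplll maintains it incrementally.

    # Minimal LLL sketch with exact rational arithmetic (delta = 3/4).
    from fractions import Fraction

    def dot(u, v):
        return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

    def gram_schmidt(B):
        """Gram-Schmidt orthogonalization; returns orthogonal vectors and mu coefficients."""
        n = len(B)
        ortho = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = [Fraction(x) for x in B[i]]
            for j in range(i):
                mu[i][j] = dot(B[i], ortho[j]) / dot(ortho[j], ortho[j])
                v = [vi - mu[i][j] * oj for vi, oj in zip(v, ortho[j])]
            ortho.append(v)
        return ortho, mu

    def lll(B, delta=Fraction(3, 4)):
        """Return an LLL-reduced basis for the lattice spanned by the rows of B."""
        B = [list(b) for b in B]
        n = len(B)
        ortho, mu = gram_schmidt(B)
        k = 1
        while k < n:
            # Size-reduce b_k against b_{k-1}, ..., b_0.
            for j in range(k - 1, -1, -1):
                q = round(mu[k][j])
                if q:
                    B[k] = [bk - q * bj for bk, bj in zip(B[k], B[j])]
                    ortho, mu = gram_schmidt(B)
            # Lovasz condition: advance if satisfied, otherwise swap and back up.
            if dot(ortho[k], ortho[k]) >= (delta - mu[k][k - 1] ** 2) * dot(ortho[k - 1], ortho[k - 1]):
                k += 1
            else:
                B[k], B[k - 1] = B[k - 1], B[k]
                ortho, mu = gram_schmidt(B)
                k = max(k - 1, 1)
        return B

    # Example: the reduced basis consists of short, nearly orthogonal vectors.
    print(lll([[1, -1, 3], [1, 0, 5], [1, 2, 6]]))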

When used to find integer relations, a typical input to the algorithm is an identity matrix augmented by a final column whose entries are the numbers between which the relation is sought, each multiplied by a large positive constant to penalize combinations that do not sum to (nearly) zero.
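
For instance, using the lll sketch above, a small integer relation can be recovered for the powers of the golden ratio φ, which satisfy φ² = φ + 1; the constant C below plays the role of the large penalizing constant:

    # Build the augmented identity matrix for x = (1, phi, phi^2) and reduce it.
    phi = (1 + 5 ** 0.5) / 2
    xs = [1.0, phi, phi ** 2]
    C = 10 ** 6                        # large constant penalizing non-relations
    rows = [[1 if j == i else 0 for j in range(len(xs))] + [round(C * x)]
            for i, x in enumerate(xs)]
    reduced = lll(rows)
    # The first reduced vector should be +-(1, 1, -1, 0): 1 + phi - phi^2 = 0.
    print(reduced[0])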

The LLL algorithm for computing a nearly-orthogonal basis was used to show that integer programming in any fixed dimension can be done in polynomial time. [3]

Algorithms

The following algorithms reduce lattice bases; several public implementations of these algorithms are also listed.

Year   Algorithm                                   Implementation
1773   Lagrange/Gauss reduction for 2D lattices    —
1982   Lenstra–Lenstra–Lovász reduction            NTL, fplll
1987   Block Korkine–Zolotarev [4]                 NTL, fplll
1993   Seysen reduction [5]                        —
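
In practice these implementations are called through their library interfaces. A usage sketch, assuming the fpylll Python bindings to fplll and their IntegerMatrix / LLL.reduction interface:

    # Usage sketch; assumes the fpylll package (Python bindings to fplll).
    from fpylll import IntegerMatrix, LLL

    A = IntegerMatrix.from_matrix([[1, -1, 3],
                                   [1, 0, 5],
                                   [1, 2, 6]])   # rows are the basis vectors
    LLL.reduction(A)                             # reduces A in place
    print(A)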


References

  1. Nguyen, Phong Q. (2009). "Hermite's Constant and Lattice Algorithms". The LLL Algorithm. Information Security and Cryptography. Berlin, Heidelberg: Springer. pp. 19–69. doi:10.1007/978-3-642-02295-1_2. ISBN 978-3-642-02294-4. ISSN 1619-7100.
  2. Lenstra, A. K.; Lenstra, H. W. Jr.; Lovász, L. (1982). "Factoring polynomials with rational coefficients". Mathematische Annalen. 261 (4): 515–534. CiteSeerX 10.1.1.310.318. doi:10.1007/BF01457454. hdl:1887/3810. MR 0682664. S2CID 5701340.
  3. Lenstra, H. W. Jr. (1983). "Integer programming with a fixed number of variables". Mathematics of Operations Research. 8 (4): 538–548. CiteSeerX 10.1.1.431.5444. doi:10.1287/moor.8.4.538.
  4. Hanrot, Guillaume; Stehlé, Damien (2008). "Worst-Case Hermite–Korkine–Zolotarev Reduced Lattice Bases". arXiv:0801.3331 [math.NT].
  5. Seysen, Martin (September 1993). "Simultaneous reduction of a lattice basis and its reciprocal basis". Combinatorica. 13 (3): 363–376. doi:10.1007/BF01202355. S2CID 206791637.