Dual lattice

In the theory of lattices, the dual lattice is a construction analogous to that of a dual vector space. In certain respects, the geometry of the dual lattice $L^*$ of a lattice $L$ is the reciprocal of the geometry of $L$, a perspective which underlies many of its uses.

Dual lattices have many applications within lattice theory, theoretical computer science, cryptography and mathematics more broadly. For instance, the dual lattice appears in the statement of the Poisson summation formula, transference theorems relate the geometry of a lattice to that of its dual, and many lattice algorithms exploit the dual lattice.

For an article with emphasis on physics and chemistry applications, see Reciprocal lattice. This article focuses on the mathematical notion of a dual lattice.

Definition

Let $L \subseteq \mathbb{R}^n$ be a lattice. That is, $L = B\mathbb{Z}^k$ for some matrix $B \in \mathbb{R}^{n \times k}$ with linearly independent columns.

The dual lattice $L^*$ is the set of linear functionals on $\operatorname{span}(L)$ which take integer values on each point of $L$:

$L^* = \{ f \in (\operatorname{span}(L))^* : f(x) \in \mathbb{Z} \text{ for all } x \in L \}.$

If $(\mathbb{R}^n)^*$ is identified with $\mathbb{R}^n$ using the dot product, we can write $L^* = \{ v \in \operatorname{span}(L) : \langle v, x \rangle \in \mathbb{Z} \text{ for all } x \in L \}.$ It is important to restrict to vectors in the span of $L$; otherwise the resulting object is not a lattice.
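
To make the dot-product description concrete, here is a minimal numerical sketch (an illustration, not part of the original article; the 2×2 basis below is an arbitrary choice) checking that every dual vector pairs integrally with every lattice vector. It uses the standard fact, listed among the properties below, that for a full-rank basis matrix $B$ the columns of $(B^{-1})^T$ give a basis of the dual.

    # Minimal check of the dot-product definition of the dual lattice.
    # Assumption: a full-rank example lattice in R^2 with an arbitrary basis B.
    import numpy as np

    B = np.array([[2.0, 1.0],
                  [0.0, 3.0]])   # columns form a basis of L = B Z^2
    D = np.linalg.inv(B).T       # columns form a basis of the dual lattice L*

    # Each dual basis vector must take integer values on every lattice point;
    # by linearity it suffices to check the pairings with the primal basis.
    pairings = D.T @ B           # entry (i, j) = <d_i, b_j>
    assert np.allclose(pairings, np.round(pairings))
    print(pairings)              # the identity matrix for this choice of D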

Despite this identification of ambient Euclidean spaces, it should be emphasized that a lattice and its dual are fundamentally different kinds of objects; one consists of vectors in Euclidean space, and the other consists of a set of linear functionals on that space. Along these lines, one can also give a more abstract definition as follows:

$L^* = \{ f : L \to \mathbb{Z} \ \text{a group homomorphism} \} = \operatorname{Hom}(L, \mathbb{Z}).$

However, we note that the dual is not considered just as an abstract Abelian group of functionals, but comes with a natural inner product: $\langle f, g \rangle = \sum_i f(e_i)\, g(e_i)$, where $\{e_i\}$ is an orthonormal basis of $\operatorname{span}(L)$. (Equivalently, one can declare that, for an orthonormal basis $\{e_i\}$ of $\operatorname{span}(L)$, the dual vectors $e_i^*$, defined by $e_i^*(e_j) = \delta_{ij}$, are an orthonormal basis.) One of the key uses of duality in lattice theory is the relationship of the geometry of the primal lattice with the geometry of its dual, for which we need this inner product. In the concrete description given above, the inner product on the dual is generally implicit.

Properties

We list some elementary properties of the dual lattice:

  - If $B$ is a matrix whose columns form a basis of $L$, then $z \in \operatorname{span}(L)$ satisfies $z \in L^*$ if and only if $B^T z$ has integer entries.
  - If $B$ is a basis matrix for $L$, then $B(B^T B)^{-1}$ is a basis matrix for $L^*$; when $L$ is full rank, this simplifies to $(B^{-1})^T$ (see the sketch below).
  - $(L^*)^* = L$.
  - If $L$ is full rank, then $\det(L^*) = 1/\det(L)$.
  - For a nonzero scalar $\alpha$, $(\alpha L)^* = \tfrac{1}{\alpha} L^*$.
  - If $L_1 \subseteq L_2$ are lattices of the same rank, then $L_2^* \subseteq L_1^*$.
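
As a quick sanity check of the basis formula and the determinant identity above (a sketch, not part of the original article; the basis is an arbitrary full-rank example), the following computes a dual basis via $B(B^T B)^{-1}$ and verifies numerically that the determinants are reciprocal and that applying the formula twice returns the original basis, reflecting $(L^*)^* = L$.

    # Verify the dual-basis formula B (B^T B)^{-1} and det(L) * det(L*) = 1
    # for an arbitrary full-rank example basis.
    import numpy as np

    B = np.array([[1.0, 2.0],
                  [0.0, 5.0]])            # basis of L
    D = B @ np.linalg.inv(B.T @ B)        # basis of L*

    # For a full-rank lattice, det(L) = |det(B)|; the determinants are reciprocal.
    assert np.isclose(abs(np.linalg.det(B)) * abs(np.linalg.det(D)), 1.0)

    # Applying the formula to the dual basis recovers the original basis,
    # reflecting the property (L*)* = L.
    assert np.allclose(D @ np.linalg.inv(D.T @ D), B)
    print("dual basis:\n", D)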

Examples

Using the properties listed above, the dual of a lattice can be efficiently calculated, by hand or computer. Certain lattices with importance in mathematics and computer science are dual to each other, and we list some here.

Elementary examples

The dual of the integer lattice $\mathbb{Z}^n$ is $\mathbb{Z}^n$ itself. More generally, by the scaling property above, $(\alpha \mathbb{Z}^n)^* = \tfrac{1}{\alpha}\mathbb{Z}^n$ for any nonzero scalar $\alpha$; for example, the dual of $2\mathbb{Z}^n$ is $\tfrac{1}{2}\mathbb{Z}^n$.

q-ary lattices

An important class of examples, particularly in lattice-based cryptography, is given by the q-ary lattices. For a matrix $A \in \mathbb{Z}_q^{k \times n}$ we define

$\Lambda_q(A) = \{ x \in \mathbb{Z}^n : x \equiv A^T y \pmod{q} \text{ for some } y \in \mathbb{Z}^k \} \quad \text{and} \quad \Lambda_q^\perp(A) = \{ x \in \mathbb{Z}^n : Ax \equiv 0 \pmod{q} \};$

these are called, respectively, the image and kernel q-ary lattices associated to $A$. Then, after identifying Euclidean space with its dual, the image and kernel q-ary lattices of a matrix are dual, up to a scalar. In particular, $\Lambda_q(A)^* = \tfrac{1}{q}\Lambda_q^\perp(A)$ and $\Lambda_q^\perp(A)^* = \tfrac{1}{q}\Lambda_q(A)$.[citation needed] (The proof can be done as an exercise.)
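
The duality claim can be sanity-checked numerically. The sketch below (illustrative only; the small matrix $A$ and modulus $q$ are arbitrary choices) verifies the necessary condition that every vector of $\Lambda_q^\perp(A)$, once scaled by $1/q$, has an integer inner product with every vector of $\Lambda_q(A)$, i.e. that $x \cdot z \equiv 0 \pmod{q}$ for $x \in \Lambda_q(A)$ and $z \in \Lambda_q^\perp(A)$.

    # Sanity check of the q-ary duality: vectors of the image lattice and of the
    # kernel lattice pair to multiples of q, so scaling the kernel lattice by 1/q
    # gives integer inner products with the image lattice.
    import itertools
    import numpy as np

    q = 7
    A = np.array([[1, 2, 3],
                  [4, 5, 6]]) % q         # arbitrary example matrix over Z_q
    k, n = A.shape
    rng = np.random.default_rng(0)

    # Members of the image lattice: x = A^T y mod q, shifted by multiples of q.
    image_pts = [(A.T @ np.array(y)) % q + q * rng.integers(-2, 3, size=n)
                 for y in itertools.product(range(q), repeat=k)]

    # Members of the kernel lattice: integer vectors x with A x = 0 mod q.
    kernel_pts = [np.array(x) for x in itertools.product(range(-q, q), repeat=n)
                  if np.all((A @ np.array(x)) % q == 0)]

    for x in image_pts:
        for z in kernel_pts:
            assert int(x @ z) % q == 0    # so <x, z/q> is an integer
    print(f"checked {len(image_pts) * len(kernel_pts)} pairings")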

Transference theorems

Each nonzero $f \in L^*$ partitions $L$ according to the level sets corresponding to the integer values $f$ takes on $L$. Smaller choices of $f$ (that is, dual vectors of smaller norm) produce level sets with more distance between them; in particular, the distance between consecutive layers is $1/\|f\|$. Reasoning this way, one can show that finding short vectors in $L^*$ provides a lower bound on the covering radius of $L$, that is, on the radius of the largest open ball that can be placed amid the points of $L$ without containing any of them. In general, theorems relating the properties of a lattice to properties of its dual are known as transference theorems. In this section we explain some of them, along with some consequences for complexity theory.
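
The layer picture can be illustrated numerically. In the sketch below (an added illustration, not from the original article; the 2D basis is an arbitrary example), lattice points are grouped by the integer value a fixed dual vector $f$ takes on them, and the resulting layers are spaced $1/\|f\|$ apart.

    # Illustrate how a dual vector f partitions L into layers {x in L : <f, x> = k}
    # and that consecutive layers lie 1/||f|| apart.
    import itertools
    import numpy as np

    B = np.array([[2.0, 1.0],
                  [0.0, 3.0]])            # arbitrary example basis of L
    f = np.linalg.inv(B).T[:, 0]          # a vector of the dual lattice L*

    coeffs = itertools.product(range(-5, 6), repeat=2)
    points = np.array([B @ np.array(c) for c in coeffs])
    values = points @ f                   # f is integer-valued on L
    assert np.allclose(values, np.round(values))

    # The layer {<f, x> = k} lies at signed distance k / ||f|| from the origin,
    # so the gap between consecutive occupied layers is a multiple of 1/||f||.
    layers = np.unique(np.round(values).astype(int))
    print(np.diff(layers / np.linalg.norm(f)))   # here each gap equals 1/||f||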

We recall some terminology: For a lattice $L$, let $\lambda_i(L)$ denote the radius of the smallest ball, centered at the origin, that contains $i$ linearly independent vectors of $L$. For instance, $\lambda_1(L)$ is the length of a shortest nonzero vector of $L$. Let $\mu(L) = \max_{x \in \operatorname{span}(L)} d(x, L)$ denote the covering radius of $L$.

In this notation, the lower bound mentioned in the introduction to this section states that $\mu(L) \geq \tfrac{1}{2\lambda_1(L^*)}$.

Theorem (Banaszczyk) [1]. For a full-rank lattice $L \subseteq \mathbb{R}^n$:

$1 \leq 2\,\lambda_1(L)\,\mu(L^*) \leq n \qquad \text{and} \qquad 1 \leq \lambda_i(L)\,\lambda_{n-i+1}(L^*) \leq n \ \text{ for } 1 \leq i \leq n.$
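
As a quick sanity check of these bounds (an added illustration, not part of the original article), take the self-dual lattice $L = \mathbb{Z}^n$, for which $L^* = L$, $\lambda_i(\mathbb{Z}^n) = 1$ for every $i$, and $\mu(\mathbb{Z}^n) = \tfrac{\sqrt{n}}{2}$ (attained at the deep hole $(\tfrac12, \dots, \tfrac12)$). Then

$2\,\lambda_1(L)\,\mu(L^*) = \sqrt{n} \in [1, n] \qquad \text{and} \qquad \lambda_i(L)\,\lambda_{n-i+1}(L^*) = 1,$

so $\mathbb{Z}^n$ sits comfortably within the first pair of bounds and meets the lower bound of the second exactly.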

There is always an efficiently checkable certificate for the claim that a lattice has a short nonzero vector, namely the vector itself. An important corollary of Banaszczyk's transference theorem is that $\lambda_1(L) \geq \tfrac{1}{\lambda_n(L^*)}$, which implies that to prove that a lattice has no short vectors, one can exhibit a basis for the dual lattice consisting of short vectors. Using these ideas one can show that approximating the shortest vector of a lattice to within a factor of $n$ (the $\mathrm{GapSVP}_n$ problem) is in $\mathsf{NP} \cap \mathsf{coNP}$.[citation needed]

Other transference theorems:

  - Minkowski's bound gives $\lambda_1(L) \leq \sqrt{n}\,\det(L)^{1/n}$ for a full-rank lattice $L \subseteq \mathbb{R}^n$; applying it to both $L$ and $L^*$ and using $\det(L)\det(L^*) = 1$ yields the upper bound $\lambda_1(L)\,\lambda_1(L^*) \leq n$.

Poisson summation formula

The dual lattice is used in the statement of a general Poisson summation formula.

Theorem (Poisson summation) [2]. Let $f$ be a well-behaved function, such as a Schwartz function, and let $\hat{f}$ denote its Fourier transform. Let $L \subseteq \mathbb{R}^n$ be a full-rank lattice. Then:

$\sum_{x \in L} f(x) = \frac{1}{\det(L)} \sum_{y \in L^*} \hat{f}(y).$
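
A quick numerical illustration (an added sketch, not part of the original article; the Gaussian and the diagonal basis are arbitrary choices) compares both sides of the formula for $f(x) = e^{-\pi\|x\|^2}$, which equals its own Fourier transform, over a scaled integer lattice in $\mathbb{R}^2$.

    # Compare both sides of the Poisson summation formula for the Gaussian
    # f(x) = exp(-pi ||x||^2), which is its own Fourier transform.
    import itertools
    import numpy as np

    B = np.diag([1.5, 0.5])               # basis of L; det(L) = |det(B)| = 0.75
    D = np.linalg.inv(B).T                # basis of the dual lattice L*

    def f(x):
        return np.exp(-np.pi * np.dot(x, x))

    R = 20                                # truncation radius; the tails are negligible
    coeffs = list(itertools.product(range(-R, R + 1), repeat=2))

    lhs = sum(f(B @ np.array(c)) for c in coeffs)
    rhs = sum(f(D @ np.array(c)) for c in coeffs) / abs(np.linalg.det(B))

    print(lhs, rhs)                       # the two sums agree to high precision
    assert np.isclose(lhs, rhs)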


References

  1. Banaszczyk, W. (1993). "New bounds in some transference theorems in the geometry of numbers". Mathematische Annalen. Springer Science and Business Media LLC. 296 (1): 625–635. doi:10.1007/bf01445125. ISSN 0025-5831. S2CID 13921988.
  2. Cohn, Henry; Kumar, Abhinav; Reiher, Christian; Schürmann, Achill (2014). "Formal duality and generalizations of the Poisson summation formula". Discrete Geometry and Algebraic Combinatorics. Contemporary Mathematics. Vol. 625. pp. 123–140. arXiv:1306.6796v2. doi:10.1090/conm/625/12495. ISBN 9781470409050. S2CID 117741906.