SWIFFT

General
Designers: Vadim Lyubashevsky, Daniele Micciancio, Chris Peikert, Alon Rosen
First published: 2008
Related to: FFT-based algorithms

In cryptography, SWIFFT is a collection of provably secure hash functions. It is based on the concept of the fast Fourier transform (FFT). SWIFFT is not the first hash function based on FFT, but it sets itself apart by providing a mathematical proof of its security. It also uses the LLL basis reduction algorithm. It can be shown that finding collisions in SWIFFT is at least as difficult as finding short vectors in cyclic/ideal lattices in the worst case. By giving a security reduction to the worst-case scenario of a difficult mathematical problem, SWIFFT gives a much stronger security guarantee than most other cryptographic hash functions.

Unlike many other provably secure hash functions, the algorithm is quite fast, yielding a throughput of 40 Mbit/s on a 3.2 GHz Intel Pentium 4. Although SWIFFT satisfies many desirable cryptographic and statistical properties, it was not designed to be an "all-purpose" cryptographic hash function. For example, it is not a pseudorandom function, and would not be a suitable instantiation of a random oracle. The algorithm is less efficient than most traditional hash functions that do not give a proof of their collision-resistance. Therefore, its practical use would lie mostly in applications where the proof of collision-resistance is particularly valuable, such as digital signatures that must remain trustworthy for a long time.

A modification of SWIFFT called SWIFFTX was proposed as a candidate for SHA-3 in the NIST hash function competition [1] and was rejected in the first round. [2]

The algorithm

The algorithm is as follows: [3]

  1. Let the polynomial variable be called α.
  2. Input: a message M of length mn bits.
  3. Convert M to a collection of m polynomials p_i in a certain polynomial ring R with binary coefficients.
  4. Compute the Fourier coefficients of each p_i using the FFT.
  5. Define the Fourier coefficients of the multipliers a_i, so that they are fixed and depend on the particular function in the SWIFFT family.
  6. Point-wise multiply the Fourier coefficients of p_i with the Fourier coefficients of a_i for each i.
  7. Use the inverse FFT to obtain m polynomials f_i of degree less than 2n.
  8. Compute f = f_1 + f_2 + ... + f_m modulo p and α^n + 1.
  9. Convert f to bits and output it.
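
A minimal Python sketch of these nine steps, using naive ring arithmetic instead of the FFT speed-up described later, may help make them concrete. It assumes the concrete parameters from the Example section below; the helper names (bits_to_polys, negacyclic_mul, swifft_naive) are illustrative and not taken from any reference implementation.

    import secrets

    n, m, p = 64, 16, 257              # example parameters (see the Example section)

    def bits_to_polys(bits):
        """Step 3: split an mn-bit message into m polynomials with binary coefficients."""
        assert len(bits) == m * n
        return [bits[i * n:(i + 1) * n] for i in range(m)]

    def negacyclic_mul(a, x):
        """Multiply two degree-<n polynomials modulo p and alpha^n + 1 (schoolbook, no FFT)."""
        res = [0] * n
        for i, ai in enumerate(a):
            for j, xj in enumerate(x):
                if i + j < n:
                    res[i + j] = (res[i + j] + ai * xj) % p
                else:                  # alpha^n is identified with -1, so wrap with a sign flip
                    res[i + j - n] = (res[i + j - n] - ai * xj) % p
        return res

    def swifft_naive(multipliers, bits):
        """Steps 3-9: f = a_1*x_1 + ... + a_m*x_m over Z_p[alpha]/(alpha^n + 1)."""
        f = [0] * n
        for a_i, x_i in zip(multipliers, bits_to_polys(bits)):
            f = [(u + v) % p for u, v in zip(f, negacyclic_mul(a_i, x_i))]
        return f                       # step 9 would encode these n residues as bits

    # Toy usage: random multipliers and a random 1024-bit message.
    multipliers = [[secrets.randbelow(p) for _ in range(n)] for _ in range(m)]
    message = [secrets.randbelow(2) for _ in range(m * n)]
    print(len(swifft_naive(multipliers, message)), "coefficients in Z_257")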

Example

We choose concrete values for the parameters n, m, and p as follows: n = 64, m = 16, p = 257. For these parameters, any fixed compression function in the family takes a binary input of length mn = 1024 bits (128 bytes) to an output in the range Z_p^n, which has size p^n = 257^64. An output in Z_p^n can easily be represented using 528 bits (66 bytes).
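
The sizes quoted here can be checked with a few lines of arithmetic. The packing of four coefficients into 33 bits shown below is just one encoding that reaches 528 bits, not necessarily the one chosen by the designers.

    import math

    n, m, p = 64, 16, 257
    input_bits  = m * n                        # 16 * 64 = 1024 bits (128 bytes) of input
    output_bits = n * math.log2(p)             # log2(257^64), about 512.4 bits of output range
    naive_bits  = n * math.ceil(math.log2(p))  # 9 bits per coefficient -> 576 bits
    packed_bits = (n // 4) * 33                # 257**4 < 2**33, so 4 coefficients fit in 33 bits -> 528 bits
    print(input_bits, round(output_bits, 1), naive_bits, packed_bits)   # 1024 512.4 576 528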

Algebraic description

The SWIFFT functions can be described as a simple algebraic expression over some polynomial ring R. A family of these functions depends on three main parameters: let n be a power of 2, let m be a small integer, and let p be a modulus (not necessarily prime, though it is convenient to choose it prime). Define R to be the ring R = Z_p[α]/(α^n + 1), i.e., the ring of polynomials in α having integer coefficients, modulo p and α^n + 1. An element of R can be written as a polynomial of degree less than n with coefficients in Z_p. A particular function in the SWIFFT family is specified by m fixed elements a_1, ..., a_m of the ring R, called multipliers. The function corresponds to the following equation over the ring R:

f(x_1, ..., x_m) = a_1·x_1 + a_2·x_2 + ... + a_m·x_m

The x_1, ..., x_m are polynomials with binary coefficients, corresponding to the binary input of length mn.
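
The only non-obvious part of arithmetic in R is the reduction modulo α^n + 1, which identifies α^n with −1. A tiny illustration, using a toy n = 4 for readability (coefficient lists are lowest degree first; the helper name is illustrative):

    n, p = 4, 257

    def reduce_mod_ring(coeffs):
        """Reduce a polynomial of degree < 2n to degree < n using alpha^n = -1."""
        res = [c % p for c in coeffs[:n]] + [0] * max(0, n - len(coeffs))
        for k in range(n, len(coeffs)):
            res[k - n] = (res[k - n] - coeffs[k]) % p
        return res

    # alpha^5 = alpha^n * alpha = -alpha, i.e. coefficient p - 1 = 256 on alpha:
    print(reduce_mod_ring([0, 0, 0, 0, 0, 1]))   # -> [0, 256, 0, 0]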

Computing the polynomial product

To compute the above expression, the main problem is to compute the polynomial products a_i·x_i. A fast way to compute these products is given by the convolution theorem, which says that under certain restrictions the following holds:

F(f ∗ g) = F(f) · F(g)

Here F denotes the Fourier transform and · denotes the pointwise product. In the general case of the convolution theorem, ∗ does not denote multiplication but convolution. It can, however, be shown that polynomial multiplication is a convolution of the coefficient sequences.
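
That polynomial multiplication is a convolution is easy to check directly: the k-th coefficient of the product is the sum of all a_i·b_j with i + j = k, which is exactly the convolution of the two coefficient sequences. A small sketch (coefficient lists lowest degree first):

    def poly_mul(a, b):
        """Schoolbook product: c_k = sum of a_i * b_j over i + j = k, i.e. the convolution of a and b."""
        c = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                c[i + j] += ai * bj
        return c

    # (1 + 2x)(3 + x + x^2) = 3 + 7x + 3x^2 + 2x^3
    print(poly_mul([1, 2], [3, 1, 1]))   # -> [3, 7, 3, 2]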

Fast Fourier transform

For finding the Fourier transform we use the FFT (fast Fourier transform), which computes the transform in O(n log n) time. The multiplication algorithm then goes as follows: we use the FFT to compute (all at once) the Fourier coefficients of each polynomial, then pointwise multiply the respective Fourier coefficients of the two polynomials, and finally use an inverse FFT to return a polynomial of degree less than 2n.
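
As a sketch of this transform–multiply–invert pattern, the snippet below multiplies two integer polynomials with numpy's complex FFT (the function name fft_poly_mul is illustrative; SWIFFT itself uses the number-theoretic transform described next rather than a floating-point FFT):

    import numpy as np

    def fft_poly_mul(a, b):
        size = len(a) + len(b) - 1          # length of the product's coefficient vector
        fa = np.fft.fft(a, n=size)          # Fourier coefficients of a (zero-padded)
        fb = np.fft.fft(b, n=size)          # Fourier coefficients of b
        return np.rint(np.fft.ifft(fa * fb).real).astype(int)

    print(fft_poly_mul([1, 2], [3, 1, 1]))  # -> [3 7 3 2], matching the schoolbook result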

Number-theoretic transform

Instead of the normal Fourier transform, SWIFFT uses the number-theoretic transform. The number-theoretic transform uses roots of unity in Z_p instead of complex roots of unity. For this to work, Z_p must be a finite field, and primitive 2n-th roots of unity must exist in it. This can be arranged by choosing p prime such that 2n divides p − 1.
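
A sketch of this idea over Z_257 is given below with a toy n = 8 (so 2n = 16 divides p − 1 = 256). The transform is written as a naive O(n²) evaluation at the odd powers of a primitive 2n-th root of unity ψ, which are exactly the roots of x^n + 1; the function names are illustrative, and a real implementation would use an FFT-style recursion rather than this quadratic loop.

    p, n = 257, 8

    def find_2nth_root(p, n):
        """Find an element of exact multiplicative order 2n modulo p (2n is a power of two here)."""
        for g in range(2, p):
            if pow(g, 2 * n, p) == 1 and pow(g, n, p) != 1:
                return g
        raise ValueError("no primitive 2n-th root of unity modulo p")

    psi = find_2nth_root(p, n)
    assert pow(psi, n, p) == p - 1           # psi^n = -1 (mod p)

    def negacyclic_ntt_mul(a, b):
        """Multiply degree-<n polynomials modulo p and alpha^n + 1 via evaluation at odd powers of psi."""
        points = [pow(psi, 2 * j + 1, p) for j in range(n)]
        eval_a = [sum(ai * pow(w, i, p) for i, ai in enumerate(a)) % p for w in points]
        eval_b = [sum(bi * pow(w, i, p) for i, bi in enumerate(b)) % p for w in points]
        eval_c = [(x * y) % p for x, y in zip(eval_a, eval_b)]      # point-wise product
        # Interpolate back: c_i = n^{-1} * psi^{-i} * sum_j eval_c[j] * omega^{-ij}, omega = psi^2.
        n_inv, psi_inv, omega_inv = pow(n, -1, p), pow(psi, -1, p), pow(psi, -2, p)
        return [n_inv * pow(psi_inv, i, p)
                * sum(eval_c[j] * pow(omega_inv, i * j, p) for j in range(n)) % p
                for i in range(n)]

    def negacyclic_schoolbook(a, b):
        """Reference: schoolbook multiplication with the alpha^n = -1 wrap-around."""
        c = [0] * n
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                if i + j < n:
                    c[i + j] = (c[i + j] + ai * bj) % p
                else:
                    c[i + j - n] = (c[i + j - n] - ai * bj) % p
        return c

    a, b = [1, 2, 3, 4, 5, 6, 7, 8], [8, 7, 6, 5, 4, 3, 2, 1]
    assert negacyclic_ntt_mul(a, b) == negacyclic_schoolbook(a, b)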

Parameter choice

The parameters m, p, n are subject to the following restrictions:

  1. n must be a power of 2,
  2. p must be prime,
  3. p − 1 must be a multiple of 2n, so that primitive 2n-th roots of unity exist in Z_p.

A possible choice is n = 64, m = 16, p = 257. We get a throughput of about 40 Mbit/s, security of about 2^106 operations for finding collisions, and a digest size of 512 bits.
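
A small checker for this parameter set follows. The last condition, that the mn input bits must exceed the roughly n·log2(p) output bits so the function actually compresses, is an added plausibility check rather than a restriction quoted from the SWIFFT paper.

    import math

    def check_swifft_params(n, m, p):
        def is_prime(x):
            return x > 1 and all(x % d for d in range(2, math.isqrt(x) + 1))
        assert n > 0 and n & (n - 1) == 0, "n must be a power of 2"
        assert is_prime(p), "p should be prime"
        assert (p - 1) % (2 * n) == 0, "2n must divide p - 1 (primitive 2n-th roots of unity)"
        assert m * n > n * math.log2(p), "mn input bits should exceed ~n*log2(p) output bits"
        return True

    print(check_swifft_params(64, 16, 257))   # the parameter set above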

Statistical properties

Cryptographic properties and security

Theoretical security

SWIFFT is an example of a provably secure cryptographic hash function. As with most security proofs, the security proof of SWIFFT relies on a reduction to a certain hard mathematical problem. Note that this means that the security of SWIFFT depends strongly on the difficulty of that mathematical problem.

The reduction in the case of SWIFFT is to the problem of finding short vectors in cyclic/ideal lattices. It can be proven that the following holds: suppose we have an algorithm that, for a random version of SWIFFT given by the multipliers a_1, ..., a_m, can find collisions in f within some feasible time T and with non-negligible probability. The algorithm is only required to work for a small but noticeable fraction of the functions in the SWIFFT family. Then we can also find an algorithm which can always find a short vector in any ideal lattice over the ring Z[α]/(α^n + 1) in some feasible time T', depending on T and the success probability. This means that finding collisions in SWIFFT is at least as difficult as the worst-case scenario of finding short vectors in a lattice over Z[α]/(α^n + 1). At the moment, the fastest algorithms for finding short vectors are all exponential in n. Note that this ensures that there is no significant set of "weak instances" for which the security of SWIFFT is weak. This guarantee is not given by most other provably secure hash functions.

Practical security

Known working attacks are the generalized birthday attack, which takes about 2^106 operations, and inversion attacks, which take about 2^448 operations, for the standard parameter choice. This is usually considered enough to render an attack by an adversary infeasible.

References

  1. Daniele Micciancio; Yuriy Arbitman; Gil Dogon; Vadim Lyubashevsky; Chris Peikert; Alon Rosen (2008-10-30). "SWIFFTX: A Proposal for the SHA-3 Standard" (PDF). Retrieved 2017-03-03.
  2. "Second Round Candidates". National Institute of Standards and Technology. 2009-07-16. Archived from the original on 2017-06-04. Retrieved 2017-03-03.{{cite web}}: CS1 maint: bot: original URL status unknown (link)
  3. Vadim Lyubashevsky; Daniele Micciancio; Chris Peikert; Alon Rosen (2008-02-21). "SWIFFT: A Modest Proposal for FFT Hashing" (PDF). Retrieved 2017-03-03.