| General | |
| --- | --- |
| Designers | Vadim Lyubashevsky, Daniele Micciancio, Chris Peikert, Alon Rosen |
| First published | 2008 |
| Related to | FFT-based algorithms |
In cryptography, SWIFFT is a collection of provably secure hash functions. It is based on the concept of the fast Fourier transform (FFT). SWIFFT is not the first hash function based on the FFT, but it sets itself apart by providing a mathematical proof of its security. It also uses the LLL basis reduction algorithm. It can be shown that finding collisions in SWIFFT is at least as difficult as finding short vectors in cyclic/ideal lattices in the worst case. By giving a security reduction to the worst-case scenario of a difficult mathematical problem, SWIFFT gives a much stronger security guarantee than most other cryptographic hash functions.
Unlike many other provably secure hash functions, the algorithm is quite fast, yielding a throughput of 40 Mbit/s on a 3.2 GHz Intel Pentium 4. Although SWIFFT satisfies many desirable cryptographic and statistical properties, it was not designed to be an "all-purpose" cryptographic hash function. For example, it is not a pseudorandom function, and would not be a suitable instantiation of a random oracle. The algorithm is less efficient than most traditional hash functions that do not give a proof of their collision-resistance. Therefore, its practical use would lie mostly in applications where the proof of collision-resistance is particularly valuable, such as digital signatures that must remain trustworthy for a long time.
A modification of SWIFFT called SWIFFTX was proposed as a SHA-3 candidate in the NIST hash function competition[1] but was rejected in the first round.[2]
The algorithm works roughly as follows:[3] the binary input is first converted into a collection of m polynomials with binary coefficients; a fast Fourier transform is applied to each polynomial, and the resulting Fourier coefficients are multiplied pointwise by fixed Fourier coefficients that define the particular function in the family; the products are then transformed back, summed, and reduced to give the output.
The FFT operation is easy to invert, and is performed to achieve diffusion, that is, to mix the input bits. The concluding linear combination achieves confusion, since it compresses the input. This is only a high-level description of what the algorithm does; some more advanced optimizations are used to finally yield a high-performing algorithm.
Assuming the parameters (n, m, p) = (64, 16, 257), any fixed compression function in the family takes a binary input of length mn = 1024 bits (128 bytes) to an output in the range ℤ_p^n, which has size p^n = 257^64. An output in ℤ_p^n can easily be represented using 528 bits (66 bytes).
The SWIFFT functions can be described as a simple algebraic expression over some polynomial ring R. A family of these functions depends on three main parameters: let n be a power of 2, let m > 0 be a small integer, and let p > 0 be a modulus (not necessarily prime, but it is convenient to choose it prime). Define R to be the ring R = ℤ_p[α]/(α^n + 1), i.e., the ring of polynomials in α having integer coefficients, modulo p and α^n + 1. An element of R can be written as a polynomial of degree < n with coefficients in ℤ_p. A particular function in the SWIFFT family is specified by m fixed elements a_1, …, a_m ∈ R of the ring, called multipliers. The function corresponds to the following formula over R:

f(x_1, …, x_m) = a_1⋅x_1 + a_2⋅x_2 + ⋯ + a_m⋅x_m
The x_1, …, x_m ∈ R are polynomials with binary coefficients, corresponding to the binary input of length mn.
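To make the formula concrete, here is a small Python sketch of such a compression function with the standard parameters. It is an illustration rather than the optimized SWIFFT code: the function names, the randomly drawn multipliers, and the plain schoolbook reduction modulo α^n + 1 are all assumptions made for readability.

```python
import random

# Illustrative sketch of the SWIFFT compression formula
# f(x_1, ..., x_m) = a_1*x_1 + ... + a_m*x_m over R = Z_p[a]/(a^n + 1),
# with the standard parameters and randomly chosen multipliers.
n, m, p = 64, 16, 257

def mul_mod(f, g):
    """Schoolbook product of two polynomials, reduced mod p and a^n + 1."""
    h = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j < n:
                h[i + j] = (h[i + j] + fi * gj) % p
            else:                                   # a^n = -1 in R
                h[i + j - n] = (h[i + j - n] - fi * gj) % p
    return h

def swifft_like(bits, multipliers):
    """Map an mn-bit input to an element of R, i.e. a vector in Z_p^n."""
    assert len(bits) == m * n
    digest = [0] * n
    for i in range(m):
        x_i = bits[i * n:(i + 1) * n]               # binary-coefficient polynomial
        digest = [(d + c) % p for d, c in zip(digest, mul_mod(multipliers[i], x_i))]
    return digest

multipliers = [[random.randrange(p) for _ in range(n)] for _ in range(m)]
message = [random.randrange(2) for _ in range(m * n)]
print(swifft_like(message, multipliers))
```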
To compute the above expression, the main problem is to compute the polynomial products a_i⋅x_i. A fast way to compute these products is given by the convolution theorem, which says that under certain restrictions, F{f ∗ g} = F{f} ⋅ F{g}, where F denotes the Fourier transform and ⋅ denotes the pointwise product. In the general case of the convolution theorem, ∗ denotes convolution rather than multiplication; it can be shown, however, that polynomial multiplication is a convolution of the coefficient sequences.
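As a minimal sanity check of this last fact, the sketch below multiplies two small polynomials over the integers (before any reduction) and verifies that the product's coefficients equal the convolution of the two coefficient lists; the helper name convolve is an illustrative choice.

```python
# Multiplying two polynomials is the same as convolving their coefficient
# sequences (shown here over the integers, before any reduction mod p or
# mod a^n + 1).
def convolve(f, g):
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

# (1 + 2a)(3 + a) = 3 + 7a + 2a^2
assert convolve([1, 2], [3, 1]) == [3, 7, 2]
```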
The fast Fourier transform is used to find the Fourier coefficients of each polynomial, which are then multiplied pointwise with the respective Fourier coefficients of the other polynomial. The resulting coefficients are transformed back into a polynomial of degree < 2n using an inverse fast Fourier transform, and the product is then reduced modulo α^n + 1.
Instead of the normal Fourier transform, SWIFFT uses the number-theoretic transform. The number-theoretic transform uses roots of unity in ℤ_p instead of complex roots of unity. In order for this to work, ℤ_p needs to be a finite field, and primitive 2n-th roots of unity need to exist in this field. This can be done by taking p prime such that 2n divides p − 1.
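The sketch below illustrates such a number-theoretic transform for the SWIFFT parameters n = 64 and p = 257. It is an assumption-laden sketch rather than the actual SWIFFT code: the transform is a naive O(n^2) evaluation instead of an FFT butterfly network, and it uses the common "ψ-twist" with length-n transforms to handle the reduction modulo α^n + 1 directly, instead of the degree < 2n product described above.

```python
# Multiplication in R = Z_p[a]/(a^n + 1) via a number-theoretic transform,
# for n = 64, p = 257.  The transform is a naive O(n^2) evaluation at powers
# of a root of unity, kept simple for clarity; a real implementation would
# use an FFT-style butterfly network.
n, p = 64, 257

# 3 generates the multiplicative group mod 257, so psi is a primitive 2n-th
# root of unity and omega = psi^2 is a primitive n-th root of unity.
psi = pow(3, (p - 1) // (2 * n), p)
omega = pow(psi, 2, p)
psi_inv = pow(psi, p - 2, p)
omega_inv = pow(omega, p - 2, p)
n_inv = pow(n, p - 2, p)

def ntt(f, root):
    """Evaluate the polynomial f at root^0, root^1, ..., root^(n-1) mod p."""
    return [sum(c * pow(root, i * j, p) for j, c in enumerate(f)) % p
            for i in range(n)]

def ntt_mul(f, g):
    """Product of f and g modulo p and a^n + 1, via pointwise multiplication."""
    # Twisting coefficient j by psi^j turns the reduction mod a^n + 1 into an
    # ordinary cyclic convolution, which the transforms handle pointwise.
    ft = ntt([c * pow(psi, j, p) % p for j, c in enumerate(f)], omega)
    gt = ntt([c * pow(psi, j, p) % p for j, c in enumerate(g)], omega)
    ht = ntt([a * b % p for a, b in zip(ft, gt)], omega_inv)   # inverse NTT
    return [c * n_inv % p * pow(psi_inv, j, p) % p for j, c in enumerate(ht)]

# Self-check: a * a^(n-1) = a^n = -1 in R, i.e. the constant polynomial p - 1.
x = [0] * n; x[1] = 1
y = [0] * n; y[n - 1] = 1
assert ntt_mul(x, y) == [p - 1] + [0] * (n - 1)
```

For arbitrary inputs with coefficients in ℤ_p, ntt_mul agrees with the schoolbook mul_mod from the earlier sketch.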
The parameters m, p, and n are subject to the following restrictions: n must be a power of 2, m must be greater than 0, and p must be a prime such that 2n divides p − 1 (so that ℤ_p contains primitive 2n-th roots of unity).
A possible choice is (n, m, p) = (64, 16, 257), which achieves a throughput of about 40 Mbit/s, security of about 2^106 operations for finding collisions, and a digest size of 512 bits.
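A quick sanity check, in the same illustrative vein, that this parameter set meets the restrictions summarized above:

```python
n, m, p = 64, 16, 257
assert n > 0 and n & (n - 1) == 0                        # n is a power of 2
assert m > 0
assert all(p % d for d in range(2, int(p ** 0.5) + 1))   # p = 257 is prime
assert (p - 1) % (2 * n) == 0                            # 2n divides p - 1
```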
SWIFFT is an example of a provably secure cryptographic hash function. As with most security proofs, the security proof of SWIFFT relies on a reduction to a certain difficult-to-solve mathematical problem. Note that this means that the security of SWIFFT relies strongly on the difficulty of this mathematical problem.
The reduction in the case of SWIFFT is to the problem of finding short vectors in cyclic/ideal lattices. It can be proven that the following holds: suppose we have an algorithm that, for a random function f from the SWIFFT family, can find collisions in f within some feasible time T and with some noticeable probability. The algorithm is only required to work for a small but noticeable fraction of the functions in the family. Then we can also construct an algorithm that can always find a short vector in any ideal lattice over the ring ℤ_p[α]/(α^n + 1) in some feasible time T_2, depending on T and the success probability. This means that finding collisions in SWIFFT is at least as difficult as the worst-case scenario of finding short vectors in a lattice over ℤ_p[α]/(α^n + 1). At present, the fastest known algorithms for finding short vectors are all exponential in n. Note that this ensures that there is no significant set of "weak instances" in the SWIFFT family. This guarantee is not given by most other provably secure hash functions.
Known working attacks are the generalized birthday attack, which takes about 2^106 operations, and inversion attacks, which take about 2^448 operations, for the standard parameter choice. This is usually considered to be enough to render an attack by an adversary infeasible.
In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An inverse DFT (IDFT) is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT is therefore said to be a frequency domain representation of the original input sequence. If the original sequence spans all the non-zero values of a function, its DTFT is continuous, and the DFT provides discrete samples of one cycle. If the original sequence is one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT cycle.
A fast Fourier transform (FFT) is an algorithm that computes the Discrete Fourier Transform (DFT) of a sequence, or its inverse (IDFT). Fourier analysis converts a signal from its original domain to a representation in the frequency domain and vice versa. The DFT is obtained by decomposing a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is often too slow to be practical. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse factors. As a result, it manages to reduce the complexity of computing the DFT from O(n^2), which arises if one simply applies the definition of the DFT, to O(n log n), where n is the data size. The difference in speed can be enormous, especially for long data sets where n may be in the thousands or millions. In the presence of round-off error, many FFT algorithms are much more accurate than evaluating the DFT definition directly or indirectly. There are many different FFT algorithms based on a wide range of published theories, from simple complex-number arithmetic to group theory and number theory.
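For illustration, here is a minimal radix-2 Cooley-Tukey FFT in Python, checked against the naive O(n^2) DFT definition; the function names and the test vector are arbitrary choices.

```python
import cmath

def dft(x):
    """Direct evaluation of the DFT definition: O(n^2) operations."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n) for k in range(n))
            for i in range(n)]

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (length must be a power of 2)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(x), dft(x)))
```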
A hash function is any function that can be used to map data of arbitrary size to fixed-size values, though there are some hash functions that support variable-length output. The values returned by a hash function are called hash values, hash codes, hash digests, digests, or simply hashes. The values are usually used to index a fixed-size table called a hash table. Use of a hash function to index a hash table is called hashing or scatter-storage addressing.
In computer science, a one-way function is a function that is easy to compute on every input, but hard to invert given the image of a random input. Here, "easy" and "hard" are to be understood in the sense of computational complexity theory, specifically the theory of polynomial time problems. Not being one-to-one is not considered sufficient for a function to be called one-way.
A cryptographic hash function (CHF) is a hash algorithm that has special properties desirable for a cryptographic application, such as collision resistance, preimage resistance, and second-preimage resistance.
In mathematics, a Boolean function is a function whose arguments and result assume values from a two-element set. Alternative names are switching function, used especially in older computer science literature, and truth function, used in logic. Boolean functions are the subject of Boolean algebra and switching theory.
In cryptography, a verifiable random function (VRF) is a public-key pseudorandom function that provides proofs that its outputs were calculated correctly. The owner of the secret key can compute the function value as well as an associated proof for any input value. Everyone else, using the proof and the associated public key, can check that this value was indeed calculated correctly, yet this information cannot be used to find the secret key.
In cryptography, collision resistance is a property of cryptographic hash functions: a hash function H is collision-resistant if it is hard to find two inputs that hash to the same output; that is, two inputs a and b where a ≠ b but H(a) = H(b). The pigeonhole principle means that any hash function with more inputs than outputs will necessarily have such collisions; the harder they are to find, the more cryptographically secure the hash function is.
In cryptography, a pseudorandom function family, abbreviated PRF, is a collection of efficiently-computable functions which emulate a random oracle in the following way: no efficient algorithm can distinguish between a function chosen randomly from the PRF family and a random oracle. Pseudorandom functions are vital tools in the construction of cryptographic primitives, especially secure encryption schemes.
In mathematics and computing, universal hashing refers to selecting a hash function at random from a family of hash functions with a certain mathematical property. This guarantees a low number of collisions in expectation, even if the data is chosen by an adversary. Many universal families are known, and their evaluation is often very efficient. Universal hashing has numerous uses in computer science, for example in implementations of hash tables, randomized algorithms, and cryptography.
In cryptography, the Merkle–Damgård construction or Merkle–Damgård hash function is a method of building collision-resistant cryptographic hash functions from collision-resistant one-way compression functions. This construction was used in the design of many popular hash algorithms such as MD5, SHA-1, and SHA-2.
In cryptography, Very Smooth Hash (VSH) is a provably secure cryptographic hash function invented in 2005 by Scott Contini, Arjen Lenstra, and Ron Steinfeld. Provably secure means that finding collisions is as difficult as some known hard mathematical problem. Unlike other provably secure collision-resistant hashes, VSH is efficient and usable in practice. Asymptotically, it only requires a single multiplication per log(n) message-bits and uses RSA-type arithmetic. Therefore, VSH can be useful in embedded environments where code space is limited.
In cryptography, the fast syndrome-based hash functions (FSB) are a family of cryptographic hash functions introduced in 2003 by Daniel Augot, Matthieu Finiasz, and Nicolas Sendrier. Unlike most other cryptographic hash functions in use today, FSB can to a certain extent be proven to be secure. More exactly, it can be proven that breaking FSB is at least as difficult as solving a certain NP-complete problem known as regular syndrome decoding, so FSB is provably secure. Though it is not known whether NP-complete problems are solvable in polynomial time, it is often assumed that they are not.
In cryptography, cryptographic hash functions can be divided into two main categories. In the first category are those functions whose designs are based on mathematical problems, and whose security thus follows from rigorous mathematical proofs, complexity theory and formal reduction. These functions are called provably secure cryptographic hash functions. To construct these is very difficult, and few examples have been introduced. Their practical use is limited. In the second category are functions that are not based on mathematical problems but on ad hoc constructions, in which the bits of the message are mixed to produce the hash.
The elliptic curve only hash (ECOH) algorithm was submitted as a candidate for SHA-3 in the NIST hash function competition. However, it was rejected at the beginning of the competition after a second pre-image attack was found.
In discrete mathematics, ideal lattices are a special class of lattices and a generalization of cyclic lattices. Ideal lattices naturally occur in many parts of number theory, but also in other areas. In particular, they have a significant place in cryptography. Micciancio defined a generalization of cyclic lattices as ideal lattices. They can be used in cryptosystems to decrease by a square root the number of parameters necessary to describe a lattice, making them more efficient. Ideal lattices are a new concept, but similar lattice classes have been used for a long time. For example, cyclic lattices, a special case of ideal lattices, are used in NTRUEncrypt and NTRUSign.
Digital signatures are a means to protect digital information from intentional modification and to authenticate the source of digital information. Public key cryptography provides a rich set of different cryptographic algorithms to create digital signatures. However, the primary public key signatures currently in use will become completely insecure if scientists are ever able to build a moderately sized quantum computer. Post-quantum cryptography is a class of cryptographic algorithms designed to be resistant to attack by a quantum computer. Several post-quantum digital signature algorithms based on hard problems in lattices are being created to replace the commonly used RSA and elliptic curve signatures. A subset of these lattice-based schemes is based on a problem known as ring learning with errors. Ring learning with errors based digital signatures are among the post-quantum signatures with the smallest public key and signature sizes.
In post-quantum cryptography, ring learning with errors (RLWE) is a computational problem which serves as the foundation of new cryptographic algorithms, such as NewHope, designed to protect against cryptanalysis by quantum computers and also to provide the basis for homomorphic encryption. Public-key cryptography relies on construction of mathematical problems that are believed to be hard to solve if no further information is available, but are easy to solve if some information used in the problem construction is known. Some problems of this sort that are currently used in cryptography are at risk of attack if sufficiently large quantum computers can ever be built, so resistant problems are sought. Homomorphic encryption is a form of encryption that allows computation on ciphertext, such as arithmetic on numeric values stored in an encrypted database.
In cryptography, a public key exchange algorithm is a cryptographic algorithm which allows two parties to create and share a secret key, which they can use to encrypt messages between themselves. The ring learning with errors key exchange (RLWE-KEX) is one of a new class of public key exchange algorithms that are designed to be secure against an adversary that possesses a quantum computer. This is important because some public key algorithms in use today will be easily broken by a quantum computer if such computers are implemented. RLWE-KEX is one of a set of post-quantum cryptographic algorithms which are based on the difficulty of solving certain mathematical problems involving lattices. Unlike older lattice based cryptographic algorithms, the RLWE-KEX is provably reducible to a known hard problem in lattices.
Short integer solution (SIS) and ring-SIS problems are two average-case problems that are used in lattice-based cryptography constructions. Lattice-based cryptography began in 1996 with a seminal work by Miklós Ajtai, who presented a family of one-way functions based on the SIS problem. He showed that it is secure in an average case if the shortest vector problem SVP_γ (where γ = n^c for some constant c) is hard in a worst-case scenario.