A discrete Hartley transform (DHT) is a Fourier-related transform of discrete, periodic data similar to the discrete Fourier transform (DFT), with analogous applications in signal processing and related fields. Its main distinction from the DFT is that it transforms real inputs to real outputs, with no intrinsic involvement of complex numbers. Just as the DFT is the discrete analogue of the continuous Fourier transform (FT), the DHT is the discrete analogue of the continuous Hartley transform (HT), introduced by Ralph V. L. Hartley in 1942. [1]
Because there are fast algorithms for the DHT analogous to the fast Fourier transform (FFT), the DHT was originally proposed by Ronald N. Bracewell in 1983 [2] as a more efficient computational tool in the common case where the data are purely real. It was subsequently argued, however, that specialized FFT algorithms for real inputs or outputs can ordinarily be found with slightly fewer operations than any corresponding algorithm for the DHT.
Formally, the discrete Hartley transform is a linear, invertible function H: R^N → R^N (where R denotes the set of real numbers). The N real numbers x0, ..., xN−1 are transformed into the N real numbers H0, ..., HN−1 according to the formula

$$H_k = \sum_{n=0}^{N-1} x_n \left[\cos\!\left(\frac{2\pi}{N}nk\right) + \sin\!\left(\frac{2\pi}{N}nk\right)\right], \qquad k = 0, \dots, N-1.$$

The combination cos(z) + sin(z) is sometimes denoted cas(z), and should not be confused with cis(z) = eiz = cos(z) + i sin(z), or e−iz = cis(−z), which appears in the DFT definition (where i is the imaginary unit).
As with the DFT, the overall scale factor in front of the transform and the sign of the sine term are a matter of convention. Although these conventions occasionally vary between authors, they do not affect the essential properties of the transform.
The transform can be interpreted as the multiplication of the vector (x0, ..., xN−1) by an N-by-N matrix; therefore, the discrete Hartley transform is a linear operator. The matrix is invertible; the inverse transformation, which allows one to recover the xn from the Hk, is simply the DHT of Hk multiplied by 1/N. That is, the DHT is its own inverse (involutory), up to an overall scale factor.
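To make the definition and the involution property concrete, here is a minimal NumPy sketch (ours, not from the references) that computes the DHT directly from the definition and checks that applying it twice recovers the input up to the factor N:

```python
import numpy as np

def dht(v):
    """O(N^2) DHT straight from the definition: H_k = sum_n v_n cas(2*pi*n*k/N)."""
    v = np.asarray(v, dtype=float)
    N = len(v)
    a = 2.0 * np.pi * np.outer(np.arange(N), np.arange(N)) / N
    return (np.cos(a) + np.sin(a)) @ v        # cas(z) = cos(z) + sin(z)

x = np.random.default_rng(0).standard_normal(8)
assert np.allclose(dht(dht(x)) / len(x), x)   # the DHT is its own inverse up to 1/N
```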
The DHT can be used to compute the DFT, and vice versa. For real inputs xn, the DFT output Xk has a real part (Hk + HN−k)/2 and an imaginary part (HN−k − Hk)/2. Conversely, the DHT is equivalent to computing the DFT of xn multiplied by 1 + i, then taking the real part of the result.
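These relations are straightforward to verify numerically. In the following sketch (ours), NumPy's FFT plays the role of the DFT, and the naive dht from the previous sketch is repeated so the snippet is self-contained:

```python
import numpy as np

def dht(v):
    n = np.arange(len(v))
    a = 2.0 * np.pi * np.outer(n, n) / len(v)
    return (np.cos(a) + np.sin(a)) @ np.asarray(v, dtype=float)

x = np.random.default_rng(1).standard_normal(8)
H, X = dht(x), np.fft.fft(x)
Hrev = np.roll(H[::-1], 1)                    # Hrev[k] = H[(N - k) % N]
assert np.allclose(X.real, (H + Hrev) / 2)    # Re X_k = (H_k + H_{N-k}) / 2
assert np.allclose(X.imag, (Hrev - H) / 2)    # Im X_k = (H_{N-k} - H_k) / 2
assert np.allclose(H, np.fft.fft((1 + 1j) * x).real)   # DHT from DFT of (1+i)x
```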
As with the DFT, a cyclic convolution z = x∗y of two vectors x = (xn) and y = (yn) to produce a vector z = (zn), all of length N, becomes a simple operation after the DHT. In particular, suppose that the vectors X, Y, and Z denote the DHT of x, y, and z respectively. Then the elements of Z are given by:

$$Z_k = \frac{1}{2}\left[X_k\left(Y_k + Y_{N-k}\right) + X_{N-k}\left(Y_k - Y_{N-k}\right)\right]$$
where we take all of the vectors to be periodic in N (XN = X0, et cetera). Thus, just as the DFT transforms a convolution into a pointwise multiplication of complex numbers (pairs of real and imaginary parts), the DHT transforms a convolution into a simple combination of pairs of real frequency components. The inverse DHT then yields the desired vector z. In this way, a fast algorithm for the DHT (see below) yields a fast algorithm for convolution. (This is slightly more expensive than the corresponding procedure for the DFT, not including the costs of the transforms below, because the pairwise operation above requires 8 real-arithmetic operations compared to the 6 of a complex multiplication. This count doesn't include the division by 2, which can be absorbed e.g. into the 1/N normalization of the inverse DHT.)
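The identity can be checked against a directly computed cyclic convolution (again a sketch of ours, with the naive dht repeated for self-containment):

```python
import numpy as np

def dht(v):
    n = np.arange(len(v))
    a = 2.0 * np.pi * np.outer(n, n) / len(v)
    return (np.cos(a) + np.sin(a)) @ np.asarray(v, dtype=float)

rng = np.random.default_rng(2)
N = 8
x, y = rng.standard_normal(N), rng.standard_normal(N)

# Direct cyclic convolution: z_n = sum_m x_m * y_{(n - m) mod N}.
z = np.array([sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)])

X, Y = dht(x), dht(y)
Xr, Yr = np.roll(X[::-1], 1), np.roll(Y[::-1], 1)   # index k -> N - k (mod N)
Z = 0.5 * (X * (Y + Yr) + Xr * (Y - Yr))            # the combination above
assert np.allclose(dht(Z) / N, z)                   # inverse DHT recovers z
```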
Just as for the DFT, evaluating the DHT definition directly would require O(N²) arithmetical operations (see Big O notation). There are fast algorithms similar to the FFT, however, that compute the same result in only O(N log N) operations. Nearly every FFT algorithm, from Cooley–Tukey to prime-factor to Winograd (1985) [3] to Bruun's (1993), [4] has a direct analogue for the discrete Hartley transform. (However, a few of the more exotic FFT algorithms, such as the QFT, have not yet been investigated in the context of the DHT.)
In particular, the DHT analogue of the Cooley–Tukey algorithm is commonly known as the fast Hartley transform (FHT) algorithm, and was first described by Bracewell in 1984. [5] This FHT algorithm, at least when applied to power-of-two sizes N, was the subject of United States patent number 4,646,256, issued in 1987 to Stanford University. Stanford placed this patent in the public domain in 1994 (Bracewell, 1995). [6]
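For illustration, here is a minimal recursive radix-2 decimation-in-time FHT sketch in NumPy (ours; power-of-two lengths only, whereas practical FHTs are iterative and cache-aware). It relies on the identity cas(a + b) = cos(a)cas(b) + sin(a)cas(−b):

```python
import numpy as np

def fht(x):
    """Radix-2 DIT fast Hartley transform, O(N log N), N a power of two."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    if N == 1:
        return x.copy()
    E = fht(x[0::2])          # DHT of even-indexed samples, length N/2
    O = fht(x[1::2])          # DHT of odd-indexed samples, length N/2
    k = np.arange(N)
    c, s = np.cos(2 * np.pi * k / N), np.sin(2 * np.pi * k / N)
    # H_k = E_{k mod N/2} + cos(2*pi*k/N) O_{k mod N/2} + sin(2*pi*k/N) O_{-k mod N/2}
    return E[k % (N // 2)] + c * O[k % (N // 2)] + s * O[(-k) % (N // 2)]

x = np.random.default_rng(3).standard_normal(16)
# Cross-check against the DHT computed as the real part of the DFT of (1+i)x.
assert np.allclose(fht(x), np.fft.fft((1 + 1j) * x).real)
```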
As mentioned above, DHT algorithms are typically slightly less efficient (in terms of the number of floating-point operations) than the corresponding DFT algorithm (FFT) specialized for real inputs (or outputs). This was first argued by Sorensen et al. (1987) [7] and Duhamel & Vetterli (1987). [8] The latter authors obtained what appears to be the lowest published operation count for the DHT of power-of-two sizes, employing a split-radix algorithm (similar to the split-radix FFT) that breaks a DHT of length N into a DHT of length N/2 and two real-input DFTs (not DHTs) of length N/4. In this way, they argued that a DHT of power-of-two length can be computed with, at best, 2 more additions than the corresponding number of arithmetic operations for the real-input DFT.
On present-day computers, performance is determined more by cache and CPU pipeline considerations than by strict operation counts, and a slight difference in arithmetic cost is unlikely to be significant. Since FHT and real-input FFT algorithms have similar computational structures, neither appears to have a substantial a priori speed advantage (Popović and Šević, 1994). [9] As a practical matter, highly optimized real-input FFT libraries are available from many sources (e.g. from CPU vendors such as Intel), whereas highly optimized DHT libraries are less common.
On the other hand, the redundant computations in FFTs due to real inputs are more difficult to eliminate for large prime N, despite the existence of O(N log N) complex-data algorithms for such cases, because the redundancies are hidden behind intricate permutations and/or phase rotations in those algorithms. In contrast, a standard prime-size FFT algorithm, Rader's algorithm, can be directly applied to the DHT of real data for roughly a factor of two less computation than that of the equivalent complex FFT (Frigo and Johnson, 2005). [10] A non-DHT-based adaptation of Rader's algorithm for real-input DFTs is also possible, however (Chu & Burrus, 1982). [11]
The rD-DHT (MD-DHT with "r" dimensions) is given by

$$X(k_1, k_2, \dots, k_r) = \sum_{n_1=0}^{N_1-1} \sum_{n_2=0}^{N_2-1} \cdots \sum_{n_r=0}^{N_r-1} x(n_1, n_2, \dots, n_r)\, \operatorname{cas}\!\left(2\pi\left(\frac{n_1 k_1}{N_1} + \frac{n_2 k_2}{N_2} + \cdots + \frac{n_r k_r}{N_r}\right)\right)$$

with cas(z) = cos(z) + sin(z) and where k_s = 0, 1, ..., N_s − 1 for s = 1, 2, ..., r.
Similar to the 1-D case, as a real and symmetric transform, the MD-DHT is simpler than the MD-DFT. For one, the inverse DHT is identical to the forward transform, with the addition of a scaling factor:

$$x(n_1, n_2, \dots, n_r) = \frac{1}{N_1 N_2 \cdots N_r} \sum_{k_1=0}^{N_1-1} \cdots \sum_{k_r=0}^{N_r-1} X(k_1, \dots, k_r)\, \operatorname{cas}\!\left(2\pi\left(\frac{n_1 k_1}{N_1} + \cdots + \frac{n_r k_r}{N_r}\right)\right);$$
and second, since the kernel is real, it avoids the computational complexity of complex numbers. Additionally, the DFT is directly obtainable from the DHT by a simple additive operation (Bracewell, 1983). [2]
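To make this concrete, the following NumPy sketch (ours) evaluates the 2-D DHT directly from the definition, with its non-separable cas(sum of angles) kernel, and checks the self-inverse property with the 1/(N1 N2) scaling noted above:

```python
import numpy as np

def dht2(x):
    """Direct 2-D DHT with kernel cas(2*pi*(n1*k1/N1 + n2*k2/N2))."""
    x = np.asarray(x, dtype=float)
    N1, N2 = x.shape
    k1 = np.arange(N1).reshape(N1, 1, 1, 1)
    k2 = np.arange(N2).reshape(1, N2, 1, 1)
    n1 = np.arange(N1).reshape(1, 1, N1, 1)
    n2 = np.arange(N2).reshape(1, 1, 1, N2)
    ang = 2 * np.pi * (n1 * k1 / N1 + n2 * k2 / N2)   # kernel does NOT separate
    return np.einsum('abcd,cd->ab', np.cos(ang) + np.sin(ang), x)

x = np.random.default_rng(4).standard_normal((4, 6))
assert np.allclose(dht2(dht2(x)) / (4 * 6), x)        # inverse = forward / (N1*N2)
```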
The MD-DHT is widely used in areas like image and optical signal processing. Specific applications include computer vision, high-definition television, and teleconferencing, areas that process or analyze motion images (Zeng, 2000). [12]
As computing speed keeps increasing, bigger multidimensional problems become computationally feasible, creating the need for fast multidimensional algorithms. Three such algorithms follow.
In pursuit of separability for efficiency, we consider the following transform (Bracewell, 1983), [2]

$$\hat{X}(k_1, k_2, \dots, k_r) = \sum_{n_1=0}^{N_1-1} \cdots \sum_{n_r=0}^{N_r-1} x(n_1, \dots, n_r)\, \operatorname{cas}\!\left(\frac{2\pi n_1 k_1}{N_1}\right) \operatorname{cas}\!\left(\frac{2\pi n_2 k_2}{N_2}\right) \cdots \operatorname{cas}\!\left(\frac{2\pi n_r k_r}{N_r}\right).$$
It was shown in Bortfeld (1995) [13] that the two can be related by a few additions. For example, in 3-D,

$$X(k_1, k_2, k_3) = \frac{1}{2}\left[\hat{X}(-k_1, k_2, k_3) + \hat{X}(k_1, -k_2, k_3) + \hat{X}(k_1, k_2, -k_3) - \hat{X}(-k_1, -k_2, -k_3)\right],$$

where negative indices are interpreted modulo the corresponding transform length.
For X̂, row-column algorithms can then be implemented. This technique is commonly used due to the simplicity of such R-C algorithms, but they are not optimized for general M-D spaces.
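The 3-D relation above can be checked numerically. In the following sketch (ours; the helper names dht_axis and flipneg are our own), the separable cas·cas·cas transform X̂ is computed in exactly row-column fashion, by applying the 1-D DHT along each axis in turn, and then recombined into the true 3-D DHT with negative indices taken modulo the transform length:

```python
import numpy as np

def dht_axis(x, axis):
    """Apply the 1-D DHT along one axis of an array."""
    x = np.moveaxis(np.asarray(x, dtype=float), axis, 0)
    N = x.shape[0]
    a = 2.0 * np.pi * np.outer(np.arange(N), np.arange(N)) / N
    cas = np.cos(a) + np.sin(a)
    return np.moveaxis(np.tensordot(cas, x, axes=(1, 0)), 0, axis)

def true_dht3(x):
    """Direct 3-D DHT with the non-separable kernel cas(sum of angles)."""
    N1, N2, N3 = x.shape
    n1, n2, n3 = np.ogrid[:N1, :N2, :N3]
    H = np.empty_like(x, dtype=float)
    for k1 in range(N1):
        for k2 in range(N2):
            for k3 in range(N3):
                ang = 2 * np.pi * (n1 * k1 / N1 + n2 * k2 / N2 + n3 * k3 / N3)
                H[k1, k2, k3] = np.sum(x * (np.cos(ang) + np.sin(ang)))
    return H

def flipneg(A, axis):
    """Index map k -> (-k) mod N along one axis."""
    N = A.shape[axis]
    return np.take(A, (-np.arange(N)) % N, axis=axis)

x = np.random.default_rng(5).standard_normal((4, 4, 4))
S = x
for ax in range(3):                      # row-column evaluation of X-hat
    S = dht_axis(S, ax)
X = 0.5 * (flipneg(S, 0) + flipneg(S, 1) + flipneg(S, 2)
           - flipneg(flipneg(flipneg(S, 0), 1), 2))
assert np.allclose(X, true_dht3(x))      # matches the direct 3-D DHT
```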
Other fast algorithms have been developed, such as radix-2, radix-4, and split radix. For example, Boussakta (2000) [14] developed a 3-D vector-radix algorithm, which applies the radix decomposition to all three dimensions simultaneously rather than one dimension at a time.
It was also shown in Boussakta (2000) [14] that this 3-D vector-radix algorithm requires fewer multiplications and additions than the row-column approach. The drawback is that the implementation of these radix-type algorithms is hard to generalize to signals of arbitrary dimensions.
Number theoretic transforms have also been used for solving the MD-DHT, since they perform extremely fast convolutions. In Boussakta (1988), [15] it was shown how to decompose the MD-DHT into a form consisting of convolutions:
For the 2-D case (the 3-D case is also covered in the stated reference), the DHT

$$X(k_1, k_2) = \sum_{n_1=0}^{N_1-1} \sum_{n_2=0}^{N_2-1} x(n_1, n_2)\, \operatorname{cas}\!\left(2\pi\left(\frac{n_1 k_1}{N_1} + \frac{n_2 k_2}{N_2}\right)\right)$$

can be decomposed into 1-D and 2-D circular convolutions: the terms with n_1 = 0 or n_2 = 0 reduce to 1-D transforms, while the remaining double sum over n_1, n_2 ≥ 1 can be rewritten as a 2-D circular convolution.
At this point we present the Fermat number transform (FNT). The t-th Fermat number is given by F_t = 2^b + 1, with b = 2^t. The well-known Fermat numbers are those for t = 0, 1, ..., 6 (F_t is prime for 0 ≤ t ≤ 4) (Boussakta, 1988). [15] The (2-D) Fermat number transform is given by

$$\tilde{X}(k_1, k_2) = \sum_{n_1=0}^{N_1-1} \sum_{n_2=0}^{N_2-1} x(n_1, n_2)\, \alpha_1^{n_1 k_1} \alpha_2^{n_2 k_2} \pmod{F_t}$$

with α1 and α2 roots of unity of order N1 and N2 modulo F_t, respectively.
Going back to the decomposition, the last term (the double sum over n_1, n_2 ≥ 1) will be denoted X_C(k_1, k_2). If g_1 and g_2 are primitive roots of N_1 and N_2 (which are guaranteed to exist if N_1 and N_2 are prime), then n_1 and n_2 map to powers of g_1 and g_2. So, mapping n_1 and k_1 to exponents of g_1, and n_2 and k_2 to exponents of g_2, one gets the following,

$$X_C(g_1^{a}, g_2^{b}) = \sum_{c=0}^{N_1-2} \sum_{d=0}^{N_2-2} x(g_1^{-c}, g_2^{-d})\, \operatorname{cas}\!\left(2\pi\left(\frac{g_1^{a-c}}{N_1} + \frac{g_2^{b-d}}{N_2}\right)\right),$$

which is now a circular convolution. With y_1(c, d) = x(g_1^{−c}, g_2^{−d}) and y_2(c, d) = cas(2π(g_1^c/N_1 + g_2^d/N_2)), one has

$$X_C = \operatorname{FNT}^{-1}\left(\operatorname{FNT}(y_1) \times \operatorname{FNT}(y_2)\right),$$

where × denotes term-by-term multiplication. It was also stated in Boussakta (1988) [15] that this algorithm reduces the number of multiplications by a factor of 8–20 over other DHT algorithms, at the cost of a slight increase in the number of shift and add operations, which are assumed to be simpler than multiplications. The drawback of this algorithm is the constraint that each dimension of the transform has a primitive root.
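As a toy illustration of the FNT machinery (the parameters N = 4, F_2 = 17, and α = 4 are our own choices for demonstration, not values from Boussakta's paper), the following sketch computes a cyclic convolution by term-by-term multiplication in the transform domain, entirely in exact modular arithmetic:

```python
F = 17                        # Fermat number F_2 = 2**(2**2) + 1 (prime)
N, alpha = 4, 4               # transform length and a root of unity of order N
                              # mod F: 4**2 = 16 = -1 (mod 17), so 4**4 = 1

def fnt(x):
    """Forward Fermat number transform: X_k = sum_n x_n * alpha^(n*k) mod F."""
    return [sum(x[n] * pow(alpha, n * k, F) for n in range(N)) % F
            for k in range(N)]

def ifnt(X):
    """Inverse FNT, using the modular inverses of alpha and N."""
    ia, iN = pow(alpha, -1, F), pow(N, -1, F)
    return [iN * sum(X[k] * pow(ia, n * k, F) for k in range(N)) % F
            for n in range(N)]

x, y = [3, 1, 4, 1], [2, 7, 1, 8]
Z = [(a * b) % F for a, b in zip(fnt(x), fnt(y))]   # term-by-term product
z = ifnt(Z)                                          # cyclic convolution mod F
direct = [sum(x[m] * y[(n - m) % N] for m in range(N)) % F for n in range(N)]
assert z == direct
```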
In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An inverse DFT (IDFT) is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT is therefore said to be a frequency domain representation of the original input sequence. If the original sequence spans all the non-zero values of a function, its DTFT is continuous, and the DFT provides discrete samples of one cycle. If the original sequence is one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT cycle.
A fast Fourier transform (FFT) is an algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse (IDFT). A Fourier transform converts a signal from its original domain to a representation in the frequency domain and vice versa. The DFT is obtained by decomposing a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is often too slow to be practical. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse factors. As a result, it manages to reduce the complexity of computing the DFT from O(n²), which arises if one simply applies the definition of the DFT, to O(n log n), where n is the data size. The difference in speed can be enormous, especially for long data sets where n may be in the thousands or millions. In the presence of round-off error, many FFT algorithms are much more accurate than evaluating the DFT definition directly or indirectly. There are many different FFT algorithms based on a wide range of published theories, from simple complex-number arithmetic to group theory and number theory.
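For instance, one can confirm on a small vector that a library FFT returns exactly the values given by the O(n²) definition (a quick NumPy sketch of ours):

```python
import numpy as np

def naive_dft(x):
    """O(n^2) DFT straight from the definition."""
    n = np.arange(len(x))
    return np.exp(-2j * np.pi * np.outer(n, n) / len(x)) @ x

x = np.random.default_rng(6).standard_normal(64)
assert np.allclose(naive_dft(x), np.fft.fft(x))   # same values, O(n log n) time
```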
In mathematics, Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer.
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. The DCT, first proposed by Nasir Ahmed in 1972, is a widely used transformation technique in signal processing and data compression. It is used in most digital media, including digital images, digital video, digital audio, digital television, digital radio, and speech coding. DCTs are also important to numerous other applications in science and engineering, such as digital signal processing, telecommunication devices, reducing network bandwidth usage, and spectral methods for the numerical solution of partial differential equations.
In mathematics, the discrete sine transform (DST) is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using a purely real matrix. It is equivalent to the imaginary parts of a DFT of roughly twice the length, operating on real data with odd symmetry (since the Fourier transform of a real and odd function is imaginary and odd), where in some variants the input and/or output data are shifted by half a sample.
Rader's algorithm (1968), named for Charles M. Rader of MIT Lincoln Laboratory, is a fast Fourier transform (FFT) algorithm that computes the discrete Fourier transform (DFT) of prime sizes by re-expressing the DFT as a cyclic convolution.
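A compact sketch of the idea (ours, and unoptimized: a production implementation would pad the length-(p − 1) cyclic convolution to a convenient composite size rather than calling a length-(p − 1) FFT directly):

```python
import numpy as np

def primitive_root(p):
    """Smallest primitive root modulo the prime p (brute force)."""
    for g in range(2, p):
        if len({pow(g, k, p) for k in range(p - 1)}) == p - 1:
            return g

def rader_dft(x):
    """DFT of prime length p via a length-(p-1) cyclic convolution."""
    p = len(x)
    g = primitive_root(p)
    ginv = pow(g, -1, p)
    perm = [pow(g, q, p) for q in range(p - 1)]        # n = g^q mod p
    iperm = [pow(ginv, q, p) for q in range(p - 1)]    # n = g^(-q) mod p
    a = x[iperm]                                       # a_q = x_{g^(-q)}
    b = np.exp(-2j * np.pi * np.array(perm) / p)       # b_r = exp(-2*pi*i*g^r/p)
    conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))  # cyclic convolution a*b
    X = np.empty(p, dtype=complex)
    X[0] = x.sum()                                     # DC term, handled apart
    for m in range(p - 1):
        X[perm[m]] = x[0] + conv[m]                    # X_{g^m} = x_0 + (a*b)_m
    return X

x = np.random.default_rng(7).standard_normal(7)
assert np.allclose(rader_dft(x), np.fft.fft(x))
```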
The chirp Z-transform (CZT) is a generalization of the discrete Fourier transform (DFT). While the DFT samples the Z plane at uniformly-spaced points along the unit circle, the chirp Z-transform samples along spiral arcs in the Z-plane, corresponding to straight lines in the S plane. The DFT, real DFT, and zoom DFT can be calculated as special cases of the CZT.
In signal processing and statistics, a window function is a mathematical function that is zero-valued outside of some chosen interval. Typically, window functions are symmetric around the middle of the interval, approach a maximum in the middle, and taper away from the middle. Mathematically, when another function or waveform/data-sequence is "multiplied" by a window function, the product is also zero-valued outside the interval: all that is left is the part where they overlap, the "view through the window". Equivalently, and in actual practice, the segment of data within the window is first isolated, and then only that data is multiplied by the window function values. Thus, tapering, not segmentation, is the main purpose of window functions.
Bruun's algorithm is a fast Fourier transform (FFT) algorithm based on an unusual recursive polynomial-factorization approach, proposed for powers of two by G. Bruun in 1978 and generalized to arbitrary even composite sizes by H. Murakami in 1996. Because its operations involve only real coefficients until the last computation stage, it was initially proposed as a way to efficiently compute the discrete Fourier transform (DFT) of real data. Bruun's algorithm has not seen widespread use, however, as approaches based on the ordinary Cooley–Tukey FFT algorithm have been successfully adapted to real data with at least as much efficiency. Furthermore, there is evidence that Bruun's algorithm may be intrinsically less accurate than Cooley–Tukey in the face of finite numerical precision.
The Cooley–Tukey algorithm, named after J. W. Cooley and John Tukey, is the most common fast Fourier transform (FFT) algorithm. It re-expresses the discrete Fourier transform (DFT) of an arbitrary composite size N = N1N2 in terms of N1 smaller DFTs of sizes N2, recursively, to reduce the computation time to O(N log N) for highly composite N (smooth numbers). Because of the algorithm's importance, specific variants and implementation styles have become known by their own names.
In signal processing, a finite impulse response (FIR) filter is a filter whose impulse response is of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which may have internal feedback and may continue to respond indefinitely.
In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function u(t) of a real variable and produces another function of a real variable H(u)(t). The Hilbert transform is given by the Cauchy principal value of the convolution with the function 1/(πt). It has a particularly simple representation in the frequency domain: it imparts a phase shift of ±90° (π/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency. The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal u(t). The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions.
In mathematics, the discrete-time Fourier transform (DTFT) is a form of Fourier analysis that is applicable to a sequence of discrete values.
The Schönhage–Strassen algorithm is an asymptotically fast multiplication algorithm for large integers, published by Arnold Schönhage and Volker Strassen in 1971. It works by recursively applying the fast Fourier transform (FFT) over the integers modulo 2^n + 1. The run-time bit complexity to multiply two n-digit numbers using the algorithm is O(n · log n · log log n) in big O notation.
In the context of fast Fourier transform algorithms, a butterfly is a portion of the computation that combines the results of smaller discrete Fourier transforms (DFTs) into a larger DFT, or vice versa. The name "butterfly" comes from the shape of the data-flow diagram in the radix-2 case. The earliest occurrence in print of the term is thought to be in a 1969 MIT technical report. The same structure can also be found in the Viterbi algorithm, used for finding the most likely sequence of hidden states.
In quantum computing, the quantum Fourier transform (QFT) is a linear transformation on quantum bits, and is the quantum analogue of the discrete Fourier transform. The quantum Fourier transform is a part of many quantum algorithms, notably Shor's algorithm for factoring and computing the discrete logarithm, the quantum phase estimation algorithm for estimating the eigenvalues of a unitary operator, and algorithms for the hidden subgroup problem. The quantum Fourier transform was discovered by Don Coppersmith. With small modifications to the QFT, it can also be used for performing fast integer arithmetic operations such as addition and multiplication.
In mathematical analysis and applications, multidimensional transforms are used to analyze the frequency content of signals in a domain of two or more dimensions.
As with 1-D digital signal processing, there are efficient algorithms for multidimensional signal processing. The efficiency of an algorithm can be evaluated by the amount of computational resources it takes to compute the output or quantity of interest. Two very efficient algorithms for multidimensional signals are explained here. For simplicity they are described for 2-D signals, but the same theory holds for M-D signals. The exact computational savings for each algorithm are also mentioned.
The fast Fourier transform (FFT) is an important tool in the fields of image and signal processing. The hexagonal fast Fourier transform (HFFT) uses existing FFT routines to compute the discrete Fourier transform (DFT) of images that have been captured with hexagonal sampling. The hexagonal grid serves as the optimal sampling lattice for isotropically band-limited two-dimensional signals and has a sampling efficiency which is 13.4% greater than the sampling efficiency obtained from rectangular sampling. Several other advantages of hexagonal sampling include consistent connectivity, higher symmetry, greater angular resolution, and equidistant neighbouring pixels. Sometimes several of these advantages compound, increasing the efficiency by 50% in terms of computation and storage when compared to rectangular sampling. Despite all of these advantages of hexagonal sampling over rectangular sampling, its application has been limited because of the lack of an efficient coordinate system. However, that limitation has been removed with the recent development of the hexagonal efficient coordinate system, which includes the benefit of a separable Fourier kernel. The existence of a separable Fourier kernel for a hexagonally sampled image allows the use of existing FFT routines to efficiently compute the DFT of such an image.
The vector-radix FFT algorithm is a multidimensional fast Fourier transform (FFT) algorithm that generalizes the ordinary Cooley–Tukey FFT algorithm by dividing the transform dimensions by arbitrary radices. It breaks a multidimensional (MD) discrete Fourier transform (DFT) down into successively smaller MD DFTs until, ultimately, only trivial MD DFTs need to be evaluated.