A discrete Hartley transform (DHT) is a Fourier-related transform of discrete, periodic data similar to the discrete Fourier transform (DFT), with analogous applications in signal processing and related fields. Its main distinction from the DFT is that it transforms real inputs to real outputs, with no intrinsic involvement of complex numbers. Just as the DFT is the discrete analogue of the continuous Fourier transform, the DHT is the discrete analogue of the continuous Hartley transform, introduced by R. V. L. Hartley in 1942.
Because there are fast algorithms for the DHT analogous to the fast Fourier transform (FFT), the DHT was originally proposed by R. N. Bracewell in 1983 as a more efficient computational tool in the common case where the data are purely real. It was subsequently argued, however, that specialized FFT algorithms for real inputs or outputs can ordinarily be found with slightly fewer operations than any corresponding algorithm for the DHT (see below).
Formally, the discrete Hartley transform is a linear, invertible function H : R^N → R^N (where R denotes the set of real numbers). The N real numbers x_0, ..., x_{N−1} are transformed into the N real numbers H_0, ..., H_{N−1} according to the formula

    H_k = Σ_{n=0}^{N−1} x_n [cos(2πkn/N) + sin(2πkn/N)],   k = 0, ..., N−1,
where π is pi. The combination cos(z) + sin(z) is sometimes denoted cas(z), and should be contrasted with the complex exponential e^{−2πikn/N} = cos(2πkn/N) − i sin(2πkn/N) that appears in the DFT definition (where i is the imaginary unit).
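For illustration, the definition above can be transcribed directly into a naive O(N²) routine. This is a NumPy sketch, not an optimized implementation, and the helper name `dht` is our own rather than a standard API:

```python
import numpy as np

def dht(x):
    """Naive O(N^2) discrete Hartley transform, straight from the definition:
    H_k = sum_n x_n * cas(2*pi*k*n/N), with cas(z) = cos(z) + sin(z)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)              # column vector of output indices
    arg = 2 * np.pi * k * n / N
    cas = np.cos(arg) + np.sin(arg)   # the N-by-N "cas" matrix
    return cas @ x
```

For real input, the result agrees with Re(FFT(x)) − Im(FFT(x)), which follows from the relation between the DHT and the DFT discussed below.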
As for the DFT, the overall scale factor in front of the transform and the sign of the sine term are a matter of convention; they differ in some treatments, but do not affect the essential properties.
The transform can be interpreted as the multiplication of the vector (x_0, ..., x_{N−1}) by an N-by-N matrix; therefore, the discrete Hartley transform is a linear operator. The matrix is invertible; the inverse transformation, which allows one to recover the x_n from the H_k, is simply the DHT of H_k multiplied by 1/N. That is, the DHT is its own inverse (involutory), up to an overall scale factor.
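The involution property is easy to check numerically. The sketch below assumes the naive transcription of the definition from above (the helper `dht` is our own, not a library routine):

```python
import numpy as np

def dht(x):
    """Naive discrete Hartley transform, from the definition."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    k = np.arange(N).reshape(-1, 1)
    n = np.arange(N)
    arg = 2 * np.pi * k * n / N
    return (np.cos(arg) + np.sin(arg)) @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
# Applying the DHT twice and dividing by N recovers the input;
# equivalently, the cas matrix M satisfies M @ M == N * I.
assert np.allclose(dht(dht(x)) / len(x), x)
```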
The DHT can be used to compute the DFT, and vice versa. For real inputs x_n, the DFT output X_k has real part (H_k + H_{N−k})/2 and imaginary part (H_{N−k} − H_k)/2. Conversely, the DHT is equivalent to computing the DFT of x_n multiplied by 1 + i, then taking the real part of the result.
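Both identities can be verified numerically. The following NumPy sketch (helper names are ours) computes the DHT from a complex FFT and then reconstructs the DFT from the DHT:

```python
import numpy as np

def dht(x):
    """DHT via the FFT: H_k = Re(X_k) - Im(X_k) for real x."""
    X = np.fft.fft(x)
    return X.real - X.imag

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
H = dht(x)

# H_{N-k}, with indices taken mod N (so H_0 stays in place):
Hrev = np.roll(H[::-1], 1)

# DFT from DHT: real part (H_k + H_{N-k})/2, imaginary part (H_{N-k} - H_k)/2
X = (H + Hrev) / 2 + 1j * (Hrev - H) / 2
assert np.allclose(X, np.fft.fft(x))

# Conversely, the DHT is the real part of the DFT of (1 + i) * x:
assert np.allclose(H, np.fft.fft((1 + 1j) * x).real)
```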
As with the DFT, a cyclic convolution z = x∗y of two vectors x = (x_n) and y = (y_n) to produce a vector z = (z_n), all of length N, becomes a simple operation after the DHT. In particular, suppose that the vectors X, Y, and Z denote the DHT of x, y, and z respectively. Then the elements of Z are given by:

    Z_k = [X_k (Y_k + Y_{N−k}) + X_{N−k} (Y_k − Y_{N−k})] / 2,
where we take all of the vectors to be periodic in N (X_N = X_0, and so on). Thus, just as the DFT transforms a convolution into a pointwise multiplication of complex numbers (pairs of real and imaginary parts), the DHT transforms a convolution into a simple combination of pairs of real frequency components. The inverse DHT then yields the desired vector z. In this way, a fast algorithm for the DHT (see below) yields a fast algorithm for convolution. (This is slightly more expensive than the corresponding procedure for the DFT, not including the costs of the transforms, because the pairwise operation above requires 8 real-arithmetic operations compared to the 6 of a complex multiplication. This count does not include the division by 2, which can be absorbed, e.g., into the 1/N normalization of the inverse DHT.)
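The procedure can be sketched as follows, using an FFT-based DHT for brevity (function names are ours, not a standard API), and checked against ordinary FFT-based cyclic convolution:

```python
import numpy as np

def dht(x):
    """DHT via the FFT: H_k = Re(X_k) - Im(X_k) for real x."""
    X = np.fft.fft(x)
    return X.real - X.imag

def dht_convolve(x, y):
    """Cyclic convolution via the DHT convolution theorem:
    Z_k = [X_k (Y_k + Y_{N-k}) + X_{N-k} (Y_k - Y_{N-k})] / 2."""
    N = len(x)
    X, Y = dht(x), dht(y)
    Xrev = np.roll(X[::-1], 1)   # X_{N-k}, indices mod N
    Yrev = np.roll(Y[::-1], 1)   # Y_{N-k}
    Z = (X * (Y + Yrev) + Xrev * (Y - Yrev)) / 2
    return dht(Z) / N            # inverse DHT = forward DHT scaled by 1/N

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.5, -1.0, 2.0, 0.0])
z = dht_convolve(x, y)

# Compare with the usual FFT-based cyclic convolution:
z_ref = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real
assert np.allclose(z, z_ref)
```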
Just as for the DFT, evaluating the DHT definition directly would require O(N^2) arithmetical operations (see Big O notation). There are fast algorithms similar to the FFT, however, that compute the same result in only O(N log N) operations. Nearly every FFT algorithm, from Cooley–Tukey to prime-factor to Winograd (Sorensen et al., 1985) to Bruun's (Bini & Bozzo, 1993), has a direct analogue for the discrete Hartley transform. (However, a few of the more exotic FFT algorithms, such as the QFT, have not yet been investigated in the context of the DHT.)
In particular, the DHT analogue of the Cooley–Tukey algorithm is commonly known as the fast Hartley transform (FHT) algorithm, and was first described by Bracewell in 1984. This FHT algorithm, at least when applied to power-of-two sizes N, was the subject of United States patent number 4,646,256, issued in 1987 to Stanford University. Stanford placed this patent in the public domain in 1994 (Bracewell, 1995).
As mentioned above, DHT algorithms are typically slightly less efficient (in terms of the number of floating-point operations) than the corresponding DFT algorithm (FFT) specialized for real inputs (or outputs). This was first argued by Sorensen et al. (1987) and Duhamel & Vetterli (1987). The latter authors obtained what appears to be the lowest published operation count for the DHT of power-of-two sizes, employing a split-radix algorithm (similar to the split-radix FFT) that breaks a DHT of length N into a DHT of length N/2 and two real-input DFTs (not DHTs) of length N/4. In this way, they argued that a DHT of power-of-two length can be computed with, at best, 2 more additions than the corresponding number of arithmetic operations for the real-input DFT.
On present-day computers, performance is determined more by cache and CPU pipeline considerations than by strict operation counts, and a slight difference in arithmetic cost is unlikely to be significant. Since FHT and real-input FFT algorithms have similar computational structures, neither appears to have a substantial a priori speed advantage (Popovic and Sevic, 1994). As a practical matter, highly optimized real-input FFT libraries are available from many sources (e.g., from CPU vendors such as Intel), whereas highly optimized DHT libraries are less common.
On the other hand, the redundant computations in FFTs due to real inputs are more difficult to eliminate for large prime N, despite the existence of O(N log N) complex-data algorithms for such cases, because the redundancies are hidden behind intricate permutations and/or phase rotations in those algorithms. In contrast, a standard prime-size FFT algorithm, Rader's algorithm, can be directly applied to the DHT of real data for roughly a factor of two less computation than that of the equivalent complex FFT (Frigo and Johnson, 2005). Alternatively, a non-DHT-based adaptation of Rader's algorithm for real-input DFTs is also possible (Chu & Burrus, 1982).