
Notes on the FFT

C. S. Burrus

Department of Electrical and Computer Engineering
Rice University, Houston, TX 77251-1892, e-mail: csb@rice.edu
September 29, 1997

This is a note describing results on efficient algorithms to calculate the discrete Fourier transform (DFT). The purpose is to report work done at Rice University, but other contributions used by the DSP research group at Rice are also cited. Perhaps the most interesting is the discovery that the Cooley-Tukey FFT was described by Gauss in 1805 [1]. That gives some indication of the age of research on the topic, and the fact that a recently compiled bibliography [2] on efficient algorithms contains over 3400 entries indicates its volume. Three IEEE Press reprint books contain papers on the FFT [3, 4, 5]. An excellent general purpose FFT program has been described in [6, 7] and is available over the internet.

There are several books [8, 9, 10, 11, 12, 13, 14, 15, 16] that give a good modern theoretical background for this work, one book [17] that gives the basic theory plus both FORTRAN and TMS 320 assembly language programs, and other books [18, 19, 20] that contain a chapter on advanced FFT topics. The history of the FFT is outlined in [21, 1] and excellent survey articles can be found in [22, 23]. The foundation of much of the modern work on efficient algorithms was done by S. Winograd. This can be found in [24, 25, 26]. An outline and discussion of his theorems can be found in [18] as well as [8, 9, 10, 11].

Efficient FFT algorithms for length-2^M were described by Gauss and discovered in modern times by Cooley and Tukey [27]. These have been highly developed, and good examples of FORTRAN programs can be found in [17]. Several new algorithms have been published that require the least known amount of total arithmetic [28, 29, 30, 31, 32]. Of these, the split-radix FFT [29, 30, 33, 34] seems to have the best structure for programming, and an efficient program has been written [35] to implement it. A mixture of decimation-in-time and decimation-in-frequency with very good efficiency is given in [36]. Theoretical bounds on the number of multiplications required for the FFT based on Winograd's theories are given in [11, 37]. Schemes for calculating an in-place, in-order radix-2 FFT are given in [38, 39, 40]. Discussion of various forms of unscramblers is given in [41, 42, 43, 44, 45, 46, 47, 48, 49]. A discussion of the relation of the computer architecture, algorithm and compiler can be found in [50, 51].
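
As a minimal illustration of the radix-2 decimation-in-time idea (a sketch only, not the optimized split-radix or FORTRAN programs cited above), the following Python fragment splits a length-2^M DFT into two half-length DFTs of the even- and odd-indexed samples and combines them with twiddle factors:

    import cmath

    def fft_radix2(x):
        # Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two.
        N = len(x)
        if N == 1:
            return list(x)
        even = fft_radix2(x[0::2])      # DFT of the even-indexed samples
        odd = fft_radix2(x[1::2])       # DFT of the odd-indexed samples
        X = [0] * N
        for k in range(N // 2):
            w = cmath.exp(-2j * cmath.pi * k / N)   # twiddle factor W_N^k
            X[k] = even[k] + w * odd[k]
            X[k + N // 2] = even[k] - w * odd[k]
        return X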

The "other" FFT is the prime factor algorithm (PFA) which uses an index map originally developed by Thomas and by Good. The theory of the PFA was derived in [52] and further developed in [53, 17], where an efficient in-order, in-place program is given. More results on the PFA are given in [54, 55, 40, 56, 57]. A method has been developed to use dynamic programming to design optimal FFT programs that minimize the number of additions and data transfers as well as multiplications [58]. This new approach designs custom algorithms for a particular computer architecture. An efficient and practical development of Winograd's ideas has given a design method that does not require the rather difficult Chinese remainder theorem [18, 59] for short prime length FFT's. These ideas have been used to design modules of length 11, 13, 17, 19, and 25 [60]. Other methods for designing short DFT's can be found in [61, 62]. A use of these ideas with distributed arithmetic and table look-up rather than multiplication is given in [63]. A program that implements the nested Winograd Fourier transform algorithm (WFTA) is given in [8], but it has not proven as fast or as versatile as the PFA [53]. An interesting use of the PFA was announced [64] in searching for large prime numbers.
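
A bare-bones Python sketch of the Thomas-Good index map that underlies the PFA is given below, assuming N = N1*N2 with N1 and N2 relatively prime (the cited in-place, in-order programs are far more efficient). The input map turns the length-N DFT into a true N1-by-N2 two-dimensional DFT with no twiddle factors, and the Chinese-remainder-theorem output map unscrambles the result:

    import cmath

    def dft(x):
        # Direct DFT, standing in for the optimized short-length modules.
        N = len(x)
        return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
                for k in range(N)]

    def pfa_dft(x, N1, N2):
        # Length N1*N2 DFT via the Thomas-Good prime-factor map, gcd(N1, N2) = 1.
        N = N1 * N2
        # Input map: n = (N2*n1 + N1*n2) mod N
        a = [[x[(N2 * n1 + N1 * n2) % N] for n2 in range(N2)] for n1 in range(N1)]
        # Length-N2 DFTs along the rows, then length-N1 DFTs along the columns
        a = [dft(row) for row in a]
        cols = [dft([a[n1][k2] for n1 in range(N1)]) for k2 in range(N2)]
        # Output map (Chinese remainder theorem): k = (N2*M2*k1 + N1*M1*k2) mod N
        M1 = pow(N1, -1, N2)            # N1 inverse mod N2 (Python 3.8+)
        M2 = pow(N2, -1, N1)            # N2 inverse mod N1
        X = [0] * N
        for k1 in range(N1):
            for k2 in range(N2):
                X[(N2 * M2 * k1 + N1 * M1 * k2) % N] = cols[k2][k1]
        return X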

These efficient algorithms can be used not only on DFT's but also on other transforms with a similar structure. They have been applied to the discrete Hartley transform [65, 66] and the discrete cosine transform [32, 67, 68].
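
One simple illustration of this structural similarity: for real data, the discrete Hartley transform can be obtained from the DFT of the same sequence by H[k] = Re X[k] - Im X[k], so a DFT algorithm immediately yields a DHT. A Python sketch, using a direct DFT for clarity:

    import cmath

    def dht_via_dft(x):
        # DHT of a real sequence from its DFT: H[k] = Re X[k] - Im X[k],
        # since cas(t) = cos(t) + sin(t) and X[k] = sum x[n] (cos - j sin).
        N = len(x)
        X = [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
             for k in range(N)]
        return [Xk.real - Xk.imag for Xk in X]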

The fast Hartley transform has been proposed as a superior method for real data analysis but that has been shown not to be the case. A well-designed real-data FFT [69] is always as good as or better than a well-designed Hartley transform [65, 70, 71, 72, 73]. The Bruun algorithm [74, 75] also looks promising for real data applications as does the Rader-Brenner algorithm [76, 77, 72]. A novel approach to calculating the inverse DFT is given in [78].
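
One ingredient of such a well-designed real-data FFT is the conjugate symmetry X[N-k] = X*[k] of the transform of real data; it allows, for example, two real length-N sequences to be transformed with a single complex FFT. A Python sketch, where fft may be any complex FFT routine (such as the radix-2 sketch above):

    def two_real_ffts(x, y, fft):
        # DFTs of two real sequences from one complex FFT of z = x + j*y,
        # using conj(X[N-k]) = X[k] and conj(Y[N-k]) = Y[k] for real x, y.
        N = len(x)
        Z = fft([xr + 1j * yr for xr, yr in zip(x, y)])
        X, Y = [0] * N, [0] * N
        for k in range(N):
            Zc = Z[(N - k) % N].conjugate()
            X[k] = 0.5 * (Z[k] + Zc)
            Y[k] = (Z[k] - Zc) / 2j
        return X, Y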

General length algorithms include [79, 80, 81]. For lengths that are not highly composite or prime, the chirp z-transform is a good candidate [17, 82] for longer lengths and an efficient order-N^2 algorithm called the QFT [83, 84, 85] for shorter lengths. A method which automatically generates near-optimal prime length Winograd based programs has been given in [59, 86, 87, 88, 89]. This gives the same efficiency for shorter lengths (i.e. N <= 19) and new, well-structured algorithms for much longer lengths. Special methods are available for very long lengths [90, 91]. A very interesting general length FFT system called the FFTW has been developed by Frigo and Johnson at MIT which uses a library of efficient "codelets" that are composed for a very efficient calculation of the DFT on a wide variety of computers [6, 7]. For most lengths and on most computers, this is the fastest FFT today.
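
The chirp z-transform rests on the identity nk = [n^2 + k^2 - (k - n)^2]/2, which turns a DFT of arbitrary length into a convolution with a chirp sequence; that convolution can then be done with an FFT of any convenient (e.g. power-of-two) length. A Python sketch of the identity itself, with the convolution written directly (the cited references handle the efficient implementation):

    import cmath

    def czt_dft(x):
        # DFT of arbitrary length N via the chirp identity nk = (n*n + k*k - (k-n)**2)/2.
        # The convolution below is written directly; in practice it is computed with
        # a zero-padded power-of-two FFT, which is the point of the method.
        N = len(x)
        w = cmath.exp(-1j * cmath.pi / N)               # W^(1/2), with W = exp(-2j*pi/N)
        a = [x[n] * w ** (n * n) for n in range(N)]     # pre-multiply by the chirp
        X = []
        for k in range(N):
            acc = sum(a[n] * w ** (-((k - n) ** 2)) for n in range(N))  # chirp convolution
            X.append(w ** (k * k) * acc)                # post-multiply by the chirp
        return X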

The use of the FFT to calculate discrete convolution was one of its earliest uses. Although the more direct rectangular transform [92] would seem to be more efficient, use of the FFT or PFA is still probably the fastest method on a general purpose computer or DSP chip [93, 69, 70, 94], although the use of distributed arithmetic [63] or number theoretic transforms [95] with special hardware may be even faster. Special algorithms for use with the short-time Fourier transform [96], for the calculation of a few DFT values [97, 98, 99], and for recursive implementation [100] have also been developed. An excellent analysis of efficient programming of the FFT on DSP microprocessors is given in [101, 51]. Formulations of the DFT in terms of tensor or Kronecker products look promising for developing algorithms for parallel and vector computer architectures [102, 12, 103, 104, 105, 106, 107].
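
In outline, FFT-based convolution zero-pads both sequences, multiplies their transforms point by point, and inverse-transforms. The sketch below assumes some complex FFT routine fft (such as the radix-2 sketch above) and obtains the inverse from the forward transform by conjugation:

    def fast_convolve(x, h, fft):
        # Linear convolution via pointwise multiplication of zero-padded FFTs.
        L = len(x) + len(h) - 1
        N = 1
        while N < L:                    # next power of two >= L (for a radix-2 fft)
            N *= 2
        X = fft(list(x) + [0] * (N - len(x)))
        H = fft(list(h) + [0] * (N - len(h)))
        Y = [Xk * Hk for Xk, Hk in zip(X, H)]
        # Inverse DFT via the forward transform: idft(Y) = conj(fft(conj(Y))) / N
        y = [v.conjugate() / N for v in fft([v.conjugate() for v in Y])]
        return y[:L]                    # for real x and h, imaginary parts are round-off only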

Various approaches to calculating approximate DFTs have been based on cordic methods, short word lengths, or some form of pruning. A new method that uses the characteristics of the signals being transformed combines the discrete wavelet transform (DWT) with the DFT to give an approximate FFT with O(N) multiplications [108, 109, 110, 111] for certain signal classes.

The study of efficient algorithms not only has a long history and a large bibliography; it is still an exciting research field where new results are used in practical applications.

More information can be found on the Rice DSP Group's web page: http://www-dsp.rice.edu and this document can be found at: http://www-dsp.rice.edu/res/fft/fftnote.asc.

