One might get the impression that there is a strict dichotomy that divides cache-aware and cache-oblivious algorithms, but the two are not mutually exclusive in practice. Given an implementation of a cache-oblivious strategy, one can further optimize it for the cache characteristics of a particular machine in order to improve the constant factors. For example, one can tune the radices used, the transition point between the radix- $\sqrt{n}$ algorithm and the bounded-radix algorithm, or other algorithmic choices as described in "Memory strategies in FFTW". The advantage of starting cache-aware tuning with a cache-oblivious approach is that the starting point already exploits all levels of the cache to some extent, and one has reason to hope that good performance on one machine will be more portable to other architectures than for a purely cache-aware “blocking” approach. In practice, we have found this combination to be very successful with FFTW.
The recursive cache-oblivious strategies described above form a useful starting point, but FFTW supplements them with a number of additional tricks, and also exploits cache-obliviousness in less-obvious forms.
We currently find that the general radix- $\sqrt{n}$ algorithm is beneficial only when $n$ becomes very large, on the order of ${2}^{20}\approx {10}^{6}$ . In practice, this means that we use at most a single step of radix- $\sqrt{n}$ (two steps would only be used for $n\gtrsim {2}^{40}$ ). The reason for this is that the implementation of radix- $\sqrt{n}$ is less efficient than that of a bounded radix: the latter has the advantage that an entire radix butterfly can be performed in hard-coded loop-free code within local variables/registers, including the necessary permutations and twiddle factors.
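The crossover just described amounts to a simple planning rule. The sketch below expresses it in C; the threshold mirrors the ${2}^{20}$ figure quoted above, but the names (`choose_strategy`, the `strategy` enum) and the hard cutoff are this sketch's own simplifications, not FFTW's actual tuned planner logic.

```c
/* Illustrative planning rule: apply a radix-sqrt(n) step only for very
 * large transforms, and fall back to bounded-radix recursion otherwise.
 * The threshold follows the ~2^20 crossover quoted in the text; FFTW's
 * real planner tunes this choice automatically per machine. */
enum strategy { BOUNDED_RADIX, RADIX_SQRT_N };

enum strategy choose_strategy(unsigned long n)
{
    const unsigned long LARGE = 1UL << 20; /* ~10^6, per the text */
    return (n >= LARGE) ? RADIX_SQRT_N : BOUNDED_RADIX;
}
```

Note that after a single radix- $\sqrt{n}$ step, the subproblems have size roughly $\sqrt{n}<{2}^{20}$ , so the rule naturally selects bounded-radix recursion for them.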
Thus, for more moderate $n$ , FFTW uses depth-first recursion with a bounded radix, similar in spirit to the algorithm of [link] but with much larger radices (radix 32 is common) and base cases (size 32 or 64 is common) as produced by the code generator of "Generating Small FFT Kernels". The self-optimization described in "Adaptive Composition of FFT Algorithms" allows the choice of radix and the transition to the radix- $\sqrt{n}$ algorithm to be tuned in a cache-aware (but entirely automatic) fashion.
For small $n$ (including the radix butterflies and the base cases of the recursion), hard-coded FFTs (FFTW's codelets) are employed. However, this gives rise to an interesting problem: a codelet for (e.g.) $n=64$ is $\sim 2000$ lines long, with hundreds of variables and over 1000 arithmetic operations that can be executed in many orders, so what order should be chosen? The key problem here is the efficient use of the CPU registers, which essentially form a nearly ideal, fully associative cache. Normally, one relies on the compiler for all code scheduling and register allocation, but the compiler needs help with such long blocks of code (indeed, the general register-allocation problem is NP-complete). In particular, FFTW's generator knows more about the code than the compiler does: the generator knows it is an FFT, and therefore it can use an optimal cache-oblivious schedule (analogous to the radix- $\sqrt{n}$ algorithm) to order the code independent of the number of registers [link] . The compiler is then used only for local “cache-aware” tuning (both for register allocation and the CPU pipeline). One practical difficulty is that some “optimizing” compilers will tend to greatly re-order the code, destroying FFTW's optimal schedule. With GNU gcc, we circumvent this problem by using compiler flags that explicitly disable certain stages of the optimizer. As a practical matter, one consequence of this scheduler is that FFTW's machine-independent codelets are no slower than machine-specific codelets generated by an automated search and optimization over many possible codelet implementations, as performed by the SPIRAL project [link] .
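As an illustration of disabling the relevant optimizer stages, gcc provides flags that turn off its instruction scheduler. The flags below are real gcc options, but the exact set of flags FFTW passes (and the codelet filename shown) depend on the FFTW version and target, so treat this as a hedged build-configuration sketch rather than FFTW's actual build rule:

```shell
# Sketch: compile a generated codelet while suppressing gcc's instruction
# scheduling passes, so the generator's cache-oblivious ordering of the
# arithmetic survives into the object code. The filename is hypothetical.
gcc -O2 -fno-schedule-insns -fno-schedule-insns2 -c codelet.c -o codelet.o
```

Register allocation and other purely local optimizations remain enabled, which is consistent with using the compiler only for the local “cache-aware” tuning described above.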