
In the stock genfft implementation, the schedule is finally unparsed to C. A variation from [link] implements the rest of a compiler back end and outputs assembly code.

SIMD instructions

Unfortunately, it is impossible to attain nearly peak performance on current popular processors while using only portable C code. Instead, a significant portion of the available computing power can only be accessed by using specialized SIMD (single-instruction multiple data) instructions, which perform the same operation in parallel on a data vector. For example, all modern “x86” processors can execute arithmetic instructions on “vectors” of four single-precision values (SSE instructions) or two double-precision values (SSE2 instructions) at a time, assuming that the operands are arranged consecutively in memory and satisfy a 16-byte alignment constraint. Fortunately, because nearly all of FFTW's low-level code is produced by genfft, machine-specific instructions could be exploited by modifying the generator; the improvements are then automatically propagated to all of FFTW's codelets, and in particular are not limited to a small set of sizes such as powers of two.
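As a minimal illustration of the kind of instruction involved (this is a sketch, not FFTW's generated code), the following C fragment uses the SSE2 intrinsics from emmintrin.h to add two pairs of double-precision values with a single vector instruction. The 16-byte alignment of the operands, required by the aligned load and store, is declared here with a GCC/Clang-specific attribute.

/* Sketch only: one SSE2 addition processes two double-precision values
 * at once, provided the operands are contiguous and 16-byte aligned. */
#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stdio.h>

int main(void)
{
    /* 16-byte aligned operands (GCC/Clang attribute syntax). */
    __attribute__((aligned(16))) double a[2] = { 1.0, 2.0 };
    __attribute__((aligned(16))) double b[2] = { 3.0, 4.0 };
    __attribute__((aligned(16))) double c[2];

    __m128d va = _mm_load_pd(a);      /* load two doubles */
    __m128d vb = _mm_load_pd(b);
    __m128d vc = _mm_add_pd(va, vb);  /* one instruction, two additions */
    _mm_store_pd(c, vc);

    printf("%g %g\n", c[0], c[1]);    /* prints: 4 6 */
    return 0;
}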

SIMD instructions are superficially similar to “vector processors”, which are designed to perform the same operation in parallel on all elements of a data array (a “vector”). The performance of “traditional” vector processors was best for long vectors that are stored in contiguous memory locations, and special algorithms were developed to implement the DFT efficiently on this kind of hardware [link], [link]. Unlike in vector processors, however, the SIMD vector length is small and fixed (usually 2 or 4). Because microprocessors depend on caches for performance, one cannot naively use SIMD instructions to simulate a long-vector algorithm: while on vector machines long vectors generally yield better performance, the performance of a microprocessor drops as soon as the data vectors exceed the capacity of the cache. Consequently, SIMD instructions are better seen as a restricted form of instruction-level parallelism than as a degenerate flavor of vector parallelism, and different DFT algorithms are required.

The technique used to exploit SIMD instructions in genfft is most easily understood for vectors of length two (e.g., SSE2). In this case, we view a complex DFT as a pair of real DFTs:

DFT(A + i·B) = DFT(A) + i·DFT(B),

where A and B are two real arrays. Our algorithm computes the two real DFTs in parallel using SIMD instructions, and then it combines the two outputs according to [link]. This SIMD algorithm has two important properties. First, if the data is stored as an array of complex numbers, as opposed to two separate real and imaginary arrays, the SIMD loads and stores always operate on correctly-aligned contiguous locations, even if the complex numbers themselves have a non-unit stride. Second, because the algorithm finds two-way parallelism in the real and imaginary parts of a single DFT (as opposed to performing two DFTs in parallel), we can completely parallelize DFTs of any size, not just even sizes or powers of 2.
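The first property can be seen in a sketch like the following (an illustrative fragment, not FFTW's generated output): with complex numbers stored in the usual interleaved format, one aligned SSE2 load brings the real and imaginary parts of a single complex number into one register, and a size-2 butterfly then adds and subtracts whole registers, operating on the real and imaginary parts in parallel.

/* Illustrative sketch only: a size-2 DFT (butterfly) on two interleaved
 * complex doubles using SSE2.  Each __m128d register holds the
 * (real, imaginary) pair of one complex number, so every add/sub
 * processes the real and imaginary parts of a single DFT in parallel. */
#include <emmintrin.h>

/* x points to two complex numbers stored as {re0, im0, re1, im1},
 * 16-byte aligned; the result overwrites x in place. */
static void butterfly2(double *x)
{
    __m128d a = _mm_load_pd(x);      /* (re0, im0) */
    __m128d b = _mm_load_pd(x + 2);  /* (re1, im1) */
    _mm_store_pd(x,     _mm_add_pd(a, b));  /* a + b */
    _mm_store_pd(x + 2, _mm_sub_pd(a, b));  /* a - b */
}

Note that the loads and stores touch only the contiguous (re, im) pair of each complex number, so the same access pattern remains aligned and contiguous even when consecutive complex numbers are separated by a non-unit stride.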

Source: OpenStax, Fast Fourier Transforms. OpenStax CNX. Nov 18, 2012. Download for free at http://cnx.org/content/col10550/1.22