All variants of the original Cooley-Tukey O(N log N) fast Fourier transform fundamentally exploit different ways to factor the discrete Fourier summation of length N.

For example, the *split-radix FFT* algorithm divides the Fourier summation of length N into three new Fourier summations: one of length N/2 and two of length N/4.

The *prime factor FFT* divides the Fourier summation of length N into two summations of lengths N1 and N2 (when such a factorization exists), where N1 and N2 must be relatively prime.
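The index re-mapping at the heart of the prime factor algorithm can be sketched in a few lines of Python. This is an illustration only, not CenterSpace's implementation; the two inner DFTs are delegated to NumPy for brevity, and the point is that the coprime re-indexing eliminates twiddle factors entirely:

```python
import numpy as np

def pfa_dft(x, N1, N2):
    """Prime-factor (Good-Thomas) DFT of length N = N1*N2, gcd(N1, N2) = 1.

    Re-indexes the 1-D DFT as an N1-by-N2 2-D DFT with no twiddle factors.
    """
    N = N1 * N2
    assert len(x) == N and np.gcd(N1, N2) == 1
    # Input ("Ruritanian") map: n = (n1*N2 + n2*N1) mod N
    A = np.empty((N1, N2), dtype=complex)
    for n1 in range(N1):
        for n2 in range(N2):
            A[n1, n2] = x[(n1 * N2 + n2 * N1) % N]
    # Independent row and column DFTs of lengths N1 and N2
    B = np.fft.fft(np.fft.fft(A, axis=0), axis=1)
    # Output (CRT) map: k is the unique index with k = k1 (mod N1), k = k2 (mod N2)
    X = np.empty(N, dtype=complex)
    inv2, inv1 = pow(N2, -1, N1), pow(N1, -1, N2)  # modular inverses (Python 3.8+)
    for k1 in range(N1):
        for k2 in range(N2):
            k = (k1 * N2 * inv2 + k2 * N1 * inv1) % N
            X[k] = B[k1, k2]
    return X
```

For example, `pfa_dft(x, 4, 9)` on a length-36 input agrees with a direct length-36 DFT, even though no length-36 transform is ever computed.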

These algorithms are typically applied recursively, and in combination with one another (or with still other factorizations) to maximize performance for a particular N.
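The recursive character of these factorizations is easiest to see in the simplest case. Below is a minimal radix-2 Cooley-Tukey FFT in Python; it is a teaching sketch, not how a production library such as NMath organizes the computation:

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even = fft_radix2(x[0::2])  # DFT of even-indexed samples (length n/2)
    odd = fft_radix2(x[1::2])   # DFT of odd-indexed samples (length n/2)
    # Combine the half-length results using the twiddle factors exp(-2*pi*i*k/n)
    twiddled = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd
    return np.concatenate([even + twiddled, even - twiddled])
```

A hybrid implementation recurses this way only while the remaining length divides by 2, switching to other kernels (radix-3, radix-5, prime factor, and so on) as the factorization demands.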

In modern implementations there really isn’t a single static FFT algorithm, but more a dynamic collection of FFT algorithms and tools that are cleverly collated for the Fourier transform type at hand. Major algorithmic changes occur in the underlying implementation as the length and forward domain (real or complex) of the problem vary. Sophisticated FFT implementations insulate the end-user programmer from all of this background machinery.

##### DFT length is fundamental to performance

The days of power-of-2-only FFT algorithms are dead. Users of modern FFT libraries should not need to worry about the complexities of finding the optimal algorithm for the FFT computation at hand; the library should look at the FFT length, problem domain (real or complex), number of machine cores, and machine architecture, and then select and compute with the best hybridized FFT algorithm available. However, it is still helpful to understand that your realized performance will depend fundamentally on the factorization of the length of your FFT. Most users know that the best FFT performance is had when N is a power of 2. If this stringent length requirement cannot be met, it is best to use a length that can be factored into small primes. CenterSpace's FFT algorithms contain optimized kernels for prime factors of 2, 3, 5, 7, and 11. The table below demonstrates the sensitivity of FFT performance to FFT length.

DFT Length | Factors | MFLOP approximation
---|---|---
512 | 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 | 5324.5
511 | 7 x 73 | 1327.8
510 | 2 x 3 x 5 x 17 | 3879.4
509 | 509 (prime) | 1762.4
508 | 2 x 2 x 127 | 2637.6
507 | 3 x 13 x 13 | 2631.5
506 | 2 x 11 x 23 | 3938.3
505 | 5 x 101 | 1122.6
504 | 2 x 2 x 2 x 3 x 3 x 7 | 5227

Clearly the fastest FFTs are those whose lengths factor into small primes (512, 510, 507, 506, 504), and especially into the small primes with optimized kernels (512 and 504). The more kernel-optimized primes your FFT length contains, the faster it will run. This is a universal fact that all FFT implementations confront, and it holds true for higher-dimensional FFTs as well. *Slight changes in length can have a profound impact on FFT performance.*

You can factor your FFT length using an online service to assess how your FFT will perform.
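Alternatively, a few lines of Python will factor a candidate length. The kernel set below mirrors the primes listed above (2, 3, 5, 7, 11); treating only those as "fast" is an assumption for illustration, since lengths with other small factors (such as 510 = 2 x 3 x 5 x 17) can still perform reasonably well:

```python
def factor(n):
    """Trial-division prime factorization; fine for typical FFT lengths."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Primes with dedicated kernels, per the discussion above (an assumption
# about what the library treats as "optimized").
SMALL_PRIME_KERNELS = {2, 3, 5, 7, 11}

def has_only_kernel_primes(n):
    """True if every prime factor of n has an optimized kernel."""
    return all(p in SMALL_PRIME_KERNELS for p in factor(n))
```

For instance, `factor(504)` yields `[2, 2, 2, 3, 3, 7]`, all kernel primes, while `factor(509)` yields the single large prime `[509]`, matching the fast and slow rows of the table above.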

##### Multi-core Scalability

The ability to factor a particular FFT into a set of independent computations makes it fundamentally suitable for parallelization. All modern desktop and many laptop computers contain at least two processor cores, and any modern math library should exploit this fact where possible. CenterSpace's complex-domain FFTs (and related convolutions) are multi-core aware, and automatically expand to fully utilize the available processor cores. Small problems are run on a single core, but once the computational advantages of parallelization overcome the overhead costs of multi-core parallelization, the computation is spread across all available cores. This automatic parallelization is gained simply by using CenterSpace's NMath class libraries; no end-user programming effort is involved.

FFT Length | Machine Cores | Time (seconds) | MFLOP approximation
---|---|---|---
2^20 | One | 56.7 | 6405.9
2^20 + 1 | One | 554.6 | 655.3
2^20 | Eight | 53.3 | 6813.7
2^20 + 1 | Eight | 124.2 | 2925.3

Power-of-2 FFTs are so computationally efficient on modern processors that the gain between one and eight cores is only about 3 seconds on a 2^20-point FFT. However, for the non-power-of-2 case we see a 4.5x speed improvement going from one core to eight. Looked at another way, with multi-core scalability of the FFT we suffer only a 2x loss in performance going from a 2^20-length FFT to a 2^20+1-length FFT, instead of a 10x loss. In other words, the multi-core scalability of CenterSpace's NMath FFT algorithms mitigates the performance loss of using non-power-of-2 lengths, and this simplifies the end-user programmer's job.

*- Paul*

See our FFT landing page for complete documentation and code examples.

Very nice! Is this available and I have overlooked it?

Thanks,

Bradley

Hi Bradley,

FFT and convolutions will be included in the next NMath release, expected in November. NMath customers with current maintenance contracts can get early access by joining our beta program (contact sales@centerspace.net).

Thanks,

Ken