In this chapter we document the parallel FFTW routines for shared-memory parallel hardware. These routines, which support parallel one- and multi-dimensional transforms of both real and complex data, are the easiest way to take advantage of multiple processors with FFTW. They work just like the corresponding uniprocessor transform routines, except that you have an extra initialization routine to call, and there is a routine to set the number of threads to employ. Any program that uses the uniprocessor FFTW can therefore be trivially modified to use the multi-threaded FFTW.
A shared-memory machine is one in which all CPUs can directly access the same main memory, and such machines are now common due to the ubiquity of multi-core CPUs. FFTW's multi-threading support allows you to utilize these additional CPUs transparently from a single program. However, this does not necessarily translate into performance gains: when multiple threads/CPUs are employed, there is an overhead required for synchronization that may outweigh the computational parallelism. Therefore, you can only benefit from threads if your problem is sufficiently large.