6.11 Combining MPI and Threads


In certain cases, it may be advantageous to combine MPI (distributed-memory) and threads (shared-memory) parallelization. FFTW supports this, with certain caveats. For example, if you have a cluster of 4-processor shared-memory nodes, you may want to use threads within the nodes and MPI between the nodes, instead of MPI for all parallelization.


In particular, it is possible to combine the MPI FFTW routines seamlessly with the multi-threaded FFTW routines (see Multi-threaded FFTW). However, some care must be taken in the initialization code, which should look something like this:

#include <fftw3-mpi.h>   /* includes <fftw3.h> and <mpi.h> */

int threads_ok;

int main(int argc, char **argv)
{
    int provided;
    /* Request MPI_THREAD_FUNNELED: only the main thread will call MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    threads_ok = provided >= MPI_THREAD_FUNNELED;

    /* Initialize FFTW's threads support *before* its MPI support. */
    if (threads_ok) threads_ok = fftw_init_threads();
    fftw_mpi_init();

    ...
    if (threads_ok) fftw_plan_with_nthreads(...);
    ...

    MPI_Finalize();
}

First, note that instead of calling MPI_Init, you should call MPI_Init_thread, which is the initialization routine defined by the MPI-2 standard to indicate to MPI that your program will be multithreaded. We pass MPI_THREAD_FUNNELED, which indicates that we will only call MPI routines from the main thread. (FFTW will launch additional threads internally, but the extra threads will not call MPI code.) You may also pass MPI_THREAD_SERIALIZED or MPI_THREAD_MULTIPLE, which request additional multithreading support from the MPI implementation, but this is not required by FFTW. The provided parameter returns the level of thread support actually granted by your MPI implementation; this must be at least MPI_THREAD_FUNNELED if you want to call the FFTW threads routines, so we define a global variable threads_ok to record this. You should only call fftw_init_threads or fftw_plan_with_nthreads if threads_ok is true. For more information on thread safety in MPI, see the MPI and Threads section of the MPI-2 standard.
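As an illustration of this check, a minimal sketch of what you might add after MPI_Init_thread is shown below. The diagnostic message and the fallback policy are ours, not part of FFTW; the fragment assumes <stdio.h> has been included and simply leaves threads_ok false so that planning later proceeds single-threaded per process:

/* Sketch: warn (on rank 0 only) if MPI granted a thread-support level
   below MPI_THREAD_FUNNELED; needs <stdio.h> for fprintf. */
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (!threads_ok && rank == 0)
    fprintf(stderr, "MPI granted thread level %d, below MPI_THREAD_FUNNELED; "
            "running FFTW without threads\n", provided);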


Second, we must call fftw_init_threads before fftw_mpi_init. This is critical for technical reasons having to do with how FFTW initializes its list of algorithms.
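The example above does not show the corresponding teardown. One reasonable shutdown sequence, a sketch rather than something spelled out in this section (it assumes you created a plan named plan, as in the later examples), is to destroy your plans and release FFTW's MPI-related data with fftw_mpi_cleanup before finalizing MPI:

/* Sketch of a matching shutdown sequence (plan is assumed to exist).
   fftw_mpi_cleanup() releases FFTW's MPI-related internal data;
   call it before MPI_Finalize(). */
fftw_destroy_plan(plan);
fftw_mpi_cleanup();
MPI_Finalize();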


Then, if you call fftw_plan_with_nthreads(N), every MPI process will launch (up to) N threads to parallelize its transforms.
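For instance, a sketch of planning a distributed two-dimensional complex DFT with this combination might look like the following. The sizes N0 and N1, the choice of 4 threads, and the use of an in-place transform with FFTW_MEASURE are all illustrative:

/* Sketch: distributed 2d DFT of (illustrative) size N0 x N1, using up to
   4 threads per MPI process when threads_ok is set. */
ptrdiff_t alloc_local, local_n0, local_0_start;
alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                     &local_n0, &local_0_start);
fftw_complex *data = fftw_alloc_complex(alloc_local);

if (threads_ok) fftw_plan_with_nthreads(4);
fftw_plan plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                      FFTW_FORWARD, FFTW_MEASURE);
fftw_execute(plan);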


For example, in the hypothetical cluster of 4-processor nodes, you might wish to launch only a single MPI process per node, and then call fftw_plan_with_nthreads(4) on each process to use all processors in the nodes.
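Rather than hard-coding the thread count, you might determine it at run time. A minimal sketch, assuming a POSIX system and exactly one MPI process per node, is:

/* Sketch: use one thread per online core on this node (POSIX-specific;
   assumes one MPI process runs per node). Needs <unistd.h>. */
long ncores = sysconf(_SC_NPROCESSORS_ONLN);
if (threads_ok && ncores > 0)
    fftw_plan_with_nthreads((int) ncores);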


This may or may not be faster than simply using as many MPI processes as you have processors, however. On the one hand, using threads within a node eliminates the need for explicit message passing within the node. On the other hand, FFTW's transpose routines are not multi-threaded, and this means that the communications that do take place will not benefit from parallelization within the node. Moreover, many MPI implementations already have optimizations to exploit shared memory when it is available, so adding the multithreaded FFTW on top of this may be superfluous.
