In this section, we collect a few tips on getting the best performance out of FFTW's MPI transforms.
First, because of the 1d block distribution, FFTW's parallelization is currently limited by the size of the first dimension.  (Multidimensional block distributions may be supported by a future version.)  More generally, you should ideally arrange the dimensions so that FFTW can divide them equally among the processes.  See Load balancing.
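As a quick illustration of how the slab decomposition constrains parallelism, the following sketch (the 256x256 size and the use of MPI_COMM_WORLD are arbitrary placeholders, not taken from the manual) prints the block of first-dimension rows owned by each process; if the number of processes does not divide the first dimension evenly, some processes get more rows than others, and processes beyond the first dimension's size get none.

     #include <stdio.h>
     #include <fftw3-mpi.h>

     int main(int argc, char **argv)
     {
          const ptrdiff_t N0 = 256, N1 = 256;  /* placeholder problem size */
          ptrdiff_t alloc_local, local_n0, local_0_start;
          int rank;

          MPI_Init(&argc, &argv);
          fftw_mpi_init();
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          /* rows of the first dimension assigned to this process */
          alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                               &local_n0, &local_0_start);
          printf("rank %d: %td rows starting at row %td "
                 "(%td complex elements to allocate)\n",
                 rank, local_n0, local_0_start, alloc_local);

          MPI_Finalize();
          return 0;
     }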
Second, if it is not too inconvenient, you should consider working with transposed output for multidimensional plans, as this saves a considerable amount of communications.  See Transposed distributions.
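For concreteness, here is a minimal sketch of an in-place 2d plan created with the FFTW_MPI_TRANSPOSED_OUT flag; the sizes are placeholders and fftw_mpi_init is assumed to have been called already.  The output is left as N1 x N0 (first two dimensions swapped), distributed over N1, so the final transposition step is skipped.

     #include <fftw3-mpi.h>

     void plan_transposed_example(void)
     {
          const ptrdiff_t N0 = 256, N1 = 512;  /* placeholder sizes */
          ptrdiff_t alloc_local, local_n0, local_0_start;
          ptrdiff_t local_n1, local_1_start;
          fftw_complex *data;
          fftw_plan plan;

          /* local sizes for both the input (slabs of N0) and the
             transposed output (slabs of N1) distributions */
          alloc_local = fftw_mpi_local_size_2d_transposed(
               N0, N1, MPI_COMM_WORLD,
               &local_n0, &local_0_start, &local_n1, &local_1_start);
          data = fftw_alloc_complex(alloc_local);

          plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                      FFTW_FORWARD,
                                      FFTW_MEASURE | FFTW_MPI_TRANSPOSED_OUT);

          /* ... initialize data, fftw_execute(plan), use transposed result ... */

          fftw_destroy_plan(plan);
          fftw_free(data);
     }

If a subsequent inverse transform can accept the transposed layout as its input, planning it with FFTW_MPI_TRANSPOSED_IN avoids the corresponding transposition there as well.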
Third, the fastest choices are generally either an in-place transform or an out-of-place transform with the FFTW_DESTROY_INPUT flag (which allows the input array to be used as scratch space).  In-place is especially beneficial if the amount of data per process is large.
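A minimal sketch of the out-of-place variant follows; the 3d sizes are placeholders and fftw_mpi_init is assumed to have been called.  The FFTW_DESTROY_INPUT flag tells the planner it may overwrite the input array, which would otherwise have to be preserved at extra cost.

     #include <fftw3-mpi.h>

     void plan_destroy_input_example(void)
     {
          const ptrdiff_t N0 = 128, N1 = 128, N2 = 128;  /* placeholders */
          ptrdiff_t alloc_local, local_n0, local_0_start;
          fftw_complex *in, *out;
          fftw_plan plan;

          alloc_local = fftw_mpi_local_size_3d(N0, N1, N2, MPI_COMM_WORLD,
                                               &local_n0, &local_0_start);
          in  = fftw_alloc_complex(alloc_local);
          out = fftw_alloc_complex(alloc_local);

          /* without FFTW_DESTROY_INPUT, an out-of-place MPI transform
             must preserve `in`, typically requiring extra copies */
          plan = fftw_mpi_plan_dft_3d(N0, N1, N2, in, out, MPI_COMM_WORLD,
                                      FFTW_FORWARD,
                                      FFTW_MEASURE | FFTW_DESTROY_INPUT);

          /* ... initialize in, fftw_execute(plan), read result from out ... */

          fftw_destroy_plan(plan);
          fftw_free(in);
          fftw_free(out);
     }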
Fourth, if you have multiple arrays to transform at once, rather than calling FFTW's MPI transforms several times it usually seems to be faster to interleave the data and use the advanced interface.  (This groups the communications together instead of requiring separate messages for each transform.)
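As a sketch of this approach (the 2d size and HOWMANY = 4 are arbitrary placeholders, and fftw_mpi_init is assumed to have been called), the advanced interface transforms HOWMANY interleaved arrays with a single plan:

     #include <fftw3-mpi.h>

     void plan_many_example(void)
     {
          const ptrdiff_t n[2] = {256, 256};  /* placeholder 2d size */
          const ptrdiff_t HOWMANY = 4;        /* interleaved transforms */
          ptrdiff_t alloc_local, local_n0, local_0_start;
          fftw_complex *data;
          fftw_plan plan;

          alloc_local = fftw_mpi_local_size_many(2, n, HOWMANY,
                                                 FFTW_MPI_DEFAULT_BLOCK,
                                                 MPI_COMM_WORLD,
                                                 &local_n0, &local_0_start);
          data = fftw_alloc_complex(alloc_local);

          /* element j of transform k is stored at data[j*HOWMANY + k]:
             the HOWMANY transforms are interleaved contiguously */
          plan = fftw_mpi_plan_many_dft(2, n, HOWMANY,
                                        FFTW_MPI_DEFAULT_BLOCK,
                                        FFTW_MPI_DEFAULT_BLOCK,
                                        data, data, MPI_COMM_WORLD,
                                        FFTW_FORWARD, FFTW_MEASURE);

          /* ... initialize data, fftw_execute(plan) ... */

          fftw_destroy_plan(plan);
          fftw_free(data);
     }

Because all HOWMANY transforms share one plan, their communication steps are combined into larger messages instead of being issued once per array.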