Ideally, when you parallelize a transform over some P processes, each process should end up with work that takes equal time. Otherwise, all of the processes end up waiting on whichever process is slowest. This goal is known as “load balancing.” In this section, we describe the circumstances under which FFTW is able to load-balance well, and in particular how you should choose your transform size in order to load balance.
Load balancing is especially difficult when you are parallelizing over heterogeneous machines; for example, if one of your processors is an old 486 and another is a Pentium IV, you should obviously give the Pentium more work to do than the 486, since the latter is much slower. FFTW does not deal with this problem, however: it assumes that your processes run on hardware of comparable speed, and that the goal is therefore to divide the problem as equally as possible.
For a multi-dimensional complex DFT, FFTW can divide the problem equally among the processes if: (i) the first dimension n0 is divisible by P; and (ii) the product of the subsequent dimensions is divisible by P. (For the advanced interface, where you can specify multiple simultaneous transforms via some “vector” length howmany, a factor of howmany is included in the product of the subsequent dimensions.)
For a one-dimensional complex DFT, the length N of the data should be divisible by P squared to be able to divide the problem equally among the processes.