Ideally, when you parallelize a transform over some P processes, each process should end up with work that takes equal time. Otherwise, all of the processes end up waiting on whichever process is slowest. This goal is known as “load balancing.” In this section, we describe the circumstances under which FFTW is able to load-balance well, and in particular how you should choose your transform size in order to load balance.

Load balancing is especially difficult when you are parallelizing over heterogeneous machines; for example, if one of your processors is an old 486 and another is a Pentium IV, obviously you should give the Pentium more work to do than the 486 since the latter is much slower. FFTW does not deal with this problem, however; it assumes that your processes run on hardware of comparable speed, and that the goal is therefore to divide the problem as equally as possible.

For a multi-dimensional complex DFT, FFTW can divide the problem equally among the processes if: (i) the first dimension n0 is divisible by P; and (ii) the product of the subsequent dimensions is divisible by P. (For the advanced interface, where you can specify multiple simultaneous transforms via some “vector” length howmany, a factor of howmany is included in the product of the subsequent dimensions.)

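One way to see how a given size will be split is to query the data distribution directly. The following is a minimal sketch, assuming the standard fftw3-mpi interface and an arbitrarily chosen 256 by 256 transform; when n0 is divisible by the number of processes, every rank should report the same number of rows.

#include <stdio.h>
#include <mpi.h>
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    /* Example size (an assumption for illustration): 256 is divisible
       by common process counts, so condition (i) above is satisfied. */
    const ptrdiff_t n0 = 256, n1 = 256;
    ptrdiff_t alloc_local, local_n0, local_0_start;
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Ask FFTW how the n0 x n1 problem is sliced across MPI_COMM_WORLD. */
    alloc_local = fftw_mpi_local_size_2d(n0, n1, MPI_COMM_WORLD,
                                         &local_n0, &local_0_start);

    /* With n0 divisible by nprocs, every rank reports n0/nprocs rows. */
    printf("rank %d of %d: %td rows starting at row %td (alloc %td)\n",
           rank, nprocs, local_n0, local_0_start, alloc_local);

    fftw_mpi_cleanup();
    MPI_Finalize();
    return 0;
}

Linking is typically done with -lfftw3_mpi -lfftw3 -lm; running under mpirun with different process counts shows how the row counts change.
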
For a one-dimensional complex DFT, the length N of the data should be divisible by P squared to be able to divide the problem equally among the processes.

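For instance, with P = 4 processes a length N = 1024 (a multiple of P squared = 16) can be divided evenly, whereas N = 1000 cannot. A sketch along the same lines as above, assuming the fftw3-mpi interface, that reports the input and output portions each process receives for a 1d transform:

#include <stdio.h>
#include <mpi.h>
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    /* Example length (an assumption for illustration): divisible by
       P*P for P = 4, so the split can be even on 4 processes. */
    const ptrdiff_t N = 1024;
    ptrdiff_t alloc_local, local_ni, local_i_start, local_no, local_o_start;
    int rank;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Query the 1d decomposition for a forward transform; input and
       output may be distributed differently, hence the two pairs. */
    alloc_local = fftw_mpi_local_size_1d(N, MPI_COMM_WORLD,
                                         FFTW_FORWARD, FFTW_ESTIMATE,
                                         &local_ni, &local_i_start,
                                         &local_no, &local_o_start);

    printf("rank %d: %td input elements at %td, %td output elements at %td (alloc %td)\n",
           rank, local_ni, local_i_start, local_no, local_o_start, alloc_local);

    fftw_mpi_cleanup();
    MPI_Finalize();
    return 0;
}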