6.4 MPI Data Distribution

The most important concept to understand in using FFTW’s MPI interface is the data distribution. With a serial or multithreaded FFT, all of the inputs and outputs are stored as a single contiguous chunk of memory. With a distributed-memory FFT, the inputs and outputs are broken into disjoint blocks, one per process.

In particular, FFTW uses a 1d block distribution of the data, distributed along the first dimension. For example, if you want to perform a 100 × 200 complex DFT, distributed over 4 processes, each process will get a 25 × 200 slice of the data. That is, process 0 will get rows 0 through 24, process 1 will get rows 25 through 49, process 2 will get rows 50 through 74, and process 3 will get rows 75 through 99. If you take the same array but distribute it over 3 processes, then it is not evenly divisible, so the different processes will have unequal chunks. FFTW’s default choice in this case is to assign 34 rows to processes 0 and 1, and 32 rows to process 2.
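
For illustration, the arithmetic behind this default distribution amounts to a ceiling division of the first dimension by the number of processes. The following sketch (not FFTW code; the variable names are invented for this example) reproduces the 34/34/32 split above. As explained below, real programs should query FFTW for the distribution rather than re-deriving it.

/* Illustration only: reproduce the default 1d block distribution
   described above (e.g. 100 rows over 3 processes -> 34, 34, 32).
   Real programs should call the fftw_mpi_local_size routines instead
   of predicting the distribution themselves. */
#include <stdio.h>

int main(void)
{
    const int n0 = 100, nproc = 3;
    const int block = (n0 + nproc - 1) / nproc;  /* ceiling division: 34 */

    for (int p = 0; p < nproc; ++p) {
        int start = p * block;
        int rows = start >= n0 ? 0 : (n0 - start < block ? n0 - start : block);
        printf("process %d: rows %d through %d (%d rows)\n",
               p, start, start + rows - 1, rows);
    }
    return 0;
}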

FFTW provides several ‘fftw_mpi_local_size’ routines that you can call to find out what portion of an array is stored on the current process. In most cases, you should use the default block sizes picked by FFTW, but it is also possible to specify your own block size. For example, with a 100 × 200 array on three processes, you can tell FFTW to use a block size of 40, which would assign 40 rows to processes 0 and 1, and 20 rows to process 2. FFTW’s default is to divide the data equally among the processes if possible, and as best it can otherwise. The rows are always assigned in “rank order,” i.e. process 0 gets the first block of rows, then process 1, and so on. (You can change this by using MPI_Comm_split to create a new communicator with re-ordered processes.) However, you should always call the ‘fftw_mpi_local_size’ routines, if possible, rather than trying to predict FFTW’s distribution choices.
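
As a sketch of what such a query might look like, the two-dimensional member of this family, fftw_mpi_local_size_2d, can be used as follows (assuming the 100 × 200 complex-DFT example above over MPI_COMM_WORLD; the printing is illustrative only):

/* Sketch: ask FFTW which rows of a 100 x 200 array this process owns.
   Link against the FFTW MPI library (e.g. -lfftw3_mpi -lfftw3). */
#include <fftw3-mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const ptrdiff_t N0 = 100, N1 = 200;
    ptrdiff_t alloc_local, local_n0, local_0_start;
    int rank;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* local_n0 = number of rows stored on this process,
       local_0_start = index of the first such row,
       alloc_local = number of complex values to allocate locally. */
    alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                         &local_n0, &local_0_start);

    printf("process %d: rows %td through %td (%td rows), "
           "allocate %td complex values\n",
           rank, local_0_start, local_0_start + local_n0 - 1,
           local_n0, alloc_local);

    MPI_Finalize();
    return 0;
}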

In particular, it is critical that you allocate the storage size that is returned by ‘fftw_mpi_local_size’, which is not necessarily the size of the local slice of the array. The reason is that intermediate steps of FFTW’s algorithms involve transposing the array and redistributing the data, so at these intermediate steps FFTW may require more local storage space (albeit always proportional to the total size divided by the number of processes). The ‘fftw_mpi_local_size’ functions know how much storage is required for these intermediate steps and tell you the correct amount to allocate.
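
Continuing the sketch above, the allocation should use the value returned by the local-size call rather than the size of the local slice. The plan-creation call below assumes an in-place forward transform and is shown only to indicate where the allocated buffer is used:

/* Allocate what FFTW asked for (alloc_local complex values), not
   local_n0 * N1: intermediate transposes may need the extra room. */
fftw_complex *data = fftw_alloc_complex(alloc_local);

/* The local slice itself occupies the first local_n0 * N1 elements in
   row-major order: data[(i - local_0_start) * N1 + j] holds global
   element (i, j) for the rows owned by this process. */

fftw_plan plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                      FFTW_FORWARD, FFTW_ESTIMATE);
/* ... initialize data, fftw_execute(plan), fftw_destroy_plan(plan) ... */
fftw_free(data);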
