6.4.4 One-dimensional distributions


For one-dimensional distributed DFTs using FFTW, matters are slightly more complicated because the data distribution is more closely tied to how the algorithm works. In particular, you can no longer pass an arbitrary block size and must accept FFTW's default; moreover, the block sizes may differ for input and output. Finally, the data distribution depends on the flags and the transform direction, in order for forward and backward transforms to work correctly.

     ptrdiff_t fftw_mpi_local_size_1d(ptrdiff_t n0, MPI_Comm comm,
                     int sign, unsigned flags,
                     ptrdiff_t *local_ni, ptrdiff_t *local_i_start,
                     ptrdiff_t *local_no, ptrdiff_t *local_o_start);

This function computes the data distribution for a 1d transform of size n0 with the given transform sign and flags. Both input and output data use block distributions. The input on the current process will consist of local_ni numbers starting at index local_i_start; e.g. if only a single process is used, then local_ni will be n0 and local_i_start will be 0. Similarly for the output, with local_no numbers starting at index local_o_start. The return value of fftw_mpi_local_size_1d will be the total number of elements to allocate on the current process (which might be slightly larger than the local size due to intermediate steps in the algorithm).
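
As an illustration, here is a minimal sketch (not from the manual) of how these values are typically used: query the distribution, allocate the amount returned by fftw_mpi_local_size_1d, and create a matching plan. The size n0 = 1024, the in-place transform, and the FFTW_ESTIMATE flag are arbitrary choices for this example; the sign and flags given to fftw_mpi_local_size_1d are the same ones given to the planner, since the distribution depends on them.

     #include <fftw3-mpi.h>   /* declares the MPI FFTW interface */

     int main(int argc, char **argv)
     {
         const ptrdiff_t n0 = 1024;   /* arbitrary example size */
         ptrdiff_t alloc_local, local_ni, local_i_start, local_no, local_o_start;
         fftw_complex *data;
         fftw_plan plan;

         MPI_Init(&argc, &argv);
         fftw_mpi_init();

         /* same sign and flags as the plan created below */
         alloc_local = fftw_mpi_local_size_1d(n0, MPI_COMM_WORLD,
                                              FFTW_FORWARD, FFTW_ESTIMATE,
                                              &local_ni, &local_i_start,
                                              &local_no, &local_o_start);
         data = fftw_alloc_complex(alloc_local);   /* alloc_local, not local_ni */

         plan = fftw_mpi_plan_dft_1d(n0, data, data, MPI_COMM_WORLD,
                                     FFTW_FORWARD, FFTW_ESTIMATE);

         /* input: data[0 .. local_ni-1] holds indices
            local_i_start .. local_i_start + local_ni - 1 */
         fftw_execute(plan);
         /* output: data[0 .. local_no-1] holds indices
            local_o_start .. local_o_start + local_no - 1 */

         fftw_destroy_plan(plan);
         fftw_free(data);
         fftw_mpi_cleanup();
         MPI_Finalize();
         return 0;
     }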

As mentioned above (see Load balancing), the data will be divided equally among the processes if n0 is divisible by the square of the number of processes. In this case, local_ni will equal local_no. Otherwise, they may be different.

For some applications, such as convolutions, the order of the output data is irrelevant. In this case, performance can be improved by specifying that the output data be stored in an FFTW-defined “scrambled” format. (In particular, this is the analogue of transposed output in the multidimensional case: scrambled output saves a communications step.) If you pass FFTW_MPI_SCRAMBLED_OUT in the flags, then the output is stored in this (undocumented) scrambled order. Conversely, to perform the inverse transform of data in scrambled order, pass the FFTW_MPI_SCRAMBLED_IN flag.
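
To make the flag usage concrete, here is a hedged sketch (not from the manual) of the round-trip pattern described above: a forward transform whose output is left in scrambled order, followed by an inverse transform that accepts scrambled input. It assumes MPI and fftw_mpi_init have already been set up as in the earlier sketch; n0 and FFTW_ESTIMATE are again arbitrary choices, and whatever work happens between the two transforms must not depend on the (undocumented) ordering.

     const ptrdiff_t n0 = 1024;                /* arbitrary example size */
     ptrdiff_t ni, i_start, no, o_start;       /* forward distribution */
     ptrdiff_t bni, bi_start, bno, bo_start;   /* backward distribution */
     ptrdiff_t a_fwd, a_bwd;
     fftw_complex *data;
     fftw_plan fwd, bwd;

     /* query both distributions with the same sign/flags as the plans below */
     a_fwd = fftw_mpi_local_size_1d(n0, MPI_COMM_WORLD, FFTW_FORWARD,
                                    FFTW_ESTIMATE | FFTW_MPI_SCRAMBLED_OUT,
                                    &ni, &i_start, &no, &o_start);
     a_bwd = fftw_mpi_local_size_1d(n0, MPI_COMM_WORLD, FFTW_BACKWARD,
                                    FFTW_ESTIMATE | FFTW_MPI_SCRAMBLED_IN,
                                    &bni, &bi_start, &bno, &bo_start);
     data = fftw_alloc_complex(a_fwd > a_bwd ? a_fwd : a_bwd);

     fwd = fftw_mpi_plan_dft_1d(n0, data, data, MPI_COMM_WORLD, FFTW_FORWARD,
                                FFTW_ESTIMATE | FFTW_MPI_SCRAMBLED_OUT);
     bwd = fftw_mpi_plan_dft_1d(n0, data, data, MPI_COMM_WORLD, FFTW_BACKWARD,
                                FFTW_ESTIMATE | FFTW_MPI_SCRAMBLED_IN);

     fftw_execute(fwd);   /* output left in FFTW's scrambled order */
     /* ... order-independent pointwise work on the scrambled data ... */
     fftw_execute(bwd);   /* consumes scrambled input, output in normal order */

     fftw_destroy_plan(bwd);
     fftw_destroy_plan(fwd);
     fftw_free(data);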

In MPI FFTW, only composite sizes n0 can be parallelized; we have not yet implemented a parallel algorithm for large prime sizes.