For one-dimensional distributed DFTs using FFTW, matters are slightly more complicated because the data distribution is more closely tied to how the algorithm works. In particular, you can no longer pass an arbitrary block size and must accept FFTW's default; also, the block sizes may be different for input and output. In addition, the data distribution depends on the flags and transform direction, in order for forward and backward transforms to work correctly.
ptrdiff_t fftw_mpi_local_size_1d(ptrdiff_t n0, MPI_Comm comm,
                int sign, unsigned flags,
                ptrdiff_t *local_ni, ptrdiff_t *local_i_start,
                ptrdiff_t *local_no, ptrdiff_t *local_o_start);
This function computes the data distribution for a 1d transform of size n0 with the given transform sign and flags. Both input and output data use block distributions. The input on the current process will consist of local_ni numbers starting at index local_i_start; e.g. if only a single process is used, then local_ni will be n0 and local_i_start will be 0. Similarly for the output, with local_no numbers starting at index local_o_start. The return value of fftw_mpi_local_size_1d will be the total number of elements to allocate on the current process (which might be slightly larger than the local size due to intermediate steps in the algorithm).
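For concreteness, here is a minimal sketch of how this routine might be combined with the MPI allocation and plan-creation routines described elsewhere in this manual. The transform size N0 and the FFTW_ESTIMATE planner flag are arbitrary illustrative choices; the sign and flags passed to fftw_mpi_local_size_1d should match those later given to the plan.

#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    const ptrdiff_t N0 = 1024;          /* illustrative transform size */
    ptrdiff_t local_ni, local_i_start;  /* input block on this process */
    ptrdiff_t local_no, local_o_start;  /* output block on this process */
    ptrdiff_t alloc_local;
    fftw_complex *data;
    fftw_plan plan;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* Query the 1d data distribution for a forward transform. */
    alloc_local = fftw_mpi_local_size_1d(N0, MPI_COMM_WORLD,
                                         FFTW_FORWARD, FFTW_ESTIMATE,
                                         &local_ni, &local_i_start,
                                         &local_no, &local_o_start);

    /* Allocate alloc_local elements, which may exceed local_ni/local_no. */
    data = fftw_alloc_complex(alloc_local);

    plan = fftw_mpi_plan_dft_1d(N0, data, data, MPI_COMM_WORLD,
                                FFTW_FORWARD, FFTW_ESTIMATE);

    /* ... initialize data[0 .. local_ni-1], holding the global indices
       local_i_start .. local_i_start + local_ni - 1 ... */

    fftw_execute(plan);

    /* ... data[0 .. local_no-1] now holds the outputs for global indices
       local_o_start .. local_o_start + local_no - 1 ... */

    fftw_destroy_plan(plan);
    fftw_free(data);
    fftw_mpi_cleanup();
    MPI_Finalize();
    return 0;
}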
As mentioned above (see Load balancing), the data will be divided equally among the processes if n0 is divisible by the square of the number of processes. In this case, local_ni will equal local_no. Otherwise, they may be different.
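For example (with arbitrary illustrative sizes), on 4 processes a size n0 = 1024 is divisible by 4*4 = 16, so each process would hold 1024/4 = 256 contiguous values for both input and output, and local_ni would equal local_no. A size such as n0 = 1000 is not divisible by 16, and local_ni and local_no may then differ.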
For some applications, such as convolutions, the order of the output data is irrelevant. In this case, performance can be improved by specifying that the output data be stored in an FFTW-defined “scrambled” format. (In particular, this is the analogue of transposed output in the multidimensional case: scrambled output saves a communications step.) If you pass FFTW_MPI_SCRAMBLED_OUT in the flags, then the output is stored in this (undocumented) scrambled order. Conversely, to perform the inverse transform of data in scrambled order, pass the FFTW_MPI_SCRAMBLED_IN flag.
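As a rough sketch of the convolution-style round trip this enables (a fragment that would sit inside a program like the one above; N0 and FFTW_ESTIMATE are again arbitrary illustrative choices, and fftw_mpi_local_size_1d is queried with the same sign and flags as each plan because the distribution depends on them):

/* Distributions for a forward transform with scrambled output and an
   inverse transform that accepts scrambled input; allocate enough for
   whichever is larger so one in-place array serves both plans. */
ptrdiff_t ni, i_start, no, o_start;
ptrdiff_t alloc_fwd = fftw_mpi_local_size_1d(N0, MPI_COMM_WORLD,
        FFTW_FORWARD, FFTW_ESTIMATE | FFTW_MPI_SCRAMBLED_OUT,
        &ni, &i_start, &no, &o_start);
ptrdiff_t alloc_bwd = fftw_mpi_local_size_1d(N0, MPI_COMM_WORLD,
        FFTW_BACKWARD, FFTW_ESTIMATE | FFTW_MPI_SCRAMBLED_IN,
        &ni, &i_start, &no, &o_start);
fftw_complex *data =
    fftw_alloc_complex(alloc_fwd > alloc_bwd ? alloc_fwd : alloc_bwd);

fftw_plan fwd = fftw_mpi_plan_dft_1d(N0, data, data, MPI_COMM_WORLD,
        FFTW_FORWARD, FFTW_ESTIMATE | FFTW_MPI_SCRAMBLED_OUT);
fftw_plan bwd = fftw_mpi_plan_dft_1d(N0, data, data, MPI_COMM_WORLD,
        FFTW_BACKWARD, FFTW_ESTIMATE | FFTW_MPI_SCRAMBLED_IN);

/* ... fill data[0 .. ni-1] with this process's block of the input ... */
fftw_execute(fwd);   /* output is left in FFTW's scrambled order */
/* ... pointwise multiply by the scrambled-order transform of the other
   convolution operand (same order, so elementwise products match) ... */
fftw_execute(bwd);   /* consumes scrambled input, produces ordered output */

fftw_destroy_plan(bwd);
fftw_destroy_plan(fwd);
fftw_free(data);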
In MPI FFTW, only composite sizes n0 can be parallelized; we have not yet implemented a parallel algorithm for large prime sizes.