As with the planner interface, the ‘fftw_mpi_local_size’ distribution interface is broken into basic and advanced (‘_many’) interfaces, where the latter allows you to specify the block size manually and also to request block sizes when computing multiple transforms simultaneously. These functions are documented more exhaustively by the FFTW MPI Reference, but we summarize the basic ideas here using a couple of two-dimensional examples.

For the 100 × 200 complex-DFT example, above, we would find the distribution by calling the following function in the basic interface:

ptrdiff_t fftw_mpi_local_size_2d(ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
                                 ptrdiff_t *local_n0, ptrdiff_t *local_0_start);

Given the total size of the data to be transformed (here, n0 = 100 and n1 = 200) and an MPI communicator (comm), this function provides three numbers.

First, it describes the shape of the local data: the current process should store a local_n0 by n1 slice of the overall dataset, in row-major order (n1 dimension contiguous), starting at index local_0_start. That is, if the total dataset is viewed as an n0 by n1 matrix, the current process should store the rows local_0_start to local_0_start+local_n0-1. Obviously, if you are running with only a single MPI process, that process will store the entire array: local_0_start will be zero and local_n0 will be n0. See Row-major Format.

Second, the return value is the total number of data elements (e.g., complex numbers for a complex DFT) that should be allocated for the input and output arrays on the current process (ideally with fftw_malloc or an ‘fftw_alloc’ function, to ensure optimal alignment). It might seem that this should always be equal to local_n0 * n1, but this is not the case. FFTW's distributed FFT algorithms require data redistributions at intermediate stages of the transform, and in some circumstances this may require slightly larger local storage. This is discussed in more detail below, under Load balancing.
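
Putting these pieces together for the 100 × 200 example, a minimal sketch of the distribution query, allocation, and local indexing might look like the following (the initialization values are placeholders, and plan creation and execution are omitted):

#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
     const ptrdiff_t n0 = 100, n1 = 200;
     ptrdiff_t alloc_local, local_n0, local_0_start, i, j;
     fftw_complex *data;

     MPI_Init(&argc, &argv);
     fftw_mpi_init();

     /* ask FFTW how the first dimension is distributed over MPI_COMM_WORLD,
        and how many complex elements this process should allocate */
     alloc_local = fftw_mpi_local_size_2d(n0, n1, MPI_COMM_WORLD,
                                          &local_n0, &local_0_start);

     /* alloc_local may exceed local_n0 * n1, so allocate the returned
        count rather than computing the size by hand */
     data = fftw_alloc_complex(alloc_local);

     /* placeholder initialization, illustrating the local-to-global mapping:
        local row i is global row local_0_start + i, stored contiguously
        with element (i, j) at offset i*n1 + j */
     for (i = 0; i < local_n0; ++i)
          for (j = 0; j < n1; ++j) {
               data[i*n1 + j][0] = (double) (local_0_start + i); /* real */
               data[i*n1 + j][1] = (double) j;                   /* imaginary */
          }

     /* ... create an MPI plan for the n0 x n1 transform, execute it, ... */

     fftw_free(data);
     MPI_Finalize();
     return 0;
}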

The advanced-interface ‘local_size’ function for multidimensional transforms returns the same three things (local_n0, local_0_start, and the total number of elements to allocate), but takes more inputs:

ptrdiff_t fftw_mpi_local_size_many(int rnk, const ptrdiff_t *n,
                                   ptrdiff_t howmany,
                                   ptrdiff_t block0,
                                   MPI_Comm comm,
                                   ptrdiff_t *local_n0,
                                   ptrdiff_t *local_0_start);

The two-dimensional case above corresponds to rnk = 2 and an array n of length 2 with n[0] = n0 and n[1] = n1. This routine is for any rnk > 1; one-dimensional transforms have their own interface because they work slightly differently, as discussed below.

First, the advanced interface allows you to perform multiple transforms at once, of interleaved data, as specified by the howmany parameter. (howmany is 1 for a single transform.)

Second, here you can specify your desired block size in the n0 dimension, block0. To use FFTW's default block size, pass FFTW_MPI_DEFAULT_BLOCK (0) for block0. Otherwise, on P processes, FFTW will return local_n0 equal to block0 on the first P / block0 processes (rounded down), return local_n0 equal to n0 - block0 * (P / block0) on the next process, and local_n0 equal to zero on any remaining processes. In general, we recommend using the default block size (which corresponds to n0 / P, rounded up).
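
As a sketch, assuming the same setup as the earlier example (MPI and fftw_mpi_init already initialized), the identical 100 × 200 distribution could be requested through the advanced interface with a single transform and the default block size:

const ptrdiff_t n[2] = {100, 200};   /* n[0] = n0, n[1] = n1 */
ptrdiff_t alloc_local, local_n0, local_0_start;

/* rnk = 2, howmany = 1 (a single transform),
   FFTW_MPI_DEFAULT_BLOCK requests the default block size in n0 */
alloc_local = fftw_mpi_local_size_many(2, n, 1,
                                       FFTW_MPI_DEFAULT_BLOCK,
                                       MPI_COMM_WORLD,
                                       &local_n0, &local_0_start);

/* as with the basic interface, allocate alloc_local elements */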

For example, suppose you have P = 4 processes and n0 = 21. The default will be a block size of 6, which will give local_n0 = 6 on the first three processes and local_n0 = 3 on the last process. Instead, however, you could specify block0 = 5 if you wanted, which would give local_n0 = 5 on processes 0 to 2 and local_n0 = 6 on process 3. (This choice, while it may look superficially more “balanced,” has the same critical path as FFTW's default but requires more communications.)
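
For reference, the default-block arithmetic in this example can be reproduced with ordinary integer arithmetic. The following sketch simply mirrors the default-block example above (n0 / P rounded up); it is not FFTW code, and the authoritative local_n0 always comes from the ‘local_size’ functions:

#include <stdio.h>

int main(void)
{
     /* n0 = 21 rows over P = 4 processes: default block = (21 + 3) / 4 = 6,
        giving local_n0 = 6, 6, 6, 3 on processes 0, 1, 2, 3. */
     int n0 = 21, P = 4;
     int block = (n0 + P - 1) / P;          /* n0 / P, rounded up */
     int p;
     for (p = 0; p < P; ++p) {
          int remaining = n0 - p * block;
          int local_n0 = remaining <= 0 ? 0
                       : (remaining < block ? remaining : block);
          printf("process %d: local_n0 = %d\n", p, local_n0);
     }
     return 0;
}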