As with the planner interface, the ‘fftw_mpi_local_size’ distribution interface is broken into basic and advanced (‘_many’) interfaces, where the latter allows you to specify the block size manually and also to request block sizes when computing multiple transforms simultaneously. These functions are documented more exhaustively by the FFTW MPI Reference, but we summarize the basic ideas here using a couple of two-dimensional examples.
For the 100 × 200 complex-DFT example, above, we would find the distribution by calling the following function in the basic interface:
    ptrdiff_t fftw_mpi_local_size_2d(ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
                                     ptrdiff_t *local_n0, ptrdiff_t *local_0_start);
Given the total size of the data to be transformed (here, n0 = 100 and n1 = 200) and an MPI communicator (comm), this function provides three numbers.
First, it describes the shape of the local data: the current process should store a local_n0 by n1 slice of the overall dataset, in row-major order (n1 dimension contiguous), starting at index local_0_start. That is, if the total dataset is viewed as an n0 by n1 matrix, the current process should store the rows local_0_start to local_0_start+local_n0-1. Obviously, if you are running with only a single MPI process, that process will store the entire array: local_0_start will be zero and local_n0 will be n0. See Row-major Format.
Second, the return value is the total number of data elements (e.g., complex numbers for a complex DFT) that should be allocated for the input and output arrays on the current process (ideally with fftw_malloc or an ‘fftw_alloc’ function, to ensure optimal alignment). It might seem that this should always be equal to local_n0 * n1, but this is not the case. FFTW’s distributed FFT algorithms require data redistributions at intermediate stages of the transform, and in some circumstances this may require slightly larger local storage. This is discussed in more detail below, under Load balancing.
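To make this concrete, here is a minimal sketch (assuming the 100 × 200 complex-DFT example and a hypothetical per-element initializer my_function) of how the basic interface is typically used: query the distribution, allocate the returned number of elements, and touch only the rows this process owns.

    #include <fftw3-mpi.h>

    int main(int argc, char **argv)
    {
        const ptrdiff_t n0 = 100, n1 = 200;
        ptrdiff_t alloc_local, local_n0, local_0_start, i, j;
        fftw_complex *data;

        MPI_Init(&argc, &argv);
        fftw_mpi_init();

        /* get the local data size and starting row for this process */
        alloc_local = fftw_mpi_local_size_2d(n0, n1, MPI_COMM_WORLD,
                                             &local_n0, &local_0_start);

        /* allocate alloc_local elements, which may exceed local_n0 * n1 */
        data = fftw_alloc_complex(alloc_local);

        /* initialize only the locally stored rows, in row-major order */
        for (i = 0; i < local_n0; ++i)
            for (j = 0; j < n1; ++j) {
                data[i*n1 + j][0] = my_function(local_0_start + i, j); /* hypothetical */
                data[i*n1 + j][1] = 0.0;
            }

        /* ... create an MPI plan and execute the transform here ... */

        fftw_free(data);
        MPI_Finalize();
        return 0;
    }

The essential point is that the allocation uses the return value alloc_local rather than local_n0 * n1.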
The advanced-interface ‘local_size’ function for multidimensional transforms returns the same three things (local_n0, local_0_start, and the total number of elements to allocate), but takes more inputs:
    ptrdiff_t fftw_mpi_local_size_many(int rnk, const ptrdiff_t *n,
                                       ptrdiff_t howmany,
                                       ptrdiff_t block0,
                                       MPI_Comm comm,
                                       ptrdiff_t *local_n0,
                                       ptrdiff_t *local_0_start);
The two-dimensional case above corresponds to rnk = 2 and an array n of length 2 with n[0] = n0 and n[1] = n1. This routine is for any rnk > 1; one-dimensional transforms have their own interface because they work slightly differently, as discussed below.
First, the advanced interface allows you to perform multiple transforms at once, of interleaved data, as specified by the howmany parameter. (howmany is 1 for a single transform.)
Second, here you can specify your desired block size in the n0 dimension, block0. To use FFTW’s default block size, pass FFTW_MPI_DEFAULT_BLOCK (0) for block0. Otherwise, on P processes, FFTW will return local_n0 equal to block0 on the first P / block0 processes (rounded down), return local_n0 equal to n0 - block0 * (P / block0) on the next process, and local_n0 equal to zero on any remaining processes. In general, we recommend using the default block size (which corresponds to n0 / P, rounded up).
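As an illustration (a sketch, reusing the n0, n1, and data variables from the basic example above, with an arbitrarily chosen howmany of 3), the same distribution could be requested through the advanced interface like this:

    ptrdiff_t n[2], howmany = 3;   /* three interleaved transforms */
    ptrdiff_t alloc_local, local_n0, local_0_start;

    n[0] = n0;   /* 100 */
    n[1] = n1;   /* 200 */

    /* FFTW_MPI_DEFAULT_BLOCK (0) requests the default n0 / P (rounded up) block;
       with howmany == 1 this matches fftw_mpi_local_size_2d */
    alloc_local = fftw_mpi_local_size_many(2, n, howmany,
                                           FFTW_MPI_DEFAULT_BLOCK,
                                           MPI_COMM_WORLD,
                                           &local_n0, &local_0_start);

    data = fftw_alloc_complex(alloc_local);  /* storage for all howmany transforms */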
For example, suppose you have P = 4 processes and n0 = 21. The default will be a block size of 6, which will give local_n0 = 6 on the first three processes and local_n0 = 3 on the last process. Instead, however, you could specify block0 = 5 if you wanted, which would give local_n0 = 5 on processes 0 to 2 and local_n0 = 6 on process 3. (This choice, while it may look superficially more “balanced,” has the same critical path as FFTW’s default but requires more communications.)
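To spell out the arithmetic behind that default, the following standalone sketch (not part of the FFTW API; the rounding and cutoff logic is just one way to reproduce the behavior described above) computes the default block size and the resulting local_n0 on each rank for P = 4 and n0 = 21:

    #include <stdio.h>

    int main(void)
    {
        const long n0 = 21, P = 4;
        long block = (n0 + P - 1) / P;   /* default block: n0 / P rounded up = 6 */
        long rank;

        for (rank = 0; rank < P; ++rank) {
            long start = rank * block;
            long remaining = n0 - start;
            long local_n0 = remaining <= 0 ? 0 : (remaining < block ? remaining : block);
            printf("rank %ld: local_n0 = %ld\n", rank, local_n0);  /* prints 6, 6, 6, 3 */
        }
        return 0;
    }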