In particular, suppose that we have an n0 by n1 array in row-major order, block-distributed across the n0 dimension. To transpose this into an n1 by n0 array block-distributed across the n1 dimension, we would create a plan by calling the following function:
fftw_plan fftw_mpi_plan_transpose(ptrdiff_t n0, ptrdiff_t n1,
                                  double *in, double *out,
                                  MPI_Comm comm, unsigned flags);
The input and output arrays (in and out) can be the same. The transpose is actually executed by calling fftw_execute on the plan, as usual.
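For example, a minimal sketch of creating and running such a plan might look like the following. This assumes that in and out have already been allocated (e.g. with fftw_alloc_real, using the size returned by fftw_mpi_local_size_2d_transposed, described below) and that fftw_mpi_init has been called:

/* Sketch: plan and execute an n0 x n1 distributed transpose.
   Assumes in/out were allocated using the size from
   fftw_mpi_local_size_2d_transposed (see below) and that
   fftw_mpi_init() has already been called. */
fftw_plan plan = fftw_mpi_plan_transpose(n0, n1, in, out,
                                         MPI_COMM_WORLD, FFTW_ESTIMATE);
fftw_execute(plan);       /* globally transpose in -> out */
fftw_destroy_plan(plan);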
The flags are the usual FFTW planner flags, but support two additional flags: FFTW_MPI_TRANSPOSED_OUT and/or FFTW_MPI_TRANSPOSED_IN. What these flags indicate, for transpose plans, is that the output and/or input, respectively, are locally transposed. That is, on each process the input data is normally stored as a local_n0 by n1 array in row-major order, but for an FFTW_MPI_TRANSPOSED_IN plan the input data is stored as n1 by local_n0 in row-major order. Similarly, FFTW_MPI_TRANSPOSED_OUT means that the output is n0 by local_n1 instead of local_n1 by n0.
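To make the layouts concrete, here is one way the local indexing works out on a given process. This is an illustrative sketch derived from the description above, using local_n1 and local_1_start as returned by fftw_mpi_local_size_2d_transposed (described below):

/* Global element (i0, i1) of the original n0 x n1 array becomes element
   (i1, i0) of the transposed n1 x n0 output, owned by the process with
   local_1_start <= i1 < local_1_start + local_n1.  (Illustrative sketch.) */
ptrdiff_t j1 = i1 - local_1_start;     /* local index along the n1 dimension */

double a = out[j1 * n0 + i0];          /* default output: local_n1 x n0, row-major */
double b = out[i0 * local_n1 + j1];    /* FFTW_MPI_TRANSPOSED_OUT: n0 x local_n1 */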
To determine the local size of the array on each process before and after the transpose, as well as the amount of storage that must be allocated, one should call fftw_mpi_local_size_2d_transposed, just as for a 2d DFT as described in the previous section:
ptrdiff_t fftw_mpi_local_size_2d_transposed
                (ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
                 ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
                 ptrdiff_t *local_n1, ptrdiff_t *local_1_start);
Again, the return value is the local storage to allocate, which in this case is the number of real (double) values rather than complex numbers as in the previous examples.
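Putting the pieces together, a complete program might look roughly like the following. This is a sketch, not taken from the manual verbatim: the sizes n0 and n1 are arbitrary, and error checking and actual data initialization are omitted.

#include <mpi.h>
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    const ptrdiff_t n0 = 128, n1 = 256;   /* arbitrary example sizes */
    ptrdiff_t local_n0, local_0_start, local_n1, local_1_start;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* local storage needed on this process, in units of double */
    ptrdiff_t alloc_local = fftw_mpi_local_size_2d_transposed(
        n0, n1, MPI_COMM_WORLD,
        &local_n0, &local_0_start, &local_n1, &local_1_start);

    double *in  = fftw_alloc_real(alloc_local);
    double *out = fftw_alloc_real(alloc_local);

    fftw_plan plan = fftw_mpi_plan_transpose(n0, n1, in, out,
                                             MPI_COMM_WORLD, FFTW_ESTIMATE);

    /* ... fill in[] as a local_n0 x n1 block in row-major order ... */

    fftw_execute(plan);   /* out[] now holds a local_n1 x n0 block */

    fftw_destroy_plan(plan);
    fftw_free(in);
    fftw_free(out);
    fftw_mpi_cleanup();
    MPI_Finalize();
    return 0;
}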