Next: Advanced distributed-transpose interface, Previous: FFTW MPI Transposes, Up: FFTW MPI Transposes [Contents][Index]
In particular, suppose that we have an n0 by n1 array in
row-major order, block-distributed across the n0 dimension.  To
transpose this into an n1 by n0 array block-distributed
across the n1 dimension, we would create a plan by calling the
following function:

fftw_plan fftw_mpi_plan_transpose(ptrdiff_t n0, ptrdiff_t n1,
                                  double *in, double *out,
                                  MPI_Comm comm, unsigned flags);
The input and output arrays (in and out) can be the
same.  The transpose is actually executed by calling
fftw_execute on the plan, as usual.

The flags are the usual FFTW planner flags, but support
two additional flags: FFTW_MPI_TRANSPOSED_OUT and/or
FFTW_MPI_TRANSPOSED_IN.  For transpose plans, these flags indicate
that the output and/or input, respectively, are
locally transposed.  That is, on each process the input data is
normally stored as a local_n0 by n1 array in row-major
order, but for an FFTW_MPI_TRANSPOSED_IN plan the input data is
stored as n1 by local_n0 in row-major order.  Similarly,
FFTW_MPI_TRANSPOSED_OUT means that the output is n0 by
local_n1 instead of local_n1 by n0.

To determine the local size of the array on each process before and
after the transpose, as well as the amount of storage that must be
allocated, one should call fftw_mpi_local_size_2d_transposed,
just as for a 2d DFT as described in the previous section:

ptrdiff_t fftw_mpi_local_size_2d_transposed
              (ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
               ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
               ptrdiff_t *local_n1, ptrdiff_t *local_1_start);
Again, the return value is the local storage to allocate, which in
this case is the number of real (double) values rather
than complex numbers as in the previous examples.
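Putting the pieces together, a typical in-place use of the transpose
interface might look like the following sketch (the global sizes n0 = 128
and n1 = 256 are arbitrary example values, and error checking is omitted;
this must be run under MPI, e.g. via mpirun):

```c
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    const ptrdiff_t n0 = 128, n1 = 256;  /* example global sizes */
    ptrdiff_t alloc_local, local_n0, local_0_start,
              local_n1, local_1_start;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* Find the local storage needed and which rows this process owns
       before (local_n0 rows) and after (local_n1 rows) the transpose. */
    alloc_local = fftw_mpi_local_size_2d_transposed(
        n0, n1, MPI_COMM_WORLD,
        &local_n0, &local_0_start,
        &local_n1, &local_1_start);

    double *data = fftw_alloc_real(alloc_local);

    /* Plan an in-place global transpose: n0-by-n1 -> n1-by-n0. */
    fftw_plan plan = fftw_mpi_plan_transpose(n0, n1, data, data,
                                             MPI_COMM_WORLD,
                                             FFTW_ESTIMATE);

    /* ... initialize data[i0*n1 + i1] for i0 in [0, local_n0) ... */

    fftw_execute(plan);

    /* ... data now holds local_n1 rows of the n1-by-n0 result ... */

    fftw_destroy_plan(plan);
    fftw_free(data);
    fftw_mpi_cleanup();
    MPI_Finalize();
    return 0;
}
```

Note that because the transpose operates on real (double) data, the
allocation uses fftw_alloc_real with the count returned above, not
fftw_alloc_complex as in the DFT examples.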