6.7.1 Basic distributed-transpose interface


In particular, suppose that we have an n0 by n1 array in row-major order, block-distributed across the n0 dimension. To transpose this into an n1 by n0 array block-distributed across the n1 dimension, we would create a plan by calling the following function:

     fftw_plan fftw_mpi_plan_transpose(ptrdiff_t n0, ptrdiff_t n1,
                                       double *in, double *out,
                                       MPI_Comm comm, unsigned flags);

The input and output arrays (in and out) can be the same. The transpose is actually executed by calling fftw_execute on the plan, as usual.
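
For example, an in-place transpose might be planned and executed as in the following rough sketch (here N0, N1, and the data pointer are placeholder names, not part of the FFTW API):

     /* Minimal sketch: plan an in-place N0 x N1 transpose, then execute it.
        FFTW_MEASURE may overwrite the array while planning, so fill in the
        input only after the plan has been created. */
     fftw_plan plan = fftw_mpi_plan_transpose(N0, N1, data, data,
                                              MPI_COMM_WORLD, FFTW_MEASURE);
     /* ... fill the local_n0 x N1 input block of data here ... */
     fftw_execute(plan);   /* data now holds local_n1 rows of the
                              transposed N1 x N0 array */
     fftw_destroy_plan(plan);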

The flags are the usual FFTW planner flags, but support two additional flags: FFTW_MPI_TRANSPOSED_OUT and/or FFTW_MPI_TRANSPOSED_IN. What these flags indicate, for transpose plans, is that the output and/or input, respectively, are locally transposed. That is, on each process the input data is normally stored as a local_n0 by n1 array in row-major order, but for an FFTW_MPI_TRANSPOSED_IN plan the input data is stored as n1 by local_n0 in row-major order. Similarly, FFTW_MPI_TRANSPOSED_OUT means that the output is n0 by local_n1 instead of local_n1 by n0.
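
To make the effect of these flags concrete, the following sketch contrasts how one output element would be addressed in the two layouts (out, i0, and i1 are illustrative names; local_n1 comes from the sizing call described below):

     /* Sketch of output indexing, where 0 <= i1 < local_n1 and
        0 <= i0 < n0 (the n0 dimension is complete on each process
        after the transpose). */

     /* Default layout: local_n1 by n0, row-major. */
     double y = out[i1 * n0 + i0];

     /* With FFTW_MPI_TRANSPOSED_OUT: n0 by local_n1, row-major. */
     double y_transposed = out[i0 * local_n1 + i1];

Both expressions refer to the same logical element of the transposed array; only the local storage order differs.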

To determine the local size of the array on each process before and after the transpose, as well as the amount of storage that must be allocated, one should call fftw_mpi_local_size_2d_transposed, just as for a 2d DFT as described in the previous section:

     ptrdiff_t fftw_mpi_local_size_2d_transposed
                     (ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
                      ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
                      ptrdiff_t *local_n1, ptrdiff_t *local_1_start);

Again, the return value is the local storage to allocate, which in this case is the number of real (double) values rather than complex numbers as in the previous examples.
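
For instance, a sketch of the sizing and allocation step might look like this (N0 and N1 are placeholder global dimensions; the resulting buffer can then be passed as both in and out to fftw_mpi_plan_transpose for an in-place transpose):

     /* Sketch: query local sizes and allocate the buffer
        (N0 and N1 are placeholder global dimensions). */
     ptrdiff_t local_n0, local_0_start, local_n1, local_1_start;
     ptrdiff_t alloc_local = fftw_mpi_local_size_2d_transposed(
                                 N0, N1, MPI_COMM_WORLD,
                                 &local_n0, &local_0_start,
                                 &local_n1, &local_1_start);

     /* The return value counts real (double) values, so allocate with
        fftw_alloc_real rather than fftw_alloc_complex. */
     double *data = fftw_alloc_real(alloc_local);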