Advanced distributed-transpose interface - FFTW 3.3.3
6.7.2 Advanced distributed-transpose interface


The above routines are for a transpose of a matrix of numbers (of type double), using FFTW's default block sizes. More generally, one can perform transposes of tuples of numbers, with user-specified block sizes for the input and output:

     fftw_plan fftw_mpi_plan_many_transpose
                     (ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t howmany,
                      ptrdiff_t block0, ptrdiff_t block1,
                      double *in, double *out, MPI_Comm comm, unsigned flags);

In this case, one is transposing an n0 by n1 matrix of howmany-tuples (e.g. howmany = 2 for complex numbers). The input is distributed along the n0 dimension with block size block0, and the n1 by n0 output is distributed along the n1 dimension with block size block1. If FFTW_MPI_DEFAULT_BLOCK (0) is passed for a block size then FFTW uses its default block size. To get the local size of the data on each process, you should then call fftw_mpi_local_size_many_transposed.
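As a sketch of how these pieces fit together (the matrix dimensions and HOWMANY value below are illustrative assumptions, and an installed FFTW MPI library plus an MPI launcher are presumed), one might allocate, plan, and execute such a transpose as follows:

```c
/* Sketch: transpose a distributed N0 x N1 matrix of pairs (HOWMANY = 2,
 * as for complex numbers), using FFTW's default block sizes.
 * N0, N1, and HOWMANY here are arbitrary illustrative values. */
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    const ptrdiff_t N0 = 128, N1 = 256, HOWMANY = 2;
    ptrdiff_t local_n0, local_0_start, local_n1, local_1_start;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* Ask FFTW how many doubles this process must allocate, and which
     * rows it owns before (local_n0) and after (local_n1) the transpose. */
    ptrdiff_t alloc_local = fftw_mpi_local_size_many_transposed(
        2, (ptrdiff_t[]){N0, N1}, HOWMANY,
        FFTW_MPI_DEFAULT_BLOCK, FFTW_MPI_DEFAULT_BLOCK, MPI_COMM_WORLD,
        &local_n0, &local_0_start, &local_n1, &local_1_start);

    double *in  = fftw_alloc_real(alloc_local);
    double *out = fftw_alloc_real(alloc_local);

    fftw_plan p = fftw_mpi_plan_many_transpose(
        N0, N1, HOWMANY, FFTW_MPI_DEFAULT_BLOCK, FFTW_MPI_DEFAULT_BLOCK,
        in, out, MPI_COMM_WORLD, FFTW_ESTIMATE);

    /* ... fill in[] with local_n0 rows of N1*HOWMANY doubles each ... */
    fftw_execute(p);
    /* out[] now holds local_n1 rows of N0*HOWMANY doubles each. */

    fftw_destroy_plan(p);
    fftw_free(in);
    fftw_free(out);
    MPI_Finalize();
    return 0;
}
```

Note that the same sizing call serves both arrays: alloc_local is the maximum of the input and output storage requirements on this process, so one allocation size suffices for in and out.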