6.7.2 Advanced distributed-transpose interface


The above routines are for a transpose of a matrix of numbers (of type double), using FFTW's default block sizes. More generally, one can perform transposes of tuples of numbers, with user-specified block sizes for the input and output:

     fftw_plan fftw_mpi_plan_many_transpose
                     (ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t howmany,
                      ptrdiff_t block0, ptrdiff_t block1,
                      double *in, double *out, MPI_Comm comm, unsigned flags);

In this case, one is transposing an n0 by n1 matrix of howmany-tuples (e.g. howmany = 2 for complex numbers). The input is distributed along the n0 dimension with block size block0, and the n1 by n0 output is distributed along the n1 dimension with block size block1. If FFTW_MPI_DEFAULT_BLOCK (0) is passed for a block size then FFTW uses its default block size. To get the local size of the data on each process, you should then call fftw_mpi_local_size_many_transposed.
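
For example, here is a minimal sketch of planning and executing an in-place transpose of an N0 by N1 matrix of complex numbers stored as howmany = 2 doubles per element, with default block sizes. The dimensions N0 and N1 are arbitrary illustration values, and the sketch assumes (as for the other fftw_mpi_local_size functions) that the value returned by fftw_mpi_local_size_many_transposed is the total number of doubles to allocate locally, including the howmany factor:

     #include <fftw3-mpi.h>

     int main(int argc, char **argv)
     {
         const ptrdiff_t N0 = 128, N1 = 96, howmany = 2; /* example sizes */
         const ptrdiff_t n[2] = {N0, N1};
         ptrdiff_t alloc_local, local_n0, local_0_start, local_n1, local_1_start;
         ptrdiff_t i, j, k;
         double *data;
         fftw_plan plan;

         MPI_Init(&argc, &argv);
         fftw_mpi_init();

         /* how many doubles to allocate locally, and which rows this
            process owns before (local_n0) and after (local_n1) the
            transpose */
         alloc_local = fftw_mpi_local_size_many_transposed
                           (2, n, howmany,
                            FFTW_MPI_DEFAULT_BLOCK, FFTW_MPI_DEFAULT_BLOCK,
                            MPI_COMM_WORLD,
                            &local_n0, &local_0_start,
                            &local_n1, &local_1_start);
         data = fftw_alloc_real(alloc_local);

         /* plan an in-place transpose of the howmany-tuples */
         plan = fftw_mpi_plan_many_transpose
                    (N0, N1, howmany,
                     FFTW_MPI_DEFAULT_BLOCK, FFTW_MPI_DEFAULT_BLOCK,
                     data, data, MPI_COMM_WORLD, FFTW_ESTIMATE);

         /* initialize the local rows: the tuple at global row
            local_0_start + i, column j occupies howmany contiguous
            doubles starting at data[(i*N1 + j)*howmany] */
         for (i = 0; i < local_n0; ++i)
             for (j = 0; j < N1; ++j)
                 for (k = 0; k < howmany; ++k)
                     data[(i*N1 + j)*howmany + k]
                         = (double) ((local_0_start + i)*N1 + j);

         fftw_execute(plan);
         /* data now holds this process's local_n1 rows of the
            N1 by N0 transposed matrix */

         fftw_destroy_plan(plan);
         fftw_free(data);
         MPI_Finalize();
         return 0;
     }

As with the basic interface, passing the same pointer for in and out (as above) requests an in-place transpose; an out-of-place transpose would simply use a second array of alloc_local doubles for out.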