MPI Data Distribution Functions - FFTW 3.3.3
Next: , Previous: Using MPI Plans, Up: FFTW MPI Reference
6.12.4 MPI Data Distribution Functions

As described above (see MPI Data Distribution), in order to allocate your arrays, before creating a plan, you must first call one of the following routines to determine the required allocation size and the portion of the array locally stored on a given process. The MPI_Comm communicator passed here must be equivalent to the communicator used below for plan creation.

The basic interface for multidimensional transforms consists of the functions:

     ptrdiff_t fftw_mpi_local_size_2d(ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
                                      ptrdiff_t *local_n0, ptrdiff_t *local_0_start);
     ptrdiff_t fftw_mpi_local_size_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2,
                                      MPI_Comm comm,
                                      ptrdiff_t *local_n0, ptrdiff_t *local_0_start);
     ptrdiff_t fftw_mpi_local_size(int rnk, const ptrdiff_t *n, MPI_Comm comm,
                                   ptrdiff_t *local_n0, ptrdiff_t *local_0_start);
     
     ptrdiff_t fftw_mpi_local_size_2d_transposed(ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
                                                 ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
                                                 ptrdiff_t *local_n1, ptrdiff_t *local_1_start);
     ptrdiff_t fftw_mpi_local_size_3d_transposed(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2,
                                                 MPI_Comm comm,
                                                 ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
                                                 ptrdiff_t *local_n1, ptrdiff_t *local_1_start);
     ptrdiff_t fftw_mpi_local_size_transposed(int rnk, const ptrdiff_t *n, MPI_Comm comm,
                                              ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
                                              ptrdiff_t *local_n1, ptrdiff_t *local_1_start);
These functions return the number of elements to allocate (complex numbers for DFT/r2c/c2r plans, real numbers for r2r plans), whereas the local_n0 and local_0_start return the portion (local_0_start to local_0_start + local_n0 - 1) of the first dimension of an n0 × n1 × n2 × … × nd-1 array that is stored on the local process. See Basic and advanced distribution interfaces. For FFTW_MPI_TRANSPOSED_OUT plans, the ‘_transposed’ variants are useful in order to also return the local portion of the first dimension in the n1 × n0 × n2 × … × nd-1 transposed output. See Transposed distributions.

The advanced interface for multidimensional transforms is:

     ptrdiff_t fftw_mpi_local_size_many(int rnk, const ptrdiff_t *n, ptrdiff_t howmany,
                                        ptrdiff_t block0, MPI_Comm comm,
                                        ptrdiff_t *local_n0, ptrdiff_t *local_0_start);
     ptrdiff_t fftw_mpi_local_size_many_transposed(int rnk, const ptrdiff_t *n, ptrdiff_t howmany,
                                                   ptrdiff_t block0, ptrdiff_t block1, MPI_Comm comm,
                                                   ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
                                                   ptrdiff_t *local_n1, ptrdiff_t *local_1_start);
These differ from the basic interface in only two ways. First, they allow you to specify block sizes block0 and block1 (the latter for the transposed output); you can pass FFTW_MPI_DEFAULT_BLOCK to use FFTW's default block size as in the basic interface. Second, you can pass a howmany parameter, corresponding to the advanced planning interface below: this is for transforms of contiguous howmany-tuples of numbers (howmany = 1 in the basic interface).

The corresponding basic and advanced routines for one-dimensional transforms (currently only complex DFTs) are:

     ptrdiff_t fftw_mpi_local_size_1d(
                  ptrdiff_t n0, MPI_Comm comm, int sign, unsigned flags,
                  ptrdiff_t *local_ni, ptrdiff_t *local_i_start,
                  ptrdiff_t *local_no, ptrdiff_t *local_o_start);
     ptrdiff_t fftw_mpi_local_size_many_1d(
                  ptrdiff_t n0, ptrdiff_t howmany,
                  MPI_Comm comm, int sign, unsigned flags,
                  ptrdiff_t *local_ni, ptrdiff_t *local_i_start,
                  ptrdiff_t *local_no, ptrdiff_t *local_o_start);
As above, the return value is the number of elements to allocate (complex numbers, for complex DFTs). The local_ni and local_i_start arguments return the portion (local_i_start to local_i_start + local_ni - 1) of the 1d array that is stored on this process for the transform input, and local_no and local_o_start are the corresponding quantities for the output. The sign (FFTW_FORWARD or FFTW_BACKWARD) and flags must match the arguments passed when creating a plan. Although the inputs and outputs have different data distributions in general, it is guaranteed that the output data distribution of an FFTW_FORWARD plan will match the input data distribution of an FFTW_BACKWARD plan and vice versa; similarly for the FFTW_MPI_SCRAMBLED_OUT and FFTW_MPI_SCRAMBLED_IN flags. See One-dimensional distributions.