6.4.1 Basic and advanced distribution interfaces


As with the planner interface, the ‘fftw_mpi_local_size’ distribution interface is broken into basic and advanced (‘_many’) interfaces, where the latter allows you to specify the block size manually and also to request block sizes when computing multiple transforms simultaneously. These functions are documented more exhaustively by the FFTW MPI Reference, but we summarize the basic ideas here using a couple of two-dimensional examples.


For the 100 × 200 complex-DFT example above, we would find the distribution by calling the following function in the basic interface:

ptrdiff_t fftw_mpi_local_size_2d(ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
                                 ptrdiff_t *local_n0, ptrdiff_t *local_0_start);

Given the total size of the data to be transformed (here, n0 = 100 and n1 = 200) and an MPI communicator (comm), this function provides three numbers.


First, it describes the shape of the local data: the current process should store a local_n0 by n1 slice of the overall dataset, in row-major order (n1 dimension contiguous), starting at index local_0_start. That is, if the total dataset is viewed as an n0 by n1 matrix, the current process should store the rows local_0_start to local_0_start+local_n0-1. Obviously, if you are running with only a single MPI process, that process will store the entire array: local_0_start will be zero and local_n0 will be n0. See Row-major Format.


Second, the return value is the total number of data elements (e.g., complex numbers for a complex DFT) that should be allocated for the input and output arrays on the current process (ideally with fftw_malloc or an ‘fftw_alloc’ function, to ensure optimal alignment). It might seem that this should always be equal to local_n0 * n1, but this is not the case. FFTW’s distributed FFT algorithms require data redistributions at intermediate stages of the transform, and in some circumstances this may require slightly larger local storage. This is discussed in more detail below, under Load balancing.
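Putting these pieces together, a minimal sketch of how this query is typically used might look as follows. The initialization function my_function is a hypothetical placeholder, and the plan creation and execution steps are elided:

#include <fftw3-mpi.h>

/* Placeholder for whatever data you wish to transform. */
static double my_function(ptrdiff_t i, ptrdiff_t j)
{
    return (double) (i + j);
}

int main(int argc, char **argv)
{
    const ptrdiff_t n0 = 100, n1 = 200;
    fftw_complex *data;
    ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* Ask FFTW how this process's slice of the n0 x n1 array is
       distributed, and how many elements to allocate locally. */
    alloc_local = fftw_mpi_local_size_2d(n0, n1, MPI_COMM_WORLD,
                                         &local_n0, &local_0_start);

    /* alloc_local may exceed local_n0 * n1 (see Load balancing below). */
    data = fftw_alloc_complex(alloc_local);

    /* Initialize the local slice: local row i is global row
       local_0_start + i, stored in row-major order. */
    for (i = 0; i < local_n0; ++i)
        for (j = 0; j < n1; ++j) {
            data[i*n1 + j][0] = my_function(local_0_start + i, j); /* real part */
            data[i*n1 + j][1] = 0.0;                               /* imaginary part */
        }

    /* ... create and execute an fftw_mpi plan here ... */

    fftw_free(data);
    MPI_Finalize();
    return 0;
}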


The advanced-interface ‘local_size’ function for multidimensional transforms returns the same three things (local_n0, local_0_start, and the total number of elements to allocate), but takes more inputs:

ptrdiff_t fftw_mpi_local_size_many(int rnk, const ptrdiff_t *n,
                                   ptrdiff_t howmany,
                                   ptrdiff_t block0,
                                   MPI_Comm comm,
                                   ptrdiff_t *local_n0,
                                   ptrdiff_t *local_0_start);

The two-dimensional case above corresponds to rnk = 2 and an array n of length 2 with n[0] = n0 and n[1] = n1. This routine is for any rnk > 1; one-dimensional transforms have their own interface because they work slightly differently, as discussed below.


First, the advanced interface allows you to perform multiple transforms at once, of interleaved data, as specified by the howmany parameter. (howmany is 1 for a single transform.)


Second, here you can specify your desired block size in the n0 dimension, block0. To use FFTW’s default block size, pass FFTW_MPI_DEFAULT_BLOCK (0) for block0. Otherwise, on P processes, FFTW will return local_n0 equal to block0 on the first n0 / block0 processes (rounded down), return local_n0 equal to n0 - block0 * (n0 / block0) on the next process, and local_n0 equal to zero on any remaining processes. In general, we recommend using the default block size (which corresponds to n0 / P, rounded up).


For example, suppose you have P = 4 processes and n0 = 21. The default will be a block size of 6, which will give local_n0 = 6 on the first three processes and local_n0 = 3 on the last process. Instead, however, you could specify block0 = 7 if you wanted, which would give local_n0 = 7 on processes 0 to 2 and local_n0 = 0 on process 3. (Note that a block size must satisfy block0 * P >= n0 so that every row is assigned to some process; block0 = 7 leaves the last process idle and lengthens the per-process critical path from 6 rows to 7, so the default is usually the better choice.)
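To illustrate the advanced interface, here is a sketch that queries the distribution for this same n0 = 21 example, with an arbitrarily chosen n1 = 200 and two interleaved transforms, using the default block size; on P = 4 processes it reports the 6/6/6/3 distribution described above:

#include <stdio.h>
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    const ptrdiff_t n[2] = {21, 200};  /* n[0] = n0 = 21; n1 = 200 is arbitrary */
    ptrdiff_t alloc_local, local_n0, local_0_start;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    alloc_local = fftw_mpi_local_size_many(
        2,                       /* rnk = 2: two-dimensional transforms */
        n,
        2,                       /* howmany = 2: two interleaved transforms */
        FFTW_MPI_DEFAULT_BLOCK,  /* block0 = 0: default, n0 / P rounded up */
        MPI_COMM_WORLD,
        &local_n0, &local_0_start);

    /* On P = 4 processes: local_n0 = 6, 6, 6, 3 on ranks 0 to 3. */
    printf("local_n0 = %td, local_0_start = %td, allocate %td elements\n",
           local_n0, local_0_start, alloc_local);

    MPI_Finalize();
    return 0;
}

Note that the returned alloc_local accounts for all howmany interleaved transforms, so a single fftw_alloc_complex(alloc_local) buffer holds the data for both.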
