Before we document the FFTW MPI interface in detail, we begin with a
simple example outlining how one would perform a two-dimensional
N0 by N1 complex DFT.

     #include <fftw3-mpi.h>

     int main(int argc, char **argv)
     {
         const ptrdiff_t N0 = ..., N1 = ...;
         fftw_plan plan;
         fftw_complex *data;
         ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

         MPI_Init(&argc, &argv);
         fftw_mpi_init();

         /* get local data size and allocate */
         alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                              &local_n0, &local_0_start);
         data = (fftw_complex *) fftw_malloc(sizeof(fftw_complex) * alloc_local);

         /* create plan for in-place forward DFT */
         plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                     FFTW_FORWARD, FFTW_ESTIMATE);

         /* initialize data to some function my_function(x,y) */
         for (i = 0; i < local_n0; ++i) for (j = 0; j < N1; ++j)
             data[i*N1 + j] = my_function(local_0_start + i, j);

         /* compute transforms, in-place, as many times as desired */
         fftw_execute(plan);

         fftw_destroy_plan(plan);

         MPI_Finalize();
     }

As can be seen above, the MPI interface follows the same basic style
of allocate/plan/execute/destroy as the serial FFTW routines. All of
the MPI-specific routines are prefixed with `fftw_mpi_' instead
of `fftw_'. There are a few important differences, however:

First, we must call fftw_mpi_init() after calling
MPI_Init (required in all MPI programs) and before calling any
other `fftw_mpi_' routine.
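
For instance, the required ordering of the setup and teardown calls
looks like the following minimal sketch. (fftw_mpi_cleanup is the MPI
analogue of the serial fftw_cleanup and deallocates FFTW's internal
data; the body of the program is elided.)

     #include <fftw3-mpi.h>

     int main(int argc, char **argv)
     {
         MPI_Init(&argc, &argv);   /* initialize MPI first */
         fftw_mpi_init();          /* then FFTW's MPI layer */

         /* ... create, execute, and destroy plans as above ... */

         fftw_mpi_cleanup();       /* deallocate FFTW's internal data */
         MPI_Finalize();           /* shut down MPI last */
         return 0;
     }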
Second, when we create the plan with fftw_mpi_plan_dft_2d,
analogous to fftw_plan_dft_2d, we pass an additional argument:
the communicator, indicating which processes will participate in the
transform (here MPI_COMM_WORLD, indicating all processes).
Whenever you create, execute, or destroy a plan for an MPI transform,
you must call the corresponding FFTW routine on all processes
in the communicator for that transform. (That is, these are
collective calls.) Note that the plan for the MPI transform
uses the standard fftw_execute and fftw_destroy_plan
routines (the new-array execute routines also work).
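
As an illustration of the new-array execute routines, the fragment
below (a sketch continuing the example above, reusing plan and
alloc_local) applies the same plan to a second array of the same local
size. fftw_mpi_execute_dft is the MPI new-array execute function for
complex DFTs; like fftw_execute, it must be called on all processes in
the plan's communicator.

     /* second array with the same local size as data */
     fftw_complex *data2 =
         (fftw_complex *) fftw_malloc(sizeof(fftw_complex) * alloc_local);

     /* ... initialize data2, as was done for data ... */

     /* collective new-array execution of the same plan, in-place */
     fftw_mpi_execute_dft(plan, data2, data2);

     fftw_free(data2);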
Third, all of the FFTW MPI routines take ptrdiff_t arguments
instead of int as for the serial FFTW. ptrdiff_t is a
standard C integer type which is (at least) 32 bits wide on a 32-bit
machine and 64 bits wide on a 64-bit machine. This is to make it easy
to specify very large parallel transforms on a 64-bit machine. (You
can specify 64-bit transform sizes in the serial FFTW, too, but only
by using the `guru64' planner interface. See 64-bit Guru Interface.)
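
For example, a transform whose total size overflows a 32-bit int
poses no problem when the dimensions are declared as ptrdiff_t. The
sizes below are hypothetical, and the fragment assumes <stdio.h> has
been included.

     /* 2^17 x 2^17 = 2^34 points: too many for a 32-bit int */
     const ptrdiff_t N0 = 131072, N1 = 131072;
     ptrdiff_t total = N0 * N1;       /* fine on a 64-bit machine */
     printf("%td points\n", total);   /* %td prints a ptrdiff_t */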
Fourth, and most importantly, you don't allocate the entire
two-dimensional array on each process. Instead, you call
fftw_mpi_local_size_2d to find out what portion of the
array resides on each processor, and how much space to allocate.
Here, the portion of the array on each process is a local_n0 by
N1 slice of the total array, starting at index
local_0_start. The total number of fftw_complex numbers
to allocate is given by the alloc_local return value, which
may be greater than local_n0 * N1 (in case some
intermediate calculations require additional storage). The data
distribution in FFTW's MPI interface is described in more detail by
the next section.
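
To make the distribution concrete, the following fragment (a sketch
reusing the variables from the example above, and assuming <stdio.h>
has been included) has each process report which rows of the global
N0 by N1 array it owns:

     int rank;
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     printf("process %d: rows %td ... %td (%td fftw_complex allocated)\n",
            rank, local_0_start, local_0_start + local_n0 - 1, alloc_local);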
Given the portion of the array that resides on the local process, it
is straightforward to initialize the data (here to a function
my_function) and otherwise manipulate it. Of course, at the end
of the program you may want to output the data somehow, but
synchronizing this output is up to you and is beyond the scope of this
manual. (One good way to output a large multi-dimensional distributed
array in MPI to a portable binary file is to use the free HDF5
library; see the HDF home page.)