Before we document the FFTW MPI interface in detail, we begin with a
simple example outlining how one would perform a two-dimensional
N0 by N1 complex DFT.
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    const ptrdiff_t N0 = ..., N1 = ...;
    fftw_plan plan;
    fftw_complex *data;
    ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* get local data size and allocate */
    alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                         &local_n0, &local_0_start);
    data = fftw_alloc_complex(alloc_local);

    /* create plan for in-place forward DFT */
    plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                FFTW_FORWARD, FFTW_ESTIMATE);

    /* initialize data to some function my_function(x,y) */
    for (i = 0; i < local_n0; ++i) for (j = 0; j < N1; ++j)
       data[i*N1 + j] = my_function(local_0_start + i, j);

    /* compute transforms, in-place, as many times as desired */
    fftw_execute(plan);

    fftw_destroy_plan(plan);

    MPI_Finalize();
}
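This example would typically be compiled with your MPI compiler wrapper and
linked against both the MPI and serial FFTW libraries, e.g. something like
‘mpicc example.c -lfftw3_mpi -lfftw3 -lm’ (the exact command and flags depend
on your installation; see the previous section on linking), and then launched
with mpirun or mpiexec.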
As can be seen above, the MPI interface follows the same basic style
of allocate/plan/execute/destroy as the serial FFTW routines. All of
the MPI-specific routines are prefixed with ‘fftw_mpi_’ instead
of ‘fftw_’. There are a few important differences, however:
First, we must call fftw_mpi_init() after calling
MPI_Init (required in all MPI programs) and before calling any
other ‘fftw_mpi_’ routine.
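A minimal sketch of this ordering, assuming the fftw_mpi_cleanup routine from
the previous section is also used (the body in between is where all
‘fftw_mpi_’ usage would go):

#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);   /* MPI must be initialized first */
    fftw_mpi_init();          /* then FFTW's MPI interface */

    /* ... create, execute, and destroy fftw_mpi plans here ... */

    fftw_mpi_cleanup();       /* deallocate FFTW's internal data, before MPI_Finalize */
    MPI_Finalize();
    return 0;
}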
Second, when we create the plan with fftw_mpi_plan_dft_2d,
analogous to fftw_plan_dft_2d, we pass an additional argument:
the communicator, indicating which processes will participate in the
transform (here MPI_COMM_WORLD, indicating all processes).
Whenever you create, execute, or destroy a plan for an MPI transform,
you must call the corresponding FFTW routine on all processes
in the communicator for that transform. (That is, these are
collective calls.) Note that the plan for the MPI transform
uses the standard fftw_execute and fftw_destroy_plan routines
(on the other hand, there are MPI-specific new-array execute functions
documented below).
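For instance, continuing the example above, the same plan could be applied to
a second, identically allocated array via the MPI new-array execute function
rather than the serial one; a brief sketch, where data2 is a hypothetical
second array allocated exactly like data:

fftw_complex *data2 = fftw_alloc_complex(alloc_local); /* same size and alignment as data */
/* ... initialize data2 on each process ... */

/* all processes in MPI_COMM_WORLD must make these calls together: */
fftw_execute(plan);                        /* transforms data in place */
fftw_mpi_execute_dft(plan, data2, data2);  /* same plan applied, in place, to data2 */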
Third, all of the FFTW MPI routines take ptrdiff_t arguments
instead of int as for the serial FFTW. ptrdiff_t is a
standard C integer type which is (at least) 32 bits wide on a 32-bit
machine and 64 bits wide on a 64-bit machine. This is to make it easy
to specify very large parallel transforms on a 64-bit machine. (You
can specify 64-bit transform sizes in the serial FFTW, too, but only
by using the ‘guru64’ planner interface. See 64-bit Guru Interface.)
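Continuing the example above, the sizes and local-size outputs are all
ptrdiff_t values and can be printed portably with the C99 ‘%td’ conversion
(this sketch assumes <stdio.h> has been included):

/* local_n0, local_0_start, and N1 are all ptrdiff_t; %td is the
   matching C99 printf conversion */
printf("this process holds %td rows of width %td, starting at global row %td\n",
       local_n0, N1, local_0_start);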
Fourth, and most importantly, you don’t allocate the entire
two-dimensional array on each process. Instead, you call
fftw_mpi_local_size_2d to find out what portion of the
array resides on each processor, and how much space to allocate.
Here, the portion of the array on each process is a local_n0 by
N1 slice of the total array, starting at index
local_0_start. The total number of fftw_complex numbers
to allocate is given by the alloc_local return value, which
may be greater than local_n0 * N1 (in case some
intermediate calculations require additional storage). The data
distribution in FFTW’s MPI interface is described in more detail in
the next section.
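To make the slab layout concrete, here is a small hypothetical helper (not
part of FFTW) that maps a global row index to this process’s portion of the
array, returning NULL for rows owned by other processes:

/* hypothetical helper, not an FFTW routine: locate global row global_i
   in this process's local_n0-by-N1 slab, which holds global rows
   [local_0_start, local_0_start + local_n0) in row-major order */
static fftw_complex *local_row(fftw_complex *data, ptrdiff_t N1,
                               ptrdiff_t local_0_start, ptrdiff_t local_n0,
                               ptrdiff_t global_i)
{
    if (global_i < local_0_start || global_i >= local_0_start + local_n0)
        return NULL;                               /* owned by another process */
    return data + (global_i - local_0_start) * N1; /* start of the local row */
}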
Given the portion of the array that resides on the local process, it
is straightforward to initialize the data (here to a function
my_function) and otherwise manipulate it. Of course, at the end
of the program you may want to output the data somehow, but
synchronizing this output is up to you and is beyond the scope of this
manual. (One good way to output a large multi-dimensional distributed
array in MPI to a portable binary file is to use the free HDF5
library; see the HDF home page.)
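For small debugging runs, one crude approach is to let each process take its
turn emitting its rows in global order; a hedged sketch (MPI does not strictly
guarantee ordered stdout, so this is no substitute for a real parallel I/O
library such as HDF5):

int rank, size, r;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
for (r = 0; r < size; ++r) {
    MPI_Barrier(MPI_COMM_WORLD);   /* wait for lower-ranked processes */
    if (r == rank) {
        /* ... print or write rows local_0_start .. local_0_start + local_n0 - 1
               of data here ... */
        fflush(stdout);
    }
}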