6.3 2d MPI example

Before we document the FFTW MPI interface in detail, we begin with a simple example outlining how one would perform a two-dimensional N0 by N1 complex DFT.

#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    const ptrdiff_t N0 = ..., N1 = ...;
    fftw_plan plan;
    fftw_complex *data;
    ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* get local data size and allocate */
    alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                         &local_n0, &local_0_start);
    data = fftw_alloc_complex(alloc_local);

    /* create plan for in-place forward DFT */
    plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                FFTW_FORWARD, FFTW_ESTIMATE);

    /* initialize data to some function my_function(x,y) */
    for (i = 0; i < local_n0; ++i) for (j = 0; j < N1; ++j)
       data[i*N1 + j] = my_function(local_0_start + i, j);

    /* compute transforms, in-place, as many times as desired */
    fftw_execute(plan);

    fftw_destroy_plan(plan);

    MPI_Finalize();
}

As can be seen above, the MPI interface follows the same basic style of allocate/plan/execute/destroy as the serial FFTW routines. All of the MPI-specific routines are prefixed with ‘fftw_mpi_’ instead of ‘fftw_’. There are a few important differences, however:

First, we must call fftw_mpi_init() after calling MPI_Init (required in all MPI programs) and before calling any other ‘fftw_mpi_’ routine.
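
The required ordering can be summarized in the following sketch, which is not part of the original example; fftw_mpi_cleanup is FFTW's MPI counterpart of fftw_cleanup, and calling it before MPI_Finalize (as shown) is optional:

#include <fftw3-mpi.h>

/* Sketch of the required call ordering only; the body is elided. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);   /* first: initialize MPI itself */
    fftw_mpi_init();          /* then: initialize FFTW's MPI interface */

    /* ... create, execute, and destroy fftw_mpi_ plans here ... */

    fftw_mpi_cleanup();       /* optional: release FFTW's internal data */
    MPI_Finalize();           /* last: shut down MPI */
    return 0;
}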

Second, when we create the plan with fftw_mpi_plan_dft_2d, analogous to fftw_plan_dft_2d, we pass an additional argument: the communicator, indicating which processes will participate in the transform (here MPI_COMM_WORLD, indicating all processes). Whenever you create, execute, or destroy a plan for an MPI transform, you must call the corresponding FFTW routine on all processes in the communicator for that transform. (That is, these are collective calls.) Note that the plan for the MPI transform uses the standard fftw_execute and fftw_destroy_plan routines (on the other hand, there are MPI-specific new-array execute functions documented below).
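
As a rough illustration of the collective requirement, consider the sketch below, which is not part of the original example (transform_both is a hypothetical helper name). Every rank in the communicator must make the same sequence of calls; the sketch also uses fftw_mpi_execute_dft, one of the MPI-specific new-array execute functions mentioned above, and assumes data2 was allocated with fftw_alloc_complex(alloc_local) just like data:

#include <fftw3-mpi.h>

/* Every process in comm must call each of these routines, in the same
   order; skipping one on some rank would hang the collective calls. */
static void transform_both(ptrdiff_t N0, ptrdiff_t N1,
                           fftw_complex *data, fftw_complex *data2,
                           MPI_Comm comm)
{
    fftw_plan plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, comm,
                                          FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(plan);                       /* in-place transform of data */
    fftw_mpi_execute_dft(plan, data2, data2); /* same plan, second array */
    fftw_destroy_plan(plan);
}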

Third, all of the FFTW MPI routines take ptrdiff_t arguments instead of int as for the serial FFTW. ptrdiff_t is a standard C integer type which is (at least) 32 bits wide on a 32-bit machine and 64 bits wide on a 64-bit machine. This is to make it easy to specify very large parallel transforms on a 64-bit machine. (You can specify 64-bit transform sizes in the serial FFTW, too, but only by using the ‘guru64’ planner interface. See 64-bit Guru Interface.)
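
To see why the wider type matters, here is a small sketch (not from the manual) in which the total transform size exceeds the range of a 32-bit int but, on a 64-bit machine, fits in ptrdiff_t:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
    /* Hypothetical large transform: 2^17 x 2^17 points. */
    ptrdiff_t N0 = 131072, N1 = 131072;
    ptrdiff_t total = N0 * N1;  /* 2^34: fits in a 64-bit ptrdiff_t, but
                                   would overflow a 32-bit int */
    printf("total points: %td\n", total);   /* %td prints a ptrdiff_t */
    return 0;
}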

Fourth, and most importantly, you don’t allocate the entire two-dimensional array on each process. Instead, you call fftw_mpi_local_size_2d to find out what portion of the array resides on each processor, and how much space to allocate. Here, the portion of the array on each process is a local_n0 by N1 slice of the total array, starting at index local_0_start. The total number of fftw_complex numbers to allocate is given by the alloc_local return value, which may be greater than local_n0 * N1 (in case some intermediate calculations require additional storage). The data distribution in FFTW’s MPI interface is described in more detail in the next section.
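
The following sketch is not part of the original example (report_local_size is a hypothetical helper name); it shows how one might inspect the slab decomposition on each rank and allocate the local block, noting that the allocation uses alloc_local rather than local_n0 * N1:

#include <stdio.h>
#include <fftw3-mpi.h>

/* Query this rank's portion of an N0 x N1 array, print it, and allocate
   the (possibly larger) local block of alloc_local complex numbers. */
static fftw_complex *report_local_size(ptrdiff_t N0, ptrdiff_t N1)
{
    ptrdiff_t local_n0, local_0_start;
    ptrdiff_t alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                                   &local_n0, &local_0_start);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("rank %d: rows [%td, %td), alloc_local = %td\n",
           rank, local_0_start, local_0_start + local_n0, alloc_local);
    return fftw_alloc_complex(alloc_local);
}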

Given the portion of the array that resides on the local process, it is straightforward to initialize the data (here to a function my_function) and otherwise manipulate it. Of course, at the end of the program you may want to output the data somehow, but synchronizing this output is up to you and is beyond the scope of this manual. (One good way to output a large multi-dimensional distributed array in MPI to a portable binary file is to use the free HDF5 library; see the HDF home page.)
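
When manipulating the local slab, it can also be convenient to map global indices to local storage. The sketch below assumes the row-major layout described above (rows local_0_start through local_0_start + local_n0 - 1, each of length N1); local_element is a hypothetical helper name, not an FFTW routine:

#include <stddef.h>
#include <fftw3.h>

/* Return a pointer to global element (i0, i1) if this process owns row i0,
   or NULL if that row lives on another process. */
static fftw_complex *local_element(fftw_complex *data, ptrdiff_t N1,
                                   ptrdiff_t local_n0, ptrdiff_t local_0_start,
                                   ptrdiff_t i0, ptrdiff_t i1)
{
    if (i0 < local_0_start || i0 >= local_0_start + local_n0)
        return NULL;
    return data + (i0 - local_0_start) * N1 + i1;
}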
