Before we document the FFTW MPI interface in detail, we begin with a
simple example outlining how one would perform a two-dimensional
N0 by N1 complex DFT.

     #include <fftw3-mpi.h>

     int main(int argc, char **argv)
     {
         const ptrdiff_t N0 = ..., N1 = ...;
         fftw_plan plan;
         fftw_complex *data;
         ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

         MPI_Init(&argc, &argv);
         fftw_mpi_init();

         /* get local data size and allocate */
         alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                              &local_n0, &local_0_start);
         data = fftw_alloc_complex(alloc_local);

         /* create plan for in-place forward DFT */
         plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                     FFTW_FORWARD, FFTW_ESTIMATE);

         /* initialize data to some function my_function(x,y) */
         for (i = 0; i < local_n0; ++i) for (j = 0; j < N1; ++j)
            data[i*N1 + j] = my_function(local_0_start + i, j);

         /* compute transforms, in-place, as many times as desired */
         fftw_execute(plan);

         fftw_destroy_plan(plan);

         MPI_Finalize();
     }

   As can be seen above, the MPI interface follows the same basic style
of allocate/plan/execute/destroy as the serial FFTW routines.  All of
the MPI-specific routines are prefixed with ‘fftw_mpi_’ instead of
‘fftw_’.  There are a few important differences, however:

   First, we must call fftw_mpi_init() after calling MPI_Init
(required in all MPI programs) and before calling any other
‘fftw_mpi_’ routine.

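A minimal sketch of this required ordering is shown below; it also
includes the optional fftw_mpi_cleanup() call, FFTW's MPI analogue of
fftw_cleanup(), which is not used in the example above.

     /* Minimal sketch of the initialization ordering only; the plan
        creation, execution, and destruction calls from the example
        above would go in the middle. */
     #include <fftw3-mpi.h>

     int main(int argc, char **argv)
     {
         MPI_Init(&argc, &argv);   /* initialize MPI first */
         fftw_mpi_init();          /* then FFTW's MPI interface */

         /* ... allocate, plan, execute, destroy ... */

         fftw_mpi_cleanup();       /* optional: frees FFTW's internal data;
                                      no fftw_mpi_ calls may follow */
         MPI_Finalize();           /* finalize MPI last */
         return 0;
     }
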
   Second, when we create the plan with fftw_mpi_plan_dft_2d,
analogous to fftw_plan_dft_2d, we pass an additional argument:
the communicator, indicating which processes will participate in the
transform (here MPI_COMM_WORLD, indicating all processes).
Whenever you create, execute, or destroy a plan for an MPI transform,
you must call the corresponding FFTW routine on all processes
in the communicator for that transform.  (That is, these are
collective calls.)  Note that the plan for the MPI transform
uses the standard fftw_execute and fftw_destroy_plan routines
(on the other hand, there are MPI-specific new-array execute functions
documented below).

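For example, the new-array execute routine for complex MPI transforms,
fftw_mpi_execute_dft, lets you re-apply the plan from the example
above to a different array of the same size and kind; the data2 array
here is an illustration, not part of the example.

     /* Sketch: apply the same in-place plan to a second array allocated
        with the same alloc_local size.  Like fftw_execute, this must be
        called collectively on every process in MPI_COMM_WORLD. */
     fftw_complex *data2 = fftw_alloc_complex(alloc_local);
     /* ... initialize data2 ... */
     fftw_mpi_execute_dft(plan, data2, data2);   /* in-place, as planned */
     fftw_free(data2);
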
   Third, all of the FFTW MPI routines take ptrdiff_t arguments
instead of int as for the serial FFTW.  ptrdiff_t is a
standard C integer type which is (at least) 32 bits wide on a 32-bit
machine and 64 bits wide on a 64-bit machine.  This is to make it easy
to specify very large parallel transforms on a 64-bit machine.  (You
can specify 64-bit transform sizes in the serial FFTW, too, but only
by using the ‘guru64’ planner interface.  See 64-bit Guru Interface.)

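For instance, a transform whose total number of elements overflows a
32-bit int is expressed naturally with ptrdiff_t (the sizes below are
illustrative, not taken from the example above).

     /* Illustrative sizes: 2^18 x 2^18 = 2^36 elements in total, which
        overflows a 32-bit int but fits in a 64-bit ptrdiff_t. */
     const ptrdiff_t N0 = 262144, N1 = 262144;
     ptrdiff_t total_elements = N0 * N1;   /* 68,719,476,736 */
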
   Fourth, and most importantly, you don't allocate the entire
two-dimensional array on each process.  Instead, you call
fftw_mpi_local_size_2d to find out what portion of the
array resides on each processor, and how much space to allocate.
Here, the portion of the array on each process is a local_n0 by
N1 slice of the total array, starting at index
local_0_start.  The total number of fftw_complex numbers
to allocate is given by the alloc_local return value, which
may be greater than local_n0 * N1 (in case some
intermediate calculations require additional storage).  The data
distribution in FFTW's MPI interface is described in more detail by
the next section.

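To see the slab decomposition concretely, each process can report
which rows it owns; the sketch below reuses the variables from the
example above and assumes <stdio.h> has been included.

     /* Sketch: each rank owns rows [local_0_start, local_0_start + local_n0)
        of the N0 x N1 array; alloc_local may exceed local_n0 * N1. */
     int rank;
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     printf("rank %d: rows %td to %td (%td rows), alloc_local = %td\n",
            rank, local_0_start, local_0_start + local_n0 - 1,
            local_n0, alloc_local);
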
   Given the portion of the array that resides on the local process, it
is straightforward to initialize the data (here to a function
my_function) and otherwise manipulate it.  Of course, at the end
of the program you may want to output the data somehow, but
synchronizing this output is up to you and is beyond the scope of this
manual.  (One good way to output a large multi-dimensional distributed
array in MPI to a portable binary file is to use the free HDF5
library; see the HDF home page.)

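If you prefer not to depend on an external library, a bare-bones
alternative is MPI's own parallel I/O; the sketch below writes each
process's slab of the example's data array as raw doubles to a
hypothetical file output.bin (no metadata, unlike a self-describing
HDF5 file), with error checking omitted.

     /* Sketch: each rank writes its local_n0 x N1 slab at the byte offset
        of row local_0_start, yielding one row-major N0 x N1 binary file.
        fftw_complex is two doubles, hence the factor of 2 in the count. */
     MPI_File fh;
     MPI_Offset offset =
         (MPI_Offset) local_0_start * N1 * sizeof(fftw_complex);
     MPI_File_open(MPI_COMM_WORLD, "output.bin",
                   MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
     MPI_File_write_at_all(fh, offset, data, (int) (local_n0 * N1 * 2),
                           MPI_DOUBLE, MPI_STATUS_IGNORE);
     MPI_File_close(&fh);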