6.12.3 Using MPI Plans

Once an MPI plan is created, you can execute and destroy it using fftw_execute, fftw_destroy_plan, and the other functions in the serial interface that operate on generic plans (see Using Plans).

The fftw_execute and fftw_destroy_plan functions, applied to MPI plans, are collective calls: they must be called for all processes in the communicator that was used to create the plan.
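
For example, a minimal sketch (not taken from the manual) of creating, executing, and destroying an MPI plan for an in-place 2d complex DFT might look like the following; the transform size and planner flags are illustrative:

     #include <mpi.h>
     #include <fftw3-mpi.h>

     int main(int argc, char **argv)
     {
         const ptrdiff_t N0 = 256, N1 = 256;   /* illustrative sizes */
         ptrdiff_t alloc_local, local_n0, local_0_start;

         MPI_Init(&argc, &argv);
         fftw_mpi_init();

         /* local storage needed for this process's slab of the N0 x N1 array */
         alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                              &local_n0, &local_0_start);
         fftw_complex *data = fftw_alloc_complex(alloc_local);

         /* plan creation, execution, and destruction are all collective */
         fftw_plan plan = fftw_mpi_plan_dft_2d(N0, N1, data, data,
                                               MPI_COMM_WORLD,
                                               FFTW_FORWARD, FFTW_ESTIMATE);

         /* ... initialize the local_n0 x N1 slab owned by this process ... */

         fftw_execute(plan);       /* serial interface, applied to an MPI plan */
         fftw_destroy_plan(plan);  /* must be called by every process */

         fftw_free(data);
         fftw_mpi_cleanup();
         MPI_Finalize();
         return 0;
     }

Here fftw_execute and fftw_destroy_plan are exactly the serial-interface calls; the only additional requirement is that every process in the plan's communicator makes them.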

You must not use the serial new-array plan-execution functions fftw_execute_dft and so on (see New-array Execute Functions) with MPI plans. Such functions are specialized to the problem type, and there are specific new-array execute functions for MPI plans:

     void fftw_mpi_execute_dft(fftw_plan p, fftw_complex *in, fftw_complex *out);
     void fftw_mpi_execute_dft_r2c(fftw_plan p, double *in, fftw_complex *out);
     void fftw_mpi_execute_dft_c2r(fftw_plan p, fftw_complex *in, double *out);
     void fftw_mpi_execute_r2r(fftw_plan p, double *in, double *out);

These functions have the same restrictions as those of the serial new-array execute functions. They are always safe to apply to the same in and out arrays that were used to create the plan. They can only be applied to new arrays if those arrays have the same types, dimensions, in-placeness, and alignment as the original arrays, where the best way to ensure the same alignment is to use FFTW's fftw_malloc and related allocation functions for all arrays (see Memory Allocation). Note that distributed transposes (see FFTW MPI Transposes) use fftw_mpi_execute_r2r, since they count as rank-zero r2r plans from FFTW's perspective.
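
As an illustration (a sketch continuing the example above, not text from the manual), the plan could be re-applied to a second array allocated with the same local size, which guarantees matching dimensions and alignment; because the original plan was in-place, the new transform must be in-place as well:

     /* hypothetical continuation: execute the same plan on a new array,
        before the plan is destroyed */
     fftw_complex *data2 = fftw_alloc_complex(alloc_local);
     /* ... fill the local slab of data2 ... */
     fftw_mpi_execute_dft(plan, data2, data2);   /* MPI new-array execute */
     fftw_free(data2);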