FFTW 3.3.5: Using MPI Plans

6.12.3 Using MPI Plans


Once an MPI plan is created, you can execute and destroy it using fftw_execute, fftw_destroy_plan, and the other functions in the serial interface that operate on generic plans (see Using Plans).


The fftw_execute and fftw_destroy_plan functions, applied to MPI plans, are collective calls: they must be called for all processes in the communicator that was used to create the plan.
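For concreteness, the usual lifecycle might look like the following sketch. The 2d problem size, the FFTW_ESTIMATE flag, and the use of MPI_COMM_WORLD are illustrative choices, not requirements; the essential point is that every rank in the communicator makes the planner, execute, and destroy calls together.

```c
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    const ptrdiff_t N0 = 128, N1 = 128;   /* illustrative grid size */
    ptrdiff_t local_n0, local_0_start;

    /* Find out how much of the array lives on this process. */
    ptrdiff_t alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                                   &local_n0, &local_0_start);
    fftw_complex *data = fftw_alloc_complex(alloc_local);

    /* Collective: every rank in MPI_COMM_WORLD must make this call. */
    fftw_plan p = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                       FFTW_FORWARD, FFTW_ESTIMATE);

    /* ... fill the local portion of data ... */

    fftw_execute(p);        /* also collective */
    fftw_destroy_plan(p);   /* also collective */

    fftw_free(data);
    fftw_mpi_cleanup();
    MPI_Finalize();
    return 0;
}
```

Compile with the FFTW MPI library (e.g. linking -lfftw3_mpi -lfftw3) and run under your MPI launcher.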


You must not use the serial new-array plan-execution functions fftw_execute_dft and so on (see New-array Execute Functions) with MPI plans. Such functions are specialized to the problem type, and there are specific new-array execute functions for MPI plans:

void fftw_mpi_execute_dft(fftw_plan p, fftw_complex *in, fftw_complex *out);
void fftw_mpi_execute_dft_r2c(fftw_plan p, double *in, fftw_complex *out);
void fftw_mpi_execute_dft_c2r(fftw_plan p, fftw_complex *in, double *out);
void fftw_mpi_execute_r2r(fftw_plan p, double *in, double *out);

These functions have the same restrictions as those of the serial new-array execute functions. They are always safe to apply to the same in and out arrays that were used to create the plan. They can only be applied to new arrays if those arrays have the same types, dimensions, in-placeness, and alignment as the original arrays, where the best way to ensure the same alignment is to use FFTW’s fftw_malloc and related allocation functions for all arrays (see Memory Allocation). Note that distributed transposes (see FFTW MPI Transposes) use fftw_mpi_execute_r2r, since they count as rank-zero r2r plans from FFTW’s perspective.
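As a sketch of new-array execution under these restrictions (problem size and flags again illustrative): plan an in-place transform on one array, then apply the same plan to a second array of the same local size. Allocating both arrays with fftw_alloc_complex is what makes the alignment requirement easy to satisfy.

```c
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    const ptrdiff_t N0 = 64, N1 = 64;   /* illustrative size */
    ptrdiff_t local_n0, local_0_start;
    ptrdiff_t alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                                   &local_n0, &local_0_start);

    /* Plan an in-place transform on one array... */
    fftw_complex *a = fftw_alloc_complex(alloc_local);
    fftw_plan p = fftw_mpi_plan_dft_2d(N0, N1, a, a, MPI_COMM_WORLD,
                                       FFTW_FORWARD, FFTW_ESTIMATE);

    /* ...then apply it to a second array: same type, same dimensions,
       same in-placeness, and (thanks to fftw_alloc_complex) the same
       alignment as the array used at planning time. */
    fftw_complex *b = fftw_alloc_complex(alloc_local);
    /* ... fill the local portion of b ... */
    fftw_mpi_execute_dft(p, b, b);      /* collective, like fftw_execute */

    fftw_free(b);
    fftw_destroy_plan(p);
    fftw_free(a);
    fftw_mpi_cleanup();
    MPI_Finalize();
    return 0;
}
```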

