FFTW 3.3.8: Using MPI Plans

6.12.3 Using MPI Plans


Once an MPI plan is created, you can execute and destroy it using fftw_execute, fftw_destroy_plan, and the other functions in the serial interface that operate on generic plans (see Using Plans).


The fftw_execute and fftw_destroy_plan functions, applied to MPI plans, are collective calls: they must be called for all processes in the communicator that was used to create the plan.
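As a minimal sketch of this collective workflow (not taken from the manual; the 2d transform size N0 × N1 is illustrative), every rank makes the same planning, execution, and destruction calls on its local portion of the distributed array:

```c
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    const ptrdiff_t N0 = 128, N1 = 128;  /* illustrative global size */
    ptrdiff_t alloc_local, local_n0, local_0_start;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* Each rank asks how much of the array it owns locally. */
    alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                         &local_n0, &local_0_start);
    fftw_complex *data = fftw_alloc_complex(alloc_local);

    /* Collective: every process in MPI_COMM_WORLD must call this. */
    fftw_plan plan = fftw_mpi_plan_dft_2d(N0, N1, data, data,
                                          MPI_COMM_WORLD,
                                          FFTW_FORWARD, FFTW_ESTIMATE);

    /* ... initialize the local_n0 x N1 local slab of data ... */

    fftw_execute(plan);       /* collective call */
    fftw_destroy_plan(plan);  /* collective call */

    fftw_free(data);
    MPI_Finalize();
    return 0;
}
```

Note that the serial fftw_execute and fftw_destroy_plan are used here unchanged; only the planning step is MPI-specific.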


You must not use the serial new-array plan-execution functions fftw_execute_dft and so on (see New-array Execute Functions) with MPI plans. Such functions are specialized to the problem type, and there are specific new-array execute functions for MPI plans:

void fftw_mpi_execute_dft(fftw_plan p, fftw_complex *in, fftw_complex *out);
void fftw_mpi_execute_dft_r2c(fftw_plan p, double *in, fftw_complex *out);
void fftw_mpi_execute_dft_c2r(fftw_plan p, fftw_complex *in, double *out);
void fftw_mpi_execute_r2r(fftw_plan p, double *in, double *out);

These functions have the same restrictions as those of the serial new-array execute functions. They are always safe to apply to the same in and out arrays that were used to create the plan. They can only be applied to new arrays if those arrays have the same types, dimensions, in-placeness, and alignment as the original arrays, where the best way to ensure the same alignment is to use FFTW’s fftw_malloc and related allocation functions for all arrays (see Memory Allocation). Note that distributed transposes (see FFTW MPI Transposes) use fftw_mpi_execute_r2r, since they count as rank-zero r2r plans from FFTW’s perspective.
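A hypothetical sketch of applying an existing MPI plan to a fresh pair of arrays (the function name and parameters here are illustrative): the plan is assumed to have been created out-of-place for a complex DFT whose local allocation size, alloc_local, came from the fftw_mpi_local_size call used at planning time. Allocating with fftw_alloc_complex gives the new arrays the same alignment guarantee as the originals.

```c
#include <fftw3-mpi.h>

/* Illustrative helper: transform new arrays with an existing MPI plan.
 * Assumes plan was created out-of-place, so passing distinct in2/out2
 * matches the plan's in-placeness. */
void transform_new_arrays(fftw_plan plan, ptrdiff_t alloc_local)
{
    /* Same type, size, and (via fftw_alloc_complex) alignment as the
     * arrays the plan was created with. */
    fftw_complex *in2  = fftw_alloc_complex(alloc_local);
    fftw_complex *out2 = fftw_alloc_complex(alloc_local);

    /* ... fill in2 with this process's local data ... */

    /* Collective: all processes in the plan's communicator call this. */
    fftw_mpi_execute_dft(plan, in2, out2);

    fftw_free(in2);
    fftw_free(out2);
}
```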
