Programs using the MPI FFTW routines should be linked with
-lfftw3_mpi -lfftw3 -lm
on Unix in double precision,
-lfftw3f_mpi -lfftw3f -lm
in single precision, and so on (see Precision). You will also need to
link with whatever library is responsible for MPI on your system; in
most MPI implementations, there is a special compiler alias named
mpicc to compile and link MPI code.

Before calling any FFTW routines except possibly
fftw_init_threads (see Combining MPI and Threads), but after calling
MPI_Init, you should call the function:

void fftw_mpi_init(void);
If, at the end of your program, you want to get rid of all memory and
other resources allocated internally by FFTW, for both the serial and
MPI routines, you can call:

void fftw_mpi_cleanup(void);

which is much like the fftw_cleanup() function except that it also
gets rid of FFTW's MPI-related data. You must not execute any
previously created plans after calling this function.