Programs using the MPI FFTW routines should be linked with
-lfftw3_mpi -lfftw3 -lm
on Unix in double precision,
-lfftw3f_mpi -lfftw3f -lm
in single precision, and so on (see Precision). You will also need to
link with whatever library is responsible for MPI on your system; in
most MPI implementations, there is a special compiler alias named
mpicc to compile and link MPI code.
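
For example, assuming your MPI implementation provides the mpicc alias
and your source file is named my_mpi_program.c (a hypothetical name), a
double-precision program might be built with a command along these lines:

  mpicc my_mpi_program.c -o my_mpi_program -lfftw3_mpi -lfftw3 -lm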
Before calling any FFTW routines except possibly fftw_init_threads
(see Combining MPI and Threads), but after calling MPI_Init, you
should call the function:

  void fftw_mpi_init(void);
If, at the end of your program, you want to get rid of all memory and
other resources allocated internally by FFTW, for both the serial and
MPI routines, you can call:

  void fftw_mpi_cleanup(void);

which is much like the fftw_cleanup() function except that it
also gets rid of FFTW’s MPI-related data. You must not execute
any previously created plans after calling this function.
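
As a rough sketch of how these calls fit together (the transforms
themselves and any error handling are omitted, and the program
structure here is an illustration rather than an excerpt from the
manual), a minimal MPI FFTW program might be ordered like this:

  #include <mpi.h>
  #include <fftw3-mpi.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);   /* initialize MPI before any FFTW MPI call */
      fftw_mpi_init();          /* initialize FFTW's MPI routines */

      /* ... create plans, execute transforms, destroy plans ... */

      fftw_mpi_cleanup();       /* release FFTW-internal resources; no
                                   previously created plan may be executed
                                   after this point */
      MPI_Finalize();
      return 0;
  }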