Programs using the MPI FFTW routines should be linked with
-lfftw3_mpi -lfftw3 -lm
on Unix in double precision,
-lfftw3f_mpi -lfftw3f -lm
in single precision, and so on (see Precision). You will also need to
link with whatever library is responsible for MPI on your system; in
most MPI implementations, there is a special compiler alias named
mpicc to compile and link MPI code.
Before calling any FFTW routines except possibly
fftw_init_threads (see Combining MPI and Threads), but after calling
MPI_Init, you should call the function:

void fftw_mpi_init(void);
If, at the end of your program, you want to get rid of all memory and
other resources allocated internally by FFTW, for both the serial and
MPI routines, you can call:

void fftw_mpi_cleanup(void);

which is much like the fftw_cleanup() function except that it also
gets rid of FFTW's MPI-related data. You must not execute any
previously created plans after calling this function.
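Putting these calls together, a minimal program skeleton might look like the following (a sketch only; the plan creation and transform execution in the middle are omitted):

```c
#include <mpi.h>
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);   /* MPI must be initialized first */
    fftw_mpi_init();          /* then initialize the MPI FFTW routines */

    /* ... create plans, execute transforms, destroy plans ... */

    fftw_mpi_cleanup();       /* release all FFTW-internal resources;
                                 no previously created plan may be
                                 executed after this call */
    MPI_Finalize();
    return 0;
}
```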