An MPI program can deadlock if one process is waiting for a
message from another process that never gets sent. To avoid deadlocks
when using FFTW's MPI routines, it is important to know which
functions are collective: that is, which functions must
always be called in the same order from every
process in a given communicator. (MPI_Barrier is the
canonical example of a collective function in the MPI standard.)
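
To make the failure mode concrete, here is a deliberately broken
sketch (the transform sizes and variable names are invented for
illustration): rank 0 creates its two plans in the opposite order from
every other rank, so the collective planner calls are mismatched
across the communicator and the program can hang.

     #include <fftw3-mpi.h>

     /* DELIBERATELY BROKEN: the mismatched collective calls below
        can deadlock when run with 2 or more processes. */
     int main(int argc, char **argv)
     {
         int rank;
         ptrdiff_t na0, sa0, nb0, sb0;

         MPI_Init(&argc, &argv);
         fftw_mpi_init();
         MPI_Comm_rank(MPI_COMM_WORLD, &rank);

         fftw_complex *a = fftw_alloc_complex(
             fftw_mpi_local_size_2d(64, 64, MPI_COMM_WORLD, &na0, &sa0));
         fftw_complex *b = fftw_alloc_complex(
             fftw_mpi_local_size_2d(128, 128, MPI_COMM_WORLD, &nb0, &sb0));
         fftw_plan pa, pb;

         if (rank == 0) { /* rank 0 plans a before b... */
             pa = fftw_mpi_plan_dft_2d(64, 64, a, a, MPI_COMM_WORLD,
                                       FFTW_FORWARD, FFTW_ESTIMATE);
             pb = fftw_mpi_plan_dft_2d(128, 128, b, b, MPI_COMM_WORLD,
                                       FFTW_FORWARD, FFTW_ESTIMATE);
         } else {         /* ...but every other rank plans b before a,
                             so the collective calls are mismatched. */
             pb = fftw_mpi_plan_dft_2d(128, 128, b, b, MPI_COMM_WORLD,
                                       FFTW_FORWARD, FFTW_ESTIMATE);
             pa = fftw_mpi_plan_dft_2d(64, 64, a, a, MPI_COMM_WORLD,
                                       FFTW_FORWARD, FFTW_ESTIMATE);
         }

         /* May never be reached with more than one process. */
         fftw_destroy_plan(pa); fftw_destroy_plan(pb);
         fftw_free(a); fftw_free(b);
         fftw_mpi_cleanup();
         MPI_Finalize();
         return 0;
     }
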
The functions in FFTW that are always collective are: every
function beginning with ‘fftw_mpi_plan’, as well as
fftw_mpi_broadcast_wisdom and fftw_mpi_gather_wisdom.
Also, the following functions from the ordinary FFTW interface are
collective when they are applied to a plan created by an
‘fftw_mpi_plan’ function: fftw_execute,
fftw_destroy_plan, and fftw_flops.