An MPI program can deadlock if one process is waiting for a
message from another process that never gets sent.  To avoid deadlocks
when using FFTW's MPI routines, it is important to know which
functions are collective: that is, which functions must
always be called in the same order from every
process in a given communicator.  (MPI_Barrier is the canonical
example of a collective function in the MPI standard.)
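
For instance, here is a minimal sketch, not taken from the manual, of
the kind of mistake this rule forbids (the helper name ‘buggy’ is our
own): MPI_Barrier is collective, so calling it from only some of the
processes in a communicator deadlocks the program.

#include <mpi.h>

/* Deadlocks: rank 0 enters the barrier, but the other ranks never
   call MPI_Barrier at all, so rank 0 waits forever. */
void buggy(MPI_Comm comm)
{
    int rank;
    MPI_Comm_rank(comm, &rank);
    if (rank == 0)
        MPI_Barrier(comm);
}
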
The functions in FFTW that are always collective are: every
function beginning with ‘fftw_mpi_plan’, as well as
fftw_mpi_broadcast_wisdom and fftw_mpi_gather_wisdom.
Also, the following functions from the ordinary FFTW interface are
collective when they are applied to a plan created by an
‘fftw_mpi_plan’ function: fftw_execute,
fftw_destroy_plan, and fftw_flops.
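
As an illustration, the following is a minimal sketch, not from the
manual, in which every process makes the collective calls in the same
order, so no deadlock can occur.  (The 128 by 128 transform size, the
in-place FFTW_FORWARD/FFTW_ESTIMATE plan, and the all-zeros
initialization are arbitrary choices for the sake of the example.)

#include <complex.h>
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    const ptrdiff_t N0 = 128, N1 = 128;
    fftw_plan plan;
    fftw_complex *data;
    ptrdiff_t alloc_local, local_n0, local_0_start, i;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* get the local slab size and allocate the local array */
    alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                         &local_n0, &local_0_start);
    data = fftw_alloc_complex(alloc_local);

    /* collective: every process in MPI_COMM_WORLD must call the
       planner, in the same order if several plans are created */
    plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                FFTW_FORWARD, FFTW_ESTIMATE);

    for (i = 0; i < local_n0 * N1; ++i)
        data[i] = 0.0; /* placeholder initialization */

    /* also collective, because plan came from an fftw_mpi_plan
       function */
    fftw_execute(plan);

    /* collective too: destroying the plan on only some of the
       processes could deadlock */
    fftw_destroy_plan(plan);

    fftw_free(data);
    fftw_mpi_cleanup();
    MPI_Finalize();
    return 0;
}

Note that fftw_execute and fftw_destroy_plan are collective only for
plans created by the ‘fftw_mpi_plan’ functions; for ordinary serial
plans, each process may call them independently.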