6.9 Avoiding MPI Deadlocks


An MPI program can deadlock if one process is waiting for a message from another process that never gets sent. To avoid deadlocks when using FFTW's MPI routines, it is important to know which functions are collective: that is, which functions must always be called in the same order from every process in a given communicator. (MPI_Barrier is the canonical example of a collective function in the MPI standard.)

The functions in FFTW that are always collective are: every function beginning with `fftw_mpi_plan', as well as fftw_mpi_broadcast_wisdom and fftw_mpi_gather_wisdom. Also, the following functions from the ordinary FFTW interface are collective when they are applied to a plan created by an `fftw_mpi_plan' function: fftw_execute, fftw_destroy_plan, and fftw_flops.
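To make the rule concrete, here is a minimal sketch of a program that obeys it. The transform size, the use of fftw_mpi_plan_dft_2d, and the surrounding setup calls are illustrative assumptions (drawn from the MPI interface described elsewhere in this manual, which may differ in detail in this alpha release), not part of this section. The point to notice is that every collective call appears unconditionally, in the same order on every process; the commented-out rank-guarded call is the pattern that deadlocks.

     /* Sketch: same sequence of collective FFTW calls on every process. */
     #include <fftw3-mpi.h>

     int main(int argc, char **argv)
     {
         const ptrdiff_t N0 = 64, N1 = 64;   /* hypothetical transform size */
         ptrdiff_t alloc_local, local_n0, local_0_start, i;
         fftw_complex *data;
         fftw_plan plan;

         MPI_Init(&argc, &argv);
         fftw_mpi_init();

         /* Find this process's share of the data, and allocate it. */
         alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                              &local_n0, &local_0_start);
         data = fftw_malloc(sizeof(fftw_complex) * alloc_local);
         for (i = 0; i < local_n0 * N1; ++i)
             data[i][0] = data[i][1] = 0.0;   /* dummy initialization */

         /* Collective: every process must create the plan, even a
            process that happens to hold no local data. */
         plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                     FFTW_FORWARD, FFTW_ESTIMATE);

         /* WRONG: if (rank == 0) fftw_execute(plan);
            ...would deadlock, because fftw_execute is collective for
            MPI plans and the other processes never reach the call. */
         fftw_execute(plan);        /* RIGHT: called by all processes */

         fftw_destroy_plan(plan);   /* also collective for MPI plans */
         fftw_free(data);
         MPI_Finalize();
         return 0;
     }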