6.9 Avoiding MPI Deadlocks


An MPI program can deadlock if one process is waiting for a message from another process that never gets sent. To avoid deadlocks when using FFTW’s MPI routines, it is important to know which functions are collective: that is, which functions must always be called in the same order from every process in a given communicator. (MPI_Barrier is the canonical example of a collective function in the MPI standard.)
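
For instance, the following minimal sketch (a hypothetical program, not taken from FFTW) deadlocks because only rank 0 reaches the collective MPI_Barrier call; the other processes never enter the barrier, so rank 0 waits forever. The same failure occurs if a collective FFTW routine is accidentally guarded by a rank test.

/* Hypothetical example of a deadlock: a collective call made by only one rank. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        MPI_Barrier(MPI_COMM_WORLD);  /* rank 0 blocks here forever:
                                         the other ranks never reach the barrier */

    MPI_Finalize();
    return 0;
}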


The functions in FFTW that are always collective are: every function beginning with ‘fftw_mpi_plan’, as well as fftw_mpi_broadcast_wisdom and fftw_mpi_gather_wisdom. Also, the following functions from the ordinary FFTW interface are collective when they are applied to a plan created by an ‘fftw_mpi_plan’ function: fftw_execute, fftw_destroy_plan, and fftw_flops.
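
For example, the following sketch (illustrative only; the 256 × 256 in-place transform and the FFTW_ESTIMATE flag are arbitrary assumptions, not requirements) shows every process making the same collective calls in the same order, which is the pattern required to avoid deadlock:

/* Sketch of deadlock-free usage: every rank makes the same collective
   FFTW MPI calls (planning, execution, destruction) in the same order. */
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    const ptrdiff_t N0 = 256, N1 = 256;   /* assumed global transform size */
    fftw_plan plan;
    fftw_complex *data;
    ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* Every rank computes the size of its local slab of the data. */
    alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                         &local_n0, &local_0_start);
    data = fftw_alloc_complex(alloc_local);

    /* Collective: every rank creates the plan, with the same arguments
       and in the same order relative to other collective calls. */
    plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                FFTW_FORWARD, FFTW_ESTIMATE);

    /* Initialize the local portion of the data on every rank
       (the values here are placeholders). */
    for (i = 0; i < local_n0; ++i)
        for (j = 0; j < N1; ++j) {
            data[i*N1 + j][0] = 1.0;
            data[i*N1 + j][1] = 0.0;
        }

    fftw_execute(plan);       /* collective: called by every rank */
    fftw_destroy_plan(plan);  /* collective for MPI plans: called by every rank */

    fftw_free(data);
    fftw_mpi_cleanup();
    MPI_Finalize();
    return 0;
}

Such a program would typically be compiled with an MPI compiler wrapper (e.g. mpicc) and linked against both libfftw3_mpi and libfftw3.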
