FFTW’s “wisdom” facility (see Words of Wisdom-Saving Plans) can be used to save MPI plans as well as to save uniprocessor plans. However, for MPI there are several unavoidable complications.
First, the MPI standard does not guarantee that every process can perform file I/O (at least, not using C stdio routines); in general, we may only assume that process 0 is capable of I/O.[7] So, if we want to export the wisdom from a single process to a file, we must first export the wisdom to a string, then send it to process 0, then write it to a file.
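For concreteness, here is a minimal sketch of that export-to-a-string-and-forward pattern. The helper name write_wisdom_via_process0 and the message tags are our own invention, not part of FFTW; in practice, fftw_mpi_gather_wisdom (introduced below) handles the general case for you.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>
#include <fftw3.h>

/* Hypothetical helper: write the wisdom held by process src to a file,
   routing it through process 0 (the only process assumed to do I/O). */
void write_wisdom_via_process0(int src, const char *filename, MPI_Comm comm)
{
    int rank;
    MPI_Comm_rank(comm, &rank);
    if (rank == src) {
        char *s = fftw_export_wisdom_to_string(); /* malloc’ed by FFTW */
        int len = (int) strlen(s) + 1;            /* include the NUL */
        if (src == 0) {                           /* already on process 0 */
            FILE *f = fopen(filename, "w");
            if (f) { fputs(s, f); fclose(f); }
        } else {
            MPI_Send(&len, 1, MPI_INT, 0, 0, comm);
            MPI_Send(s, len, MPI_CHAR, 0, 1, comm);
        }
        free(s);
    } else if (rank == 0) {
        int len;
        MPI_Recv(&len, 1, MPI_INT, src, 0, comm, MPI_STATUS_IGNORE);
        char *s = malloc(len);
        MPI_Recv(s, len, MPI_CHAR, src, 1, comm, MPI_STATUS_IGNORE);
        FILE *f = fopen(filename, "w");
        if (f) { fputs(s, f); fclose(f); }
        free(s);
    }
}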
Second, in principle we may want to have separate wisdom for every process, since in general the processes may run on different hardware even for a single MPI program. However, in practice FFTW’s MPI code is designed for the case of homogeneous hardware (see Load balancing), and in this case it is convenient to use the same wisdom for every process. Thus, we need a mechanism to synchronize the wisdom.
To address both of these problems, FFTW provides the following two functions:
void fftw_mpi_broadcast_wisdom(MPI_Comm comm);
void fftw_mpi_gather_wisdom(MPI_Comm comm);
Given a communicator comm, fftw_mpi_broadcast_wisdom will broadcast the wisdom from process 0 to all other processes. Conversely, fftw_mpi_gather_wisdom will collect wisdom from all processes onto process 0. (If the plans created for the same problem by different processes are not the same, fftw_mpi_gather_wisdom will arbitrarily choose one of the plans.) Both of these functions may result in suboptimal plans for different processes if the processes are running on non-identical hardware. Both of these functions are collective calls, which means that they must be executed by all processes in the communicator.
So, for example, a typical code snippet to import wisdom from a file and use it on all processes would be:
{
    int rank;

    fftw_mpi_init();
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) fftw_import_wisdom_from_filename("mywisdom");
    fftw_mpi_broadcast_wisdom(MPI_COMM_WORLD);
}
(Note that we must call fftw_mpi_init before importing any wisdom that might contain MPI plans.) Similarly, a typical code snippet to export wisdom from all processes to a file is:
{
    int rank;

    fftw_mpi_gather_wisdom(MPI_COMM_WORLD);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) fftw_export_wisdom_to_filename("mywisdom");
}
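Putting the two snippets together, a complete program might import wisdom at startup and save the accumulated wisdom on exit. The following is only a sketch: the transform size, the FFTW_MEASURE flag, and the file name mywisdom are arbitrary choices for illustration.

#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    const ptrdiff_t N0 = 256, N1 = 256;   /* arbitrary transform size */
    ptrdiff_t alloc_local, local_n0, local_0_start;
    fftw_complex *data;
    fftw_plan plan;
    int rank;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Import wisdom on process 0 and share it with all processes. */
    if (rank == 0) fftw_import_wisdom_from_filename("mywisdom");
    fftw_mpi_broadcast_wisdom(MPI_COMM_WORLD);

    alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                         &local_n0, &local_0_start);
    data = fftw_alloc_complex(alloc_local);
    plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                FFTW_FORWARD, FFTW_MEASURE);

    /* ... initialize data, fftw_execute(plan), etc. ... */

    /* Collect everyone’s wisdom and write it out from process 0. */
    fftw_mpi_gather_wisdom(MPI_COMM_WORLD);
    if (rank == 0) fftw_export_wisdom_to_filename("mywisdom");

    fftw_destroy_plan(plan);
    fftw_free(data);
    MPI_Finalize();
    return 0;
}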
[7] In fact, even this assumption is not technically guaranteed by the standard, although it seems to be universal in actual MPI implementations and is widely assumed by MPI-using software. Technically, you need to query the MPI_IO attribute of MPI_COMM_WORLD with MPI_Attr_get. If this attribute is MPI_PROC_NULL, no I/O is possible. If it is MPI_ANY_SOURCE, any process can perform I/O. Otherwise, it is the rank of a process that can perform I/O ... but since it is not guaranteed to yield the same rank on all processes, you have to do an MPI_Allreduce of some kind if you want all processes to agree about which is going to do I/O. And even then, the standard only guarantees that this process can perform output, but not input. See e.g. Parallel Programming with MPI by P. S. Pacheco, section 8.1.3. Needless to say, in our experience virtually no MPI programmers worry about this.
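For the truly cautious, a sketch of this query-and-agree dance follows. The helper name agree_on_io_rank and the INT_MAX sentinel are our own; MPI_Attr_get is the MPI-1 spelling used above (MPI_Comm_get_attr is its modern equivalent).

#include <limits.h>
#include <mpi.h>

/* Hypothetical helper: return a rank that all processes agree can
   perform I/O, or -1 if no such rank is known. */
int agree_on_io_rank(MPI_Comm comm)
{
    int rank, flag, candidate, io_rank;
    int *attr;

    MPI_Comm_rank(comm, &rank);
    MPI_Attr_get(MPI_COMM_WORLD, MPI_IO, &attr, &flag);

    if (!flag || *attr == MPI_PROC_NULL)
        candidate = INT_MAX;       /* this process knows of no I/O rank */
    else if (*attr == MPI_ANY_SOURCE)
        candidate = rank;          /* any process will do; offer our own */
    else
        candidate = *attr;         /* a specific rank that can do I/O */

    /* Take the minimum so that every process computes the same rank. */
    MPI_Allreduce(&candidate, &io_rank, 1, MPI_INT, MPI_MIN, comm);
    return (io_rank == INT_MAX) ? -1 : io_rank;
}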