6.8 FFTW MPI Wisdom


FFTW’s “wisdom” facility (see Words of Wisdom-Saving Plans) can be used to save MPI plans as well as to save uniprocessor plans. However, for MPI there are several unavoidable complications.


First, the MPI standard does not guarantee that every process can perform file I/O (at least, not using C stdio routines)—in general, we may only assume that process 0 is capable of I/O.7 So, if we want to export the wisdom from a single process to a file, we must first export the wisdom to a string, then send it to process 0, then write it to a file.
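
For illustration only, such a manual procedure might be sketched as follows. This is not part of FFTW’s API: the helper name ship_wisdom_to_rank0, the message tags, and the assumption that the usual C headers plus <mpi.h> and <fftw3-mpi.h> are included are all ours, and in practice fftw_mpi_gather_wisdom (described below) takes care of this for you.

/* Hypothetical helper: process src serializes its wisdom to a string,
   sends it to process 0, and process 0 writes it to a file. */
void ship_wisdom_to_rank0(int src, const char *filename)
{
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (src == 0) {            /* trivial case: process 0 has the wisdom */
        if (rank == 0) fftw_export_wisdom_to_filename(filename);
        return;
    }
    if (rank == src) {
        char *s = fftw_export_wisdom_to_string();  /* we must free() this */
        int len = (int) strlen(s) + 1;
        MPI_Send(&len, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        MPI_Send(s, len, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
        free(s);
    }
    else if (rank == 0) {
        int len;
        MPI_Recv(&len, 1, MPI_INT, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        char *s = malloc(len);
        MPI_Recv(s, len, MPI_CHAR, src, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        FILE *f = fopen(filename, "w");
        if (f) { fputs(s, f); fclose(f); }
        free(s);
    }
}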


Second, in principle we may want to have separate wisdom for every process, since in general the processes may run on different hardware even for a single MPI program. However, in practice FFTW’s MPI code is designed for the case of homogeneous hardware (see Load balancing), and in this case it is convenient to use the same wisdom for every process. Thus, we need a mechanism to synchronize the wisdom.


To address both of these problems, FFTW provides the following two functions:

void fftw_mpi_broadcast_wisdom(MPI_Comm comm);
void fftw_mpi_gather_wisdom(MPI_Comm comm);

Given a communicator comm, fftw_mpi_broadcast_wisdom will broadcast the wisdom from process 0 to all other processes. Conversely, fftw_mpi_gather_wisdom will collect wisdom from all processes onto process 0. (If the plans created for the same problem by different processes are not the same, fftw_mpi_gather_wisdom will arbitrarily choose one of the plans.) Both of these functions may result in suboptimal plans for different processes if the processes are running on non-identical hardware. Both of these functions are collective calls, which means that they must be executed by all processes in the communicator.


So, for example, a typical code snippet to import wisdom from a file and use it on all processes would be:

{
    int rank;

    fftw_mpi_init();
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) fftw_import_wisdom_from_filename("mywisdom");
    fftw_mpi_broadcast_wisdom(MPI_COMM_WORLD);
}

(Note that we must call fftw_mpi_init before importing any wisdom that might contain MPI plans.) Similarly, a typical code snippet to export wisdom from all processes to a file is:

{
    int rank;

    fftw_mpi_gather_wisdom(MPI_COMM_WORLD);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) fftw_export_wisdom_to_filename("mywisdom");
}

Footnotes


(7)


In fact, even this assumption is not technically guaranteed by the standard, although it seems to be universal in actual MPI implementations and is widely assumed by MPI-using software. Technically, you need to query the MPI_IO attribute of MPI_COMM_WORLD with MPI_Attr_get. If this attribute is MPI_PROC_NULL, no I/O is possible. If it is MPI_ANY_SOURCE, any process can perform I/O. Otherwise, it is the rank of a process that can perform I/O ... but since it is not guaranteed to yield the same rank on all processes, you have to do an MPI_Allreduce of some kind if you want all processes to agree about which is going to do I/O. And even then, the standard only guarantees that this process can perform output, but not input. See e.g. Parallel Programming with MPI by P. S. Pacheco, section 8.1.3. Needless to say, in our experience virtually no MPI programmers worry about this.
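
For reference, such a query might be sketched as below. This is not from the FFTW manual: the helper name choose_io_rank is invented here, and the use of MPI_MAX as the agreement policy is an arbitrary choice. (MPI_Comm_get_attr is the modern name for the deprecated MPI_Attr_get.)

/* Hypothetical helper: return a rank that can perform I/O, or -1 if none.
   Assumes <mpi.h> is included. */
int choose_io_rank(void)
{
    void *attr;
    int flag, rank, mine = -1, agreed;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_IO, &attr, &flag);
    if (flag) {
        int io = *(int *) attr;
        if (io == MPI_ANY_SOURCE)
            mine = rank;             /* this process can perform I/O */
        else if (io != MPI_PROC_NULL)
            mine = io;               /* some specific rank can perform I/O */
    }
    /* The attribute need not agree across processes, so pick one
       candidate that every process will accept (here, the largest). */
    MPI_Allreduce(&mine, &agreed, 1, MPI_INT, MPI_MAX, MPI_COMM_WORLD);
    return agreed;
}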

