



6.8 FFTW MPI Wisdom


FFTW's “wisdom” facility (see Words of Wisdom-Saving Plans) can be used to save MPI plans as well as to save uniprocessor plans. However, for MPI there are several unavoidable complications.

First, the MPI standard does not guarantee that every process can perform file I/O (at least, not using C stdio routines)—in general, we may only assume that process 0 is capable of I/O.[1] So, if we want to export the wisdom from a single process to a file, we must first export the wisdom to a string, then send it to process 0, then write it to a file.
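
For concreteness, the following fragment sketches those three steps done by hand (the fftw_mpi_gather_wisdom function introduced below packages essentially this pattern for you). The choice of process 1 as the sender, the message tags, the helper name, and the file name "mywisdom" are merely illustrative:

     /* Sketch only: export one process's wisdom as a string, ship it to
        process 0, and let process 0 write the file. */
     #include <stdio.h>
     #include <stdlib.h>
     #include <string.h>
     #include <mpi.h>
     #include <fftw3.h>
     
     void save_wisdom_of_process_1(void)   /* hypothetical helper */
     {
         int rank;
         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
         if (rank == 1) {          /* the process whose wisdom we want */
             char *w = fftw_export_wisdom_to_string();
             int len = (int) strlen(w) + 1;        /* include the '\0' */
             MPI_Send(&len, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
             MPI_Send(w, len, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
             free(w);              /* caller must free the wisdom string */
         }
         else if (rank == 0) {     /* only process 0 is assumed to do I/O */
             int len;
             MPI_Recv(&len, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                      MPI_STATUS_IGNORE);
             char *w = malloc(len);
             MPI_Recv(w, len, MPI_CHAR, 1, 1, MPI_COMM_WORLD,
                      MPI_STATUS_IGNORE);
             FILE *f = fopen("mywisdom", "w");
             fputs(w, f);
             fclose(f);
             free(w);
         }
     }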

Second, in principle we may want to have separate wisdom for every process, since in general the processes may run on different hardware even for a single MPI program. However, in practice FFTW's MPI code is designed for the case of homogeneous hardware (see Load balancing), and in this case it is convenient to use the same wisdom for every process. Thus, we need a mechanism to synchronize the wisdom.

To address both of these problems, FFTW provides the following two functions:

     void fftw_mpi_broadcast_wisdom(MPI_Comm comm);
     void fftw_mpi_gather_wisdom(MPI_Comm comm);

Given a communicator comm, fftw_mpi_broadcast_wisdom will broadcast the wisdom from process 0 to all other processes. Conversely, fftw_mpi_gather_wisdom will collect wisdom from all processes onto process 0. (If the plans created for the same problem by different processes are not the same, fftw_mpi_gather_wisdom will arbitrarily choose one of the plans.) Both of these functions may result in suboptimal plans for different processes if the processes are running on non-identical hardware. Both of these functions are collective calls, which means that they must be executed by all processes in the communicator.

So, for example, a typical code snippet to import wisdom from a file and use it on all processes would be:

     {
         int rank;
     
         fftw_mpi_init();
         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
         if (rank == 0) fftw_import_wisdom_from_filename("mywisdom");
         fftw_mpi_broadcast_wisdom(MPI_COMM_WORLD);
     }

(Note that we must call fftw_mpi_init before importing any wisdom that might contain MPI plans.) Similarly, a typical code snippet to export wisdom from all processes to a file is:

     {
         int rank;
     
         fftw_mpi_gather_wisdom(MPI_COMM_WORLD);
         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
         if (rank == 0) fftw_export_wisdom_to_filename("mywisdom");
     }
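
Putting the two snippets together, a complete (if schematic) program that reads the wisdom file at startup and updates it before exiting might look like the following sketch; the transform itself is elided, and the file name "mywisdom" is, as above, only an example:

     #include <mpi.h>
     #include <fftw3-mpi.h>
     
     int main(int argc, char **argv)
     {
         int rank;
     
         MPI_Init(&argc, &argv);
         fftw_mpi_init();      /* must precede importing any MPI wisdom */
     
         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
         if (rank == 0) fftw_import_wisdom_from_filename("mywisdom");
         fftw_mpi_broadcast_wisdom(MPI_COMM_WORLD);
     
         /* ... create plans, execute transforms, destroy plans ... */
     
         fftw_mpi_gather_wisdom(MPI_COMM_WORLD);
         if (rank == 0) fftw_export_wisdom_to_filename("mywisdom");
     
         MPI_Finalize();
         return 0;
     }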

Footnotes

[1] In fact, even this assumption is not technically guaranteed by the standard, although it seems to be universal in actual MPI implementations and is widely assumed by MPI-using software. Technically, you need to query the MPI_IO attribute of MPI_COMM_WORLD with MPI_Attr_get. If this attribute is MPI_PROC_NULL, no I/O is possible. If it is MPI_ANY_SOURCE, any process can perform I/O. Otherwise, it is the rank of a process that can perform I/O ... but since it is not guaranteed to yield the same rank on all processes, you have to do an MPI_Allreduce of some kind if you want all processes to agree about which is going to do I/O. And even then, the standard only guarantees that this process can perform output, but not input. See e.g. Parallel Programming with MPI by P. S. Pacheco, section 8.1.3. Needless to say, in our experience virtually no MPI programmers worry about this.
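
For the rare program that does worry about this, one possible (illustrative, not FFTW-specific) way to agree on an I/O-capable rank is sketched below; MPI_Comm_get_attr is the non-deprecated spelling of MPI_Attr_get, and the helper name and the use of MPI_MIN to break ties are our own choices:

     /* Sketch: agree on a rank that can perform I/O, as described above. */
     #include <mpi.h>
     
     int agree_on_io_rank(void)   /* returns a rank, or MPI_PROC_NULL */
     {
         int rank, size, flag, candidate, io_rank;
         int *attr;   /* MPI hands back a pointer to the attribute value */
     
         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
         MPI_Comm_size(MPI_COMM_WORLD, &size);
         MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_IO, &attr, &flag);
     
         if (!flag || *attr == MPI_PROC_NULL)
             candidate = size;     /* sentinel: this process knows of none */
         else if (*attr == MPI_ANY_SOURCE)
             candidate = rank;     /* this process itself can perform I/O */
         else
             candidate = *attr;    /* a specific rank can perform I/O */
     
         /* Minimum over all processes, so everyone picks the same rank. */
         MPI_Allreduce(&candidate, &io_rank, 1, MPI_INT, MPI_MIN,
                       MPI_COMM_WORLD);
         return (io_rank < size) ? io_rank : MPI_PROC_NULL;
     }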
