@node Distributed-memory FFTW with MPI, Calling FFTW from Modern Fortran, Multi-threaded FFTW, Top
@chapter Distributed-memory FFTW with MPI
@cindex MPI

@cindex parallel transform
In this chapter we document the parallel FFTW routines for parallel
systems supporting the MPI message-passing interface.  Unlike the
shared-memory threads described in the previous chapter, MPI allows
you to use @emph{distributed-memory} parallelism, where each CPU has
its own separate memory, and which can scale up to clusters of many
thousands of processors.  This capability comes at a price, however:
each process only stores a @emph{portion} of the data to be
transformed, which means that the data structures and
programming interface are quite different from the serial or threads
versions of FFTW.
@cindex data distribution


Distributed-memory parallelism is especially useful when you are
transforming arrays so large that they do not fit into the memory of a
single processor.  The per-process storage required by FFTW's MPI
routines is proportional to the total array size divided by the number
of processes.  Conversely, distributed-memory parallelism can easily
pose an unacceptably high communications overhead for small problems;
the threshold problem size for which parallelism becomes advantageous
will depend on the precise problem you are interested in, your
hardware, and your MPI implementation.

A note on terminology: in MPI, you divide the data among a set of
``processes'' which each run in their own memory address space.
Generally, each process runs on a different physical processor, but
this is not required.  A set of processes in MPI is described by an
opaque data structure called a ``communicator,'' the most common of
which is the predefined communicator @code{MPI_COMM_WORLD} which
refers to @emph{all} processes.  For more information on these and
other concepts common to all MPI programs, we refer the reader to the
documentation at @uref{http://www.mcs.anl.gov/research/projects/mpi/, the MPI home page}.
@cindex MPI communicator
@ctindex MPI_COMM_WORLD


We assume in this chapter that the reader is familiar with the usage
of the serial (uniprocessor) FFTW, and focus only on the concepts new
to the MPI interface.
@menu
* FFTW MPI Installation::
* Linking and Initializing MPI FFTW::
* 2d MPI example::
* MPI Data Distribution::
* Multi-dimensional MPI DFTs of Real Data::
* Other Multi-dimensional Real-data MPI Transforms::
* FFTW MPI Transposes::
* FFTW MPI Wisdom::
* Avoiding MPI Deadlocks::
* FFTW MPI Performance Tips::
* Combining MPI and Threads::
* FFTW MPI Reference::
* FFTW MPI Fortran Interface::
@end menu

@c ------------------------------------------------------------
@node FFTW MPI Installation, Linking and Initializing MPI FFTW, Distributed-memory FFTW with MPI, Distributed-memory FFTW with MPI
@section FFTW MPI Installation

All of the FFTW MPI code is located in the @code{mpi} subdirectory of
the FFTW package.  On Unix systems, the FFTW MPI libraries and header
files are automatically configured, compiled, and installed along with
the uniprocessor FFTW libraries simply by including
@code{--enable-mpi} in the flags to the @code{configure} script
(@pxref{Installation on Unix}).
@fpindex configure


Any implementation of the MPI standard, version 1 or later, should
work with FFTW.  The @code{configure} script will attempt to
automatically detect how to compile and link code using your MPI
implementation.  In some cases, especially if you have multiple
different MPI implementations installed or have an unusual MPI
software package, you may need to provide this information explicitly.

Most commonly, one compiles MPI code by invoking a special compiler
command, typically @code{mpicc} for C code.  The @code{configure}
script knows the most common names for this command, but you can
specify the MPI compilation command explicitly by setting the
@code{MPICC} variable, as in @samp{./configure MPICC=mpicc ...}.
@fpindex mpicc


If, instead of a special compiler command, you need to link a certain
library, you can specify the link command via the @code{MPILIBS}
variable, as in @samp{./configure MPILIBS=-lmpi ...}.  Note that if
your MPI library is installed in a non-standard location (one the
compiler does not know about by default), you may also have to specify
the location of the library and header files via the @code{LDFLAGS} and
@code{CPPFLAGS} variables, respectively, as in @samp{./configure
LDFLAGS=-L/path/to/mpi/libs CPPFLAGS=-I/path/to/mpi/include ...}.

@c ------------------------------------------------------------
@node Linking and Initializing MPI FFTW, 2d MPI example, FFTW MPI Installation, Distributed-memory FFTW with MPI
@section Linking and Initializing MPI FFTW

Programs using the MPI FFTW routines should be linked with
@code{-lfftw3_mpi -lfftw3 -lm} on Unix in double precision,
@code{-lfftw3f_mpi -lfftw3f -lm} in single precision, and so on
(@pxref{Precision}).  You will also need to link with whatever library
is responsible for MPI on your system; in most MPI implementations,
there is a special compiler alias named @code{mpicc} to compile and
link MPI code.
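For example, assuming a hypothetical source file @code{myprog.c}, a
double-precision program would typically be compiled and linked with a
command along the lines of @samp{mpicc myprog.c -o myprog -lfftw3_mpi
-lfftw3 -lm}.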
@fpindex mpicc
@cindex linking on Unix
@cindex precision


@findex fftw_init_threads
Before calling any FFTW routines except possibly
@code{fftw_init_threads} (@pxref{Combining MPI and Threads}), but after calling
@code{MPI_Init}, you should call the function:

@example
void fftw_mpi_init(void);
@end example
@findex fftw_mpi_init

If, at the end of your program, you want to get rid of all memory and
other resources allocated internally by FFTW, for both the serial and
MPI routines, you can call:

@example
void fftw_mpi_cleanup(void);
@end example
@findex fftw_mpi_cleanup

which is much like the @code{fftw_cleanup()} function except that it
also gets rid of FFTW's MPI-related data.  You must @emph{not} execute
any previously created plans after calling this function.

@c ------------------------------------------------------------
@node 2d MPI example, MPI Data Distribution, Linking and Initializing MPI FFTW, Distributed-memory FFTW with MPI
@section 2d MPI example

Before we document the FFTW MPI interface in detail, we begin with a
simple example outlining how one would perform a two-dimensional
@code{N0} by @code{N1} complex DFT.

@example
#include <fftw3-mpi.h>

int main(int argc, char **argv)
@{
    const ptrdiff_t N0 = ..., N1 = ...;
    fftw_plan plan;
    fftw_complex *data;
    ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* @r{get local data size and allocate} */
    alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                         &local_n0, &local_0_start);
    data = fftw_alloc_complex(alloc_local);

    /* @r{create plan for in-place forward DFT} */
    plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                FFTW_FORWARD, FFTW_ESTIMATE);

    /* @r{initialize data to some function} my_function(x,y) */
    for (i = 0; i < local_n0; ++i) for (j = 0; j < N1; ++j)
       data[i*N1 + j] = my_function(local_0_start + i, j);

    /* @r{compute transforms, in-place, as many times as desired} */
    fftw_execute(plan);

    fftw_destroy_plan(plan);

    MPI_Finalize();
@}
@end example

As can be seen above, the MPI interface follows the same basic style
of allocate/plan/execute/destroy as the serial FFTW routines.  All of
the MPI-specific routines are prefixed with @samp{fftw_mpi_} instead
of @samp{fftw_}.  There are a few important differences, however:

First, we must call @code{fftw_mpi_init()} after calling
@code{MPI_Init} (required in all MPI programs) and before calling any
other @samp{fftw_mpi_} routine.
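Schematically, the required ordering of these calls in a complete
program is the following (a minimal sketch; the plan creation,
execution, and destruction steps are elided):

@example
MPI_Init(&argc, &argv);
fftw_mpi_init();

/* @r{... create, execute, and destroy plans ...} */

fftw_mpi_cleanup();  /* @r{optional: release FFTW's internal data} */
MPI_Finalize();
@end example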
@findex MPI_Init
@findex fftw_mpi_init


Second, when we create the plan with @code{fftw_mpi_plan_dft_2d},
analogous to @code{fftw_plan_dft_2d}, we pass an additional argument:
the communicator, indicating which processes will participate in the
transform (here @code{MPI_COMM_WORLD}, indicating all processes).
Whenever you create, execute, or destroy a plan for an MPI transform,
you must call the corresponding FFTW routine on @emph{all} processes
in the communicator for that transform.  (That is, these are
@emph{collective} calls.)  Note that the plan for the MPI transform
uses the standard @code{fftw_execute} and @code{fftw_destroy_plan} routines
(on the other hand, there are MPI-specific new-array execute functions
documented below).
@cindex collective function
@findex fftw_mpi_plan_dft_2d
@ctindex MPI_COMM_WORLD


Third, all of the FFTW MPI routines take @code{ptrdiff_t} arguments
instead of @code{int} as for the serial FFTW.  @code{ptrdiff_t} is a
standard C integer type which is (at least) 32 bits wide on a 32-bit
machine and 64 bits wide on a 64-bit machine.  This is to make it easy
to specify very large parallel transforms on a 64-bit machine.  (You
can specify 64-bit transform sizes in the serial FFTW, too, but only
by using the @samp{guru64} planner interface.  @xref{64-bit Guru
Interface}.)
@tindex ptrdiff_t
@cindex 64-bit architecture


Fourth, and most importantly, you don't allocate the entire
two-dimensional array on each process.  Instead, you call
@code{fftw_mpi_local_size_2d} to find out what @emph{portion} of the
array resides on each processor, and how much space to allocate.
Here, the portion of the array on each process is a @code{local_n0} by
@code{N1} slice of the total array, starting at index
@code{local_0_start}.  The total number of @code{fftw_complex} numbers
to allocate is given by the @code{alloc_local} return value, which
@emph{may} be greater than @code{local_n0 * N1} (in case some
intermediate calculations require additional storage).  The data
distribution in FFTW's MPI interface is described in more detail by
the next section.
@findex fftw_mpi_local_size_2d
@cindex data distribution


Given the portion of the array that resides on the local process, it
is straightforward to initialize the data (here to a function
@code{my_function}) and otherwise manipulate it.  Of course, at the end
of the program you may want to output the data somehow, but
synchronizing this output is up to you and is beyond the scope of this
manual.  (One good way to output a large multi-dimensional distributed
array in MPI to a portable binary file is to use the free HDF5
library; see the @uref{http://www.hdfgroup.org/, HDF home page}.)
@cindex HDF5
@cindex MPI I/O

@c ------------------------------------------------------------
@node MPI Data Distribution, Multi-dimensional MPI DFTs of Real Data, 2d MPI example, Distributed-memory FFTW with MPI
@section MPI Data Distribution
@cindex data distribution

The most important concept to understand in using FFTW's MPI interface
is the data distribution.  With a serial or multithreaded FFT, all of
the inputs and outputs are stored as a single contiguous chunk of
memory.  With a distributed-memory FFT, the inputs and outputs are
broken into disjoint blocks, one per process.

In particular, FFTW uses a @emph{1d block distribution} of the data,
distributed along the @emph{first dimension}.  For example, if you
want to perform a @twodims{100,200} complex DFT, distributed over 4
processes, each process will get a @twodims{25,200} slice of the data.
That is, process 0 will get rows 0 through 24, process 1 will get rows
25 through 49, process 2 will get rows 50 through 74, and process 3
will get rows 75 through 99.  If you take the same array but
distribute it over 3 processes, then it is not evenly divisible so the
different processes will have unequal chunks.  FFTW's default choice
in this case is to assign 34 rows to processes 0 and 1, and 32 rows to
process 2.
@cindex block distribution


FFTW provides several @samp{fftw_mpi_local_size} routines that you can
call to find out what portion of an array is stored on the current
process.  In most cases, you should use the default block sizes picked
by FFTW, but it is also possible to specify your own block size.  For
example, with a @twodims{100,200} array on three processes, you can
tell FFTW to use a block size of 40, which would assign 40 rows to
processes 0 and 1, and 20 rows to process 2.  FFTW's default is to
divide the data equally among the processes if possible, and as best
it can otherwise.  The rows are always assigned in ``rank order,''
i.e. process 0 gets the first block of rows, then process 1, and so
on.  (You can change this by using @code{MPI_Comm_split} to create a
new communicator with re-ordered processes.)  However, you should
always call the @samp{fftw_mpi_local_size} routines, if possible,
rather than trying to predict FFTW's distribution choices.

In particular, it is critical that you allocate the storage size that
is returned by @samp{fftw_mpi_local_size}, which is @emph{not}
necessarily the size of the local slice of the array.  The reason is
that intermediate steps of FFTW's algorithms involve transposing the
array and redistributing the data, so at these intermediate steps FFTW
may require more local storage space (albeit always proportional to
the total size divided by the number of processes).  The
@samp{fftw_mpi_local_size} functions know how much storage is required
for these intermediate steps and tell you the correct amount to
allocate.
@menu
* Basic and advanced distribution interfaces::
* Load balancing::
* Transposed distributions::
* One-dimensional distributions::
@end menu

@node Basic and advanced distribution interfaces, Load balancing, MPI Data Distribution, MPI Data Distribution
@subsection Basic and advanced distribution interfaces

As with the planner interface, the @samp{fftw_mpi_local_size}
distribution interface is broken into basic and advanced
(@samp{_many}) interfaces, where the latter allows you to specify the
block size manually and also to request block sizes when computing
multiple transforms simultaneously.  These functions are documented
more exhaustively by the FFTW MPI Reference, but we summarize the
basic ideas here using a couple of two-dimensional examples.

For the @twodims{100,200} complex-DFT example, above, we would find
the distribution by calling the following function in the basic
interface:

@example
ptrdiff_t fftw_mpi_local_size_2d(ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
                                 ptrdiff_t *local_n0, ptrdiff_t *local_0_start);
@end example
@findex fftw_mpi_local_size_2d

Given the total size of the data to be transformed (here, @code{n0 =
100} and @code{n1 = 200}) and an MPI communicator (@code{comm}), this
function provides three numbers.

First, it describes the shape of the local data: the current process
should store a @code{local_n0} by @code{n1} slice of the overall
dataset, in row-major order (@code{n1} dimension contiguous), starting
at index @code{local_0_start}.  That is, if the total dataset is
viewed as a @code{n0} by @code{n1} matrix, the current process should
store the rows @code{local_0_start} to
@code{local_0_start+local_n0-1}.  Obviously, if you are running with
only a single MPI process, that process will store the entire array:
@code{local_0_start} will be zero and @code{local_n0} will be
@code{n0}.  @xref{Row-major Format}.
@cindex row-major


Second, the return value is the total number of data elements (e.g.,
complex numbers for a complex DFT) that should be allocated for the
input and output arrays on the current process (ideally with
@code{fftw_malloc} or an @samp{fftw_alloc} function, to ensure optimal
alignment).  It might seem that this should always be equal to
@code{local_n0 * n1}, but this is @emph{not} the case.  FFTW's
distributed FFT algorithms require data redistributions at
intermediate stages of the transform, and in some circumstances this
may require slightly larger local storage.  This is discussed in more
detail below, under @ref{Load balancing}.
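As a concrete illustration, here is a minimal sketch (with @code{n0}
and @code{n1} standing for the total dimensions) of querying the local
portion on each process and allocating it; note that we allocate the
returned value, @emph{not} @code{local_n0 * n1}:

@example
ptrdiff_t alloc_local, local_n0, local_0_start;
int rank;
fftw_complex *data;

alloc_local = fftw_mpi_local_size_2d(n0, n1, MPI_COMM_WORLD,
                                     &local_n0, &local_0_start);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
printf("process %d: rows %ld to %ld\n", rank,
       (long) local_0_start, (long) (local_0_start + local_n0 - 1));

data = fftw_alloc_complex(alloc_local);  /* @r{may exceed} local_n0 * n1 */
@end example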
@findex fftw_malloc
@findex fftw_alloc_complex


@cindex advanced interface
The advanced-interface @samp{local_size} function for multidimensional
transforms returns the same three things (@code{local_n0},
@code{local_0_start}, and the total number of elements to allocate),
but takes more inputs:

@example
ptrdiff_t fftw_mpi_local_size_many(int rnk, const ptrdiff_t *n,
                                   ptrdiff_t howmany,
                                   ptrdiff_t block0,
                                   MPI_Comm comm,
                                   ptrdiff_t *local_n0,
                                   ptrdiff_t *local_0_start);
@end example
@findex fftw_mpi_local_size_many

The two-dimensional case above corresponds to @code{rnk = 2} and an
array @code{n} of length 2 with @code{n[0] = n0} and @code{n[1] = n1}.
This routine is for any @code{rnk > 1}; one-dimensional transforms
have their own interface because they work slightly differently, as
discussed below.

First, the advanced interface allows you to perform multiple
transforms at once, of interleaved data, as specified by the
@code{howmany} parameter.  (@code{howmany} is 1 for a single
transform.)

Second, here you can specify your desired block size in the @code{n0}
dimension, @code{block0}.  To use FFTW's default block size, pass
@code{FFTW_MPI_DEFAULT_BLOCK} (0) for @code{block0}.  Otherwise, on
@code{P} processes, FFTW will return @code{local_n0} equal to
@code{block0} on the first @code{P / block0} processes (rounded down),
return @code{local_n0} equal to @code{n0 - block0 * (P / block0)} on
the next process, and @code{local_n0} equal to zero on any remaining
processes.  In general, we recommend using the default block size
(which corresponds to @code{n0 / P}, rounded up).
@ctindex FFTW_MPI_DEFAULT_BLOCK
@cindex block distribution


For example, suppose you have @code{P = 4} processes and @code{n0 =
21}.  The default will be a block size of @code{6}, which will give
@code{local_n0 = 6} on the first three processes and @code{local_n0 =
3} on the last process.  Instead, however, you could specify
@code{block0 = 5} if you wanted, which would give @code{local_n0 = 5}
on processes 0 to 2, @code{local_n0 = 6} on process 3.  (This choice,
while it may look superficially more ``balanced,'' has the same
critical path as FFTW's default but requires more communications.)

@node Load balancing, Transposed distributions, Basic and advanced distribution interfaces, MPI Data Distribution
@subsection Load balancing
@cindex load balancing

Ideally, when you parallelize a transform over some @math{P}
processes, each process should end up with work that takes equal time.
Otherwise, all of the processes end up waiting on whichever process is
slowest.  This goal is known as ``load balancing.''  In this section,
we describe the circumstances under which FFTW is able to load-balance
well, and in particular how you should choose your transform size in
order to load balance.
Load balancing is especially difficult when you are parallelizing over
heterogeneous machines; for example, if one of your processors is an
old 486 and another is a Pentium IV, obviously you should give the
Pentium more work to do than the 486 since the latter is much slower.
FFTW does not deal with this problem, however---it assumes that your
processes run on hardware of comparable speed, and that the goal is
therefore to divide the problem as equally as possible.

For a multi-dimensional complex DFT, FFTW can divide the problem
equally among the processes if: (i) the @emph{first} dimension
@code{n0} is divisible by @math{P}; and (ii), the @emph{product} of
the subsequent dimensions is divisible by @math{P}.  (For the advanced
interface, where you can specify multiple simultaneous transforms via
some ``vector'' length @code{howmany}, a factor of @code{howmany} is
included in the product of the subsequent dimensions.)

For a one-dimensional complex DFT, the length @code{N} of the data
should be divisible by @math{P} @emph{squared} to be able to divide
the problem equally among the processes.

@node Transposed distributions, One-dimensional distributions, Load balancing, MPI Data Distribution
@subsection Transposed distributions

Internally, FFTW's MPI transform algorithms work by first computing
transforms of the data local to each process, then by globally
@emph{transposing} the data in some fashion to redistribute the data
among the processes, transforming the new data local to each process,
and transposing back.  For example, a two-dimensional @code{n0} by
@code{n1} array, distributed across the @code{n0} dimension, is
transformed by: (i) transforming the @code{n1} dimension, which is
local to each process; (ii) transposing to an @code{n1} by @code{n0}
array, distributed across the @code{n1} dimension; (iii) transforming
the @code{n0} dimension, which is now local to each process; (iv)
transposing back.
@cindex transpose


However, in many applications it is acceptable to compute a
multidimensional DFT whose results are produced in transposed order
(e.g., @code{n1} by @code{n0} in two dimensions).  This provides a
significant performance advantage, because it means that the final
transposition step can be omitted.  FFTW supports this optimization,
which you specify by passing the flag @code{FFTW_MPI_TRANSPOSED_OUT}
to the planner routines.  To compute the inverse transform of
transposed output, you specify @code{FFTW_MPI_TRANSPOSED_IN} to tell
it that the input is transposed.  In this section, we explain how to
interpret the output format of such a transform.
@ctindex FFTW_MPI_TRANSPOSED_OUT
@ctindex FFTW_MPI_TRANSPOSED_IN


Suppose you are transforming multi-dimensional data with (at
least two) dimensions @ndims{}.  As always, it is distributed along
the first dimension @dimk{0}.
Now, if we compute its DFT with the Chris@42: @code{FFTW_MPI_TRANSPOSED_OUT} flag, the resulting output data are stored Chris@42: with the first @emph{two} dimensions transposed: @ndimstrans{}, Chris@42: distributed along the @dimk{1} dimension. Conversely, if we take the Chris@42: @ndimstrans{} data and transform it with the Chris@42: @code{FFTW_MPI_TRANSPOSED_IN} flag, then the format goes back to the Chris@42: original @ndims{} array. Chris@42: Chris@42: There are two ways to find the portion of the transposed array that Chris@42: resides on the current process. First, you can simply call the Chris@42: appropriate @samp{local_size} function, passing @ndimstrans{} (the Chris@42: transposed dimensions). This would mean calling the @samp{local_size} Chris@42: function twice, once for the transposed and once for the Chris@42: non-transposed dimensions. Alternatively, you can call one of the Chris@42: @samp{local_size_transposed} functions, which returns both the Chris@42: non-transposed and transposed data distribution from a single call. Chris@42: For example, for a 3d transform with transposed output (or input), you Chris@42: might call: Chris@42: Chris@42: @example Chris@42: ptrdiff_t fftw_mpi_local_size_3d_transposed( Chris@42: ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2, MPI_Comm comm, Chris@42: ptrdiff_t *local_n0, ptrdiff_t *local_0_start, Chris@42: ptrdiff_t *local_n1, ptrdiff_t *local_1_start); Chris@42: @end example Chris@42: @findex fftw_mpi_local_size_3d_transposed Chris@42: Chris@42: Here, @code{local_n0} and @code{local_0_start} give the size and Chris@42: starting index of the @code{n0} dimension for the Chris@42: @emph{non}-transposed data, as in the previous sections. For Chris@42: @emph{transposed} data (e.g. the output for Chris@42: @code{FFTW_MPI_TRANSPOSED_OUT}), @code{local_n1} and Chris@42: @code{local_1_start} give the size and starting index of the @code{n1} Chris@42: dimension, which is the first dimension of the transposed data Chris@42: (@code{n1} by @code{n0} by @code{n2}). Chris@42: Chris@42: (Note that @code{FFTW_MPI_TRANSPOSED_IN} is completely equivalent to Chris@42: performing @code{FFTW_MPI_TRANSPOSED_OUT} and passing the first two Chris@42: dimensions to the planner in reverse order, or vice versa. If you Chris@42: pass @emph{both} the @code{FFTW_MPI_TRANSPOSED_IN} and Chris@42: @code{FFTW_MPI_TRANSPOSED_OUT} flags, it is equivalent to swapping the Chris@42: first two dimensions passed to the planner and passing @emph{neither} Chris@42: flag.) Chris@42: Chris@42: @node One-dimensional distributions, , Transposed distributions, MPI Data Distribution Chris@42: @subsection One-dimensional distributions Chris@42: Chris@42: For one-dimensional distributed DFTs using FFTW, matters are slightly Chris@42: more complicated because the data distribution is more closely tied to Chris@42: how the algorithm works. In particular, you can no longer pass an Chris@42: arbitrary block size and must accept FFTW's default; also, the block Chris@42: sizes may be different for input and output. Also, the data Chris@42: distribution depends on the flags and transform direction, in order Chris@42: for forward and backward transforms to work correctly. 
@example
ptrdiff_t fftw_mpi_local_size_1d(ptrdiff_t n0, MPI_Comm comm,
                int sign, unsigned flags,
                ptrdiff_t *local_ni, ptrdiff_t *local_i_start,
                ptrdiff_t *local_no, ptrdiff_t *local_o_start);
@end example
@findex fftw_mpi_local_size_1d

This function computes the data distribution for a 1d transform of
size @code{n0} with the given transform @code{sign} and @code{flags}.
Both input and output data use block distributions.  The input on the
current process will consist of @code{local_ni} numbers starting at
index @code{local_i_start}; e.g. if only a single process is used,
then @code{local_ni} will be @code{n0} and @code{local_i_start} will
be @code{0}.  Similarly for the output, with @code{local_no} numbers
starting at index @code{local_o_start}.  The return value of
@code{fftw_mpi_local_size_1d} will be the total number of elements to
allocate on the current process (which might be slightly larger than
the local size due to intermediate steps in the algorithm).

As mentioned above (@pxref{Load balancing}), the data will be divided
equally among the processes if @code{n0} is divisible by the
@emph{square} of the number of processes.  In this case,
@code{local_ni} will equal @code{local_no}.  Otherwise, they may be
different.

For some applications, such as convolutions, the order of the output
data is irrelevant.  In this case, performance can be improved by
specifying that the output data be stored in an FFTW-defined
``scrambled'' format.  (In particular, this is the analogue of
transposed output in the multidimensional case: scrambled output saves
a communications step.)  If you pass @code{FFTW_MPI_SCRAMBLED_OUT} in
the flags, then the output is stored in this (undocumented) scrambled
order.  Conversely, to perform the inverse transform of data in
scrambled order, pass the @code{FFTW_MPI_SCRAMBLED_IN} flag.
@ctindex FFTW_MPI_SCRAMBLED_OUT
@ctindex FFTW_MPI_SCRAMBLED_IN


In MPI FFTW, only composite sizes @code{n0} can be parallelized; we
have not yet implemented a parallel algorithm for large prime sizes.

@c ------------------------------------------------------------
@node Multi-dimensional MPI DFTs of Real Data, Other Multi-dimensional Real-data MPI Transforms, MPI Data Distribution, Distributed-memory FFTW with MPI
@section Multi-dimensional MPI DFTs of Real Data

FFTW's MPI interface also supports multi-dimensional DFTs of real
data, similar to the serial r2c and c2r interfaces.  (Parallel
one-dimensional real-data DFTs are not currently supported; you must
use a complex transform and set the imaginary parts of the inputs to
zero.)

The key points to understand for r2c and c2r MPI transforms (compared
to the MPI complex DFTs or the serial r2c/c2r transforms), are:

@itemize @bullet

@item
Just as for serial transforms, r2c/c2r DFTs transform @ndims{} real
data to/from @ndimshalf{} complex data: the last dimension of the
complex data is cut in half (rounded down), plus one.
As for the Chris@42: serial transforms, the sizes you pass to the @samp{plan_dft_r2c} and Chris@42: @samp{plan_dft_c2r} are the @ndims{} dimensions of the real data. Chris@42: Chris@42: @item Chris@42: @cindex padding Chris@42: Although the real data is @emph{conceptually} @ndims{}, it is Chris@42: @emph{physically} stored as an @ndimspad{} array, where the last Chris@42: dimension has been @emph{padded} to make it the same size as the Chris@42: complex output. This is much like the in-place serial r2c/c2r Chris@42: interface (@pxref{Multi-Dimensional DFTs of Real Data}), except that Chris@42: in MPI the padding is required even for out-of-place data. The extra Chris@42: padding numbers are ignored by FFTW (they are @emph{not} like Chris@42: zero-padding the transform to a larger size); they are only used to Chris@42: determine the data layout. Chris@42: Chris@42: @item Chris@42: @cindex data distribution Chris@42: The data distribution in MPI for @emph{both} the real and complex data Chris@42: is determined by the shape of the @emph{complex} data. That is, you Chris@42: call the appropriate @samp{local size} function for the @ndimshalf{} Chris@42: complex data, and then use the @emph{same} distribution for the real Chris@42: data except that the last complex dimension is replaced by a (padded) Chris@42: real dimension of twice the length. Chris@42: Chris@42: @end itemize Chris@42: Chris@42: For example suppose we are performing an out-of-place r2c transform of Chris@42: @threedims{L,M,N} real data [padded to @threedims{L,M,2(N/2+1)}], Chris@42: resulting in @threedims{L,M,N/2+1} complex data. Similar to the Chris@42: example in @ref{2d MPI example}, we might do something like: Chris@42: Chris@42: @example Chris@42: #include Chris@42: Chris@42: int main(int argc, char **argv) Chris@42: @{ Chris@42: const ptrdiff_t L = ..., M = ..., N = ...; Chris@42: fftw_plan plan; Chris@42: double *rin; Chris@42: fftw_complex *cout; Chris@42: ptrdiff_t alloc_local, local_n0, local_0_start, i, j, k; Chris@42: Chris@42: MPI_Init(&argc, &argv); Chris@42: fftw_mpi_init(); Chris@42: Chris@42: /* @r{get local data size and allocate} */ Chris@42: alloc_local = fftw_mpi_local_size_3d(L, M, N/2+1, MPI_COMM_WORLD, Chris@42: &local_n0, &local_0_start); Chris@42: rin = fftw_alloc_real(2 * alloc_local); Chris@42: cout = fftw_alloc_complex(alloc_local); Chris@42: Chris@42: /* @r{create plan for out-of-place r2c DFT} */ Chris@42: plan = fftw_mpi_plan_dft_r2c_3d(L, M, N, rin, cout, MPI_COMM_WORLD, Chris@42: FFTW_MEASURE); Chris@42: Chris@42: /* @r{initialize rin to some function} my_func(x,y,z) */ Chris@42: for (i = 0; i < local_n0; ++i) Chris@42: for (j = 0; j < M; ++j) Chris@42: for (k = 0; k < N; ++k) Chris@42: rin[(i*M + j) * (2*(N/2+1)) + k] = my_func(local_0_start+i, j, k); Chris@42: Chris@42: /* @r{compute transforms as many times as desired} */ Chris@42: fftw_execute(plan); Chris@42: Chris@42: fftw_destroy_plan(plan); Chris@42: Chris@42: MPI_Finalize(); Chris@42: @} Chris@42: @end example Chris@42: Chris@42: @findex fftw_alloc_real Chris@42: @cindex row-major Chris@42: Note that we allocated @code{rin} using @code{fftw_alloc_real} with an Chris@42: argument of @code{2 * alloc_local}: since @code{alloc_local} is the Chris@42: number of @emph{complex} values to allocate, the number of @emph{real} Chris@42: values is twice as many. 
The @code{rin} array is then Chris@42: @threedims{local_n0,M,2(N/2+1)} in row-major order, so its Chris@42: @code{(i,j,k)} element is at the index @code{(i*M + j) * (2*(N/2+1)) + Chris@42: k} (@pxref{Multi-dimensional Array Format }). Chris@42: Chris@42: @cindex transpose Chris@42: @ctindex FFTW_TRANSPOSED_OUT Chris@42: @ctindex FFTW_TRANSPOSED_IN Chris@42: As for the complex transforms, improved performance can be obtained by Chris@42: specifying that the output is the transpose of the input or vice versa Chris@42: (@pxref{Transposed distributions}). In our @threedims{L,M,N} r2c Chris@42: example, including @code{FFTW_TRANSPOSED_OUT} in the flags means that Chris@42: the input would be a padded @threedims{L,M,2(N/2+1)} real array Chris@42: distributed over the @code{L} dimension, while the output would be a Chris@42: @threedims{M,L,N/2+1} complex array distributed over the @code{M} Chris@42: dimension. To perform the inverse c2r transform with the same data Chris@42: distributions, you would use the @code{FFTW_TRANSPOSED_IN} flag. Chris@42: Chris@42: @c ------------------------------------------------------------ Chris@42: @node Other Multi-dimensional Real-data MPI Transforms, FFTW MPI Transposes, Multi-dimensional MPI DFTs of Real Data, Distributed-memory FFTW with MPI Chris@42: @section Other multi-dimensional Real-Data MPI Transforms Chris@42: Chris@42: @cindex r2r Chris@42: FFTW's MPI interface also supports multi-dimensional @samp{r2r} Chris@42: transforms of all kinds supported by the serial interface Chris@42: (e.g. discrete cosine and sine transforms, discrete Hartley Chris@42: transforms, etc.). Only multi-dimensional @samp{r2r} transforms, not Chris@42: one-dimensional transforms, are currently parallelized. Chris@42: Chris@42: @tindex fftw_r2r_kind Chris@42: These are used much like the multidimensional complex DFTs discussed Chris@42: above, except that the data is real rather than complex, and one needs Chris@42: to pass an r2r transform kind (@code{fftw_r2r_kind}) for each Chris@42: dimension as in the serial FFTW (@pxref{More DFTs of Real Data}). 
For example, one might perform a two-dimensional @twodims{L,M} r2r
transform that is an REDFT10 (DCT-II) in the first dimension and an
RODFT10 (DST-II) in the second dimension with code like:

@example
const ptrdiff_t L = ..., M = ...;
fftw_plan plan;
double *data;
ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

/* @r{get local data size and allocate} */
alloc_local = fftw_mpi_local_size_2d(L, M, MPI_COMM_WORLD,
                                     &local_n0, &local_0_start);
data = fftw_alloc_real(alloc_local);

/* @r{create plan for in-place REDFT10 x RODFT10} */
plan = fftw_mpi_plan_r2r_2d(L, M, data, data, MPI_COMM_WORLD,
                            FFTW_REDFT10, FFTW_RODFT10, FFTW_MEASURE);

/* @r{initialize data to some function} my_function(x,y) */
for (i = 0; i < local_n0; ++i) for (j = 0; j < M; ++j)
   data[i*M + j] = my_function(local_0_start + i, j);

/* @r{compute transforms, in-place, as many times as desired} */
fftw_execute(plan);

fftw_destroy_plan(plan);
@end example

@findex fftw_alloc_real
Notice that we use the same @samp{local_size} functions as we did for
complex data, only now we interpret the sizes in terms of real rather
than complex values, and correspondingly use @code{fftw_alloc_real}.

@c ------------------------------------------------------------
@node FFTW MPI Transposes, FFTW MPI Wisdom, Other Multi-dimensional Real-data MPI Transforms, Distributed-memory FFTW with MPI
@section FFTW MPI Transposes
@cindex transpose

FFTW's MPI Fourier transforms rely on one or more @emph{global
transposition} steps for their communications.  For example, the
multidimensional transforms work by transforming along some
dimensions, then transposing to make the first dimension local and
transforming that, then transposing back.  Because global
transposition of a block-distributed matrix has many other potential
uses besides FFTs, FFTW's transpose routines can be called directly,
as documented in this section.

@menu
* Basic distributed-transpose interface::
* Advanced distributed-transpose interface::
* An improved replacement for MPI_Alltoall::
@end menu

@node Basic distributed-transpose interface, Advanced distributed-transpose interface, FFTW MPI Transposes, FFTW MPI Transposes
@subsection Basic distributed-transpose interface

In particular, suppose that we have an @code{n0} by @code{n1} array in
row-major order, block-distributed across the @code{n0} dimension.  To
transpose this into an @code{n1} by @code{n0} array block-distributed
across the @code{n1} dimension, we would create a plan by calling the
following function:

@example
fftw_plan fftw_mpi_plan_transpose(ptrdiff_t n0, ptrdiff_t n1,
                                  double *in, double *out,
                                  MPI_Comm comm, unsigned flags);
@end example
@findex fftw_mpi_plan_transpose

The input and output arrays (@code{in} and @code{out}) can be the
same.
The transpose is actually executed by calling Chris@42: @code{fftw_execute} on the plan, as usual. Chris@42: @findex fftw_execute Chris@42: Chris@42: Chris@42: The @code{flags} are the usual FFTW planner flags, but support Chris@42: two additional flags: @code{FFTW_MPI_TRANSPOSED_OUT} and/or Chris@42: @code{FFTW_MPI_TRANSPOSED_IN}. What these flags indicate, for Chris@42: transpose plans, is that the output and/or input, respectively, are Chris@42: @emph{locally} transposed. That is, on each process input data is Chris@42: normally stored as a @code{local_n0} by @code{n1} array in row-major Chris@42: order, but for an @code{FFTW_MPI_TRANSPOSED_IN} plan the input data is Chris@42: stored as @code{n1} by @code{local_n0} in row-major order. Similarly, Chris@42: @code{FFTW_MPI_TRANSPOSED_OUT} means that the output is @code{n0} by Chris@42: @code{local_n1} instead of @code{local_n1} by @code{n0}. Chris@42: @ctindex FFTW_MPI_TRANSPOSED_OUT Chris@42: @ctindex FFTW_MPI_TRANSPOSED_IN Chris@42: Chris@42: Chris@42: To determine the local size of the array on each process before and Chris@42: after the transpose, as well as the amount of storage that must be Chris@42: allocated, one should call @code{fftw_mpi_local_size_2d_transposed}, Chris@42: just as for a 2d DFT as described in the previous section: Chris@42: @cindex data distribution Chris@42: Chris@42: @example Chris@42: ptrdiff_t fftw_mpi_local_size_2d_transposed Chris@42: (ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm, Chris@42: ptrdiff_t *local_n0, ptrdiff_t *local_0_start, Chris@42: ptrdiff_t *local_n1, ptrdiff_t *local_1_start); Chris@42: @end example Chris@42: @findex fftw_mpi_local_size_2d_transposed Chris@42: Chris@42: Again, the return value is the local storage to allocate, which in Chris@42: this case is the number of @emph{real} (@code{double}) values rather Chris@42: than complex numbers as in the previous examples. Chris@42: Chris@42: @node Advanced distributed-transpose interface, An improved replacement for MPI_Alltoall, Basic distributed-transpose interface, FFTW MPI Transposes Chris@42: @subsection Advanced distributed-transpose interface Chris@42: Chris@42: The above routines are for a transpose of a matrix of numbers (of type Chris@42: @code{double}), using FFTW's default block sizes. More generally, one Chris@42: can perform transposes of @emph{tuples} of numbers, with Chris@42: user-specified block sizes for the input and output: Chris@42: Chris@42: @example Chris@42: fftw_plan fftw_mpi_plan_many_transpose Chris@42: (ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t howmany, Chris@42: ptrdiff_t block0, ptrdiff_t block1, Chris@42: double *in, double *out, MPI_Comm comm, unsigned flags); Chris@42: @end example Chris@42: @findex fftw_mpi_plan_many_transpose Chris@42: Chris@42: In this case, one is transposing an @code{n0} by @code{n1} matrix of Chris@42: @code{howmany}-tuples (e.g. @code{howmany = 2} for complex numbers). Chris@42: The input is distributed along the @code{n0} dimension with block size Chris@42: @code{block0}, and the @code{n1} by @code{n0} output is distributed Chris@42: along the @code{n1} dimension with block size @code{block1}. If Chris@42: @code{FFTW_MPI_DEFAULT_BLOCK} (0) is passed for a block size then FFTW Chris@42: uses its default block size. To get the local size of the data on Chris@42: each process, you should then call @code{fftw_mpi_local_size_many_transposed}. 
@ctindex FFTW_MPI_DEFAULT_BLOCK
@findex fftw_mpi_local_size_many_transposed

@node An improved replacement for MPI_Alltoall, , Advanced distributed-transpose interface, FFTW MPI Transposes
@subsection An improved replacement for MPI_Alltoall

We close this section by noting that FFTW's MPI transpose routines can
be thought of as a generalization of the @code{MPI_Alltoall} function
(albeit only for floating-point types), and in some circumstances can
function as an improved replacement.
@findex MPI_Alltoall


@code{MPI_Alltoall} is defined by the MPI standard as:

@example
int MPI_Alltoall(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                 void *recvbuf, int recvcnt, MPI_Datatype recvtype,
                 MPI_Comm comm);
@end example

In particular, for @code{double*} arrays @code{in} and @code{out},
consider the call:

@example
MPI_Alltoall(in, howmany, MPI_DOUBLE, out, howmany, MPI_DOUBLE, comm);
@end example

This is completely equivalent to:

@example
MPI_Comm_size(comm, &P);
plan = fftw_mpi_plan_many_transpose(P, P, howmany, 1, 1, in, out, comm, FFTW_ESTIMATE);
fftw_execute(plan);
fftw_destroy_plan(plan);
@end example

That is, computing a @twodims{P,P} transpose on @code{P} processes,
with a block size of 1, is just a standard all-to-all communication.

However, using the FFTW routine instead of @code{MPI_Alltoall} may
have certain advantages.  First of all, FFTW's routine can operate
in-place (@code{in == out}) whereas @code{MPI_Alltoall} can only
operate out-of-place.
@cindex in-place


Second, even for out-of-place plans, FFTW's routine may be faster,
especially if you need to perform the all-to-all communication many
times and can afford to use @code{FFTW_MEASURE} or
@code{FFTW_PATIENT}.  It should certainly be no slower, not including
the time to create the plan, since one of the possible algorithms that
FFTW uses for an out-of-place transpose @emph{is} simply to call
@code{MPI_Alltoall}.  However, FFTW also considers several other
possible algorithms that, depending on your MPI implementation and
your hardware, may be faster.
@ctindex FFTW_MEASURE
@ctindex FFTW_PATIENT

@c ------------------------------------------------------------
@node FFTW MPI Wisdom, Avoiding MPI Deadlocks, FFTW MPI Transposes, Distributed-memory FFTW with MPI
@section FFTW MPI Wisdom
@cindex wisdom
@cindex saving plans to disk

FFTW's ``wisdom'' facility (@pxref{Words of Wisdom-Saving Plans}) can
be used to save MPI plans as well as to save uniprocessor plans.
However, for MPI there are several unavoidable complications.
@cindex MPI I/O
First, the MPI standard does not guarantee that every process can
perform file I/O (at least, not using C stdio routines)---in general,
we may only assume that process 0 is capable of I/O.@footnote{In fact,
even this assumption is not technically guaranteed by the standard,
although it seems to be universal in actual MPI implementations and is
widely assumed by MPI-using software.  Technically, you need to query
the @code{MPI_IO} attribute of @code{MPI_COMM_WORLD} with
@code{MPI_Attr_get}.  If this attribute is @code{MPI_PROC_NULL}, no
I/O is possible.  If it is @code{MPI_ANY_SOURCE}, any process can
perform I/O.  Otherwise, it is the rank of a process that can perform
I/O ... but since it is not guaranteed to yield the @emph{same} rank
on all processes, you have to do an @code{MPI_Allreduce} of some kind
if you want all processes to agree about which is going to do I/O.
And even then, the standard only guarantees that this process can
perform output, but not input.  See e.g. @cite{Parallel Programming
with MPI} by P. S. Pacheco, section 8.1.3.  Needless to say, in our
experience virtually no MPI programmers worry about this.}  So, if we
want to export the wisdom from a single process to a file, we must
first export the wisdom to a string, then send it to process 0, then
write it to a file.

Second, in principle we may want to have separate wisdom for every
process, since in general the processes may run on different hardware
even for a single MPI program.  However, in practice FFTW's MPI code
is designed for the case of homogeneous hardware (@pxref{Load
balancing}), and in this case it is convenient to use the same wisdom
for every process.  Thus, we need a mechanism to synchronize the wisdom.

To address both of these problems, FFTW provides the following two
functions:

@example
void fftw_mpi_broadcast_wisdom(MPI_Comm comm);
void fftw_mpi_gather_wisdom(MPI_Comm comm);
@end example
@findex fftw_mpi_gather_wisdom
@findex fftw_mpi_broadcast_wisdom

Given a communicator @code{comm}, @code{fftw_mpi_broadcast_wisdom}
will broadcast the wisdom from process 0 to all other processes.
Conversely, @code{fftw_mpi_gather_wisdom} will collect wisdom from all
processes onto process 0.  (If the plans created for the same problem
by different processes are not the same, @code{fftw_mpi_gather_wisdom}
will arbitrarily choose one of the plans.)  Both of these functions
may result in suboptimal plans for different processes if the
processes are running on non-identical hardware.  Both of these
functions are @emph{collective} calls, which means that they must be
executed by all processes in the communicator.
@cindex collective function


So, for example, a typical code snippet to import wisdom from a file
and use it on all processes would be:

@example
@{
    int rank;

    fftw_mpi_init();
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) fftw_import_wisdom_from_filename("mywisdom");
    fftw_mpi_broadcast_wisdom(MPI_COMM_WORLD);
@}
@end example

(Note that we must call @code{fftw_mpi_init} before importing any
wisdom that might contain MPI plans.)  Similarly, a typical code
snippet to export wisdom from all processes to a file is:
@findex fftw_mpi_init

@example
@{
    int rank;

    fftw_mpi_gather_wisdom(MPI_COMM_WORLD);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) fftw_export_wisdom_to_filename("mywisdom");
@}
@end example

@c ------------------------------------------------------------
@node Avoiding MPI Deadlocks, FFTW MPI Performance Tips, FFTW MPI Wisdom, Distributed-memory FFTW with MPI
@section Avoiding MPI Deadlocks
@cindex deadlock

An MPI program can @emph{deadlock} if one process is waiting for a
message from another process that never gets sent.  To avoid deadlocks
when using FFTW's MPI routines, it is important to know which
functions are @emph{collective}: that is, which functions must
@emph{always} be called in the @emph{same order} from @emph{every}
process in a given communicator.  (For example, @code{MPI_Barrier} is
the canonical example of a collective function in the MPI standard.)
@cindex collective function
@findex MPI_Barrier


The functions in FFTW that are @emph{always} collective are: every
function beginning with @samp{fftw_mpi_plan}, as well as
@code{fftw_mpi_broadcast_wisdom} and @code{fftw_mpi_gather_wisdom}.
Also, the following functions from the ordinary FFTW interface are
collective when they are applied to a plan created by an
@samp{fftw_mpi_plan} function: @code{fftw_execute},
@code{fftw_destroy_plan}, and @code{fftw_flops}.
@findex fftw_execute
@findex fftw_destroy_plan
@findex fftw_flops

@c ------------------------------------------------------------
@node FFTW MPI Performance Tips, Combining MPI and Threads, Avoiding MPI Deadlocks, Distributed-memory FFTW with MPI
@section FFTW MPI Performance Tips

In this section, we collect a few tips on getting the best performance
out of FFTW's MPI transforms.

First, because of the 1d block distribution, FFTW's parallelization is
currently limited by the size of the first dimension.
(Multidimensional block distributions may be supported by a future
version.)  More generally, you should ideally arrange the dimensions so
that FFTW can divide them equally among the processes.  @xref{Load
balancing}.
@cindex block distribution
@cindex load balancing


Second, if it is not too inconvenient, you should consider working
with transposed output for multidimensional plans, as this saves a
considerable amount of communications.  @xref{Transposed distributions}.
@cindex transpose


Third, the fastest choices are generally either an in-place transform
or an out-of-place transform with the @code{FFTW_DESTROY_INPUT} flag
(which allows the input array to be used as scratch space).  In-place
is especially beneficial if the amount of data per process is large.
@ctindex FFTW_DESTROY_INPUT


Fourth, if you have multiple arrays to transform at once, rather than
calling FFTW's MPI transforms several times it usually seems to be
faster to interleave the data and use the advanced interface.  (This
groups the communications together instead of requiring separate
messages for each transform.)

@c ------------------------------------------------------------
@node Combining MPI and Threads, FFTW MPI Reference, FFTW MPI Performance Tips, Distributed-memory FFTW with MPI
@section Combining MPI and Threads
@cindex threads

In certain cases, it may be advantageous to combine MPI
(distributed-memory) and threads (shared-memory) parallelization.
FFTW supports this, with certain caveats.  For example, if you have a
cluster of 4-processor shared-memory nodes, you may want to use
threads within the nodes and MPI between the nodes, instead of MPI for
all parallelization.

In particular, it is possible to seamlessly combine the MPI FFTW
routines with the multi-threaded FFTW routines (@pxref{Multi-threaded
FFTW}).  However, some care must be taken in the initialization code,
which should look something like this:

@example
int threads_ok;

int main(int argc, char **argv)
@{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    threads_ok = provided >= MPI_THREAD_FUNNELED;

    if (threads_ok) threads_ok = fftw_init_threads();
    fftw_mpi_init();

    ...
    if (threads_ok) fftw_plan_with_nthreads(...);
    ...

    MPI_Finalize();
@}
@end example
@findex fftw_mpi_init
@findex fftw_init_threads
@findex fftw_plan_with_nthreads

First, note that instead of calling @code{MPI_Init}, you should call
@code{MPI_Init_thread}, which is the initialization routine defined
by the MPI-2 standard to indicate to MPI that your program will be
multithreaded.  We pass @code{MPI_THREAD_FUNNELED}, which indicates
that we will only call MPI routines from the main thread.  (FFTW will
launch additional threads internally, but the extra threads will not
call MPI code.)  (You may also pass @code{MPI_THREAD_SERIALIZED} or
@code{MPI_THREAD_MULTIPLE}, which requests additional multithreading
support from the MPI implementation, but this is not required by
FFTW.)
The @code{provided} parameter returns what level of threads Chris@42: support is actually supported by your MPI implementation; this Chris@42: @emph{must} be at least @code{MPI_THREAD_FUNNELED} if you want to call Chris@42: the FFTW threads routines, so we define a global variable Chris@42: @code{threads_ok} to record this. You should only call Chris@42: @code{fftw_init_threads} or @code{fftw_plan_with_nthreads} if Chris@42: @code{threads_ok} is true. For more information on thread safety in Chris@42: MPI, see the Chris@42: @uref{http://www.mpi-forum.org/docs/mpi-20-html/node162.htm, MPI and Chris@42: Threads} section of the MPI-2 standard. Chris@42: @cindex thread safety Chris@42: Chris@42: Chris@42: Second, we must call @code{fftw_init_threads} @emph{before} Chris@42: @code{fftw_mpi_init}. This is critical for technical reasons having Chris@42: to do with how FFTW initializes its list of algorithms. Chris@42: Chris@42: Then, if you call @code{fftw_plan_with_nthreads(N)}, @emph{every} MPI Chris@42: process will launch (up to) @code{N} threads to parallelize its transforms. Chris@42: Chris@42: For example, in the hypothetical cluster of 4-processor nodes, you Chris@42: might wish to launch only a single MPI process per node, and then call Chris@42: @code{fftw_plan_with_nthreads(4)} on each process to use all Chris@42: processors in the nodes. Chris@42: Chris@42: This may or may not be faster than simply using as many MPI processes Chris@42: as you have processors, however. On the one hand, using threads Chris@42: within a node eliminates the need for explicit message passing within Chris@42: the node. On the other hand, FFTW's transpose routines are not Chris@42: multi-threaded, and this means that the communications that do take Chris@42: place will not benefit from parallelization within the node. Chris@42: Moreover, many MPI implementations already have optimizations to Chris@42: exploit shared memory when it is available, so adding the Chris@42: multithreaded FFTW on top of this may be superfluous. Chris@42: @cindex transpose Chris@42: Chris@42: @c ------------------------------------------------------------ Chris@42: @node FFTW MPI Reference, FFTW MPI Fortran Interface, Combining MPI and Threads, Distributed-memory FFTW with MPI Chris@42: @section FFTW MPI Reference Chris@42: Chris@42: This section provides a complete reference to all FFTW MPI functions, Chris@42: datatypes, and constants. See also @ref{FFTW Reference} for information Chris@42: on functions and types in common with the serial interface. Chris@42: Chris@42: @menu Chris@42: * MPI Files and Data Types:: Chris@42: * MPI Initialization:: Chris@42: * Using MPI Plans:: Chris@42: * MPI Data Distribution Functions:: Chris@42: * MPI Plan Creation:: Chris@42: * MPI Wisdom Communication:: Chris@42: @end menu Chris@42: Chris@42: @node MPI Files and Data Types, MPI Initialization, FFTW MPI Reference, FFTW MPI Reference Chris@42: @subsection MPI Files and Data Types Chris@42: Chris@42: All programs using FFTW's MPI support should include its header file: Chris@42: Chris@42: @example Chris@42: #include <fftw3-mpi.h> Chris@42: @end example Chris@42: Chris@42: Note that this header file includes the serial-FFTW @code{fftw3.h} Chris@42: header file, and also the @code{mpi.h} header file for MPI, so you Chris@42: need not include those files separately. Chris@42: Chris@42: You must also link to @emph{both} the FFTW MPI library and to the Chris@42: serial FFTW library.
On Unix, this means adding @code{-lfftw3_mpi Chris@42: -lfftw3 -lm} at the end of the link command. Chris@42: Chris@42: @cindex precision Chris@42: Different precisions are handled as in the serial interface: Chris@42: @xref{Precision}. That is, @samp{fftw_} functions become Chris@42: @code{fftwf_} (in single precision) etcetera, and the libraries become Chris@42: @code{-lfftw3f_mpi -lfftw3f -lm} etcetera on Unix. Long-double Chris@42: precision is supported in MPI, but quad precision (@samp{fftwq_}) is Chris@42: not, due to the lack of MPI support for this type. Chris@42: Chris@42: @node MPI Initialization, Using MPI Plans, MPI Files and Data Types, FFTW MPI Reference Chris@42: @subsection MPI Initialization Chris@42: Chris@42: Before calling any other FFTW MPI (@samp{fftw_mpi_}) function, and Chris@42: before importing any wisdom for MPI problems, you must call: Chris@42: Chris@42: @findex fftw_mpi_init Chris@42: @example Chris@42: void fftw_mpi_init(void); Chris@42: @end example Chris@42: Chris@42: @findex fftw_init_threads Chris@42: If FFTW threads support is used, however, @code{fftw_mpi_init} should Chris@42: be called @emph{after} @code{fftw_init_threads} (@pxref{Combining MPI Chris@42: and Threads}). Calling @code{fftw_mpi_init} additional times (before Chris@42: @code{fftw_mpi_cleanup}) has no effect. Chris@42: Chris@42: Chris@42: If you want to deallocate all persistent data and reset FFTW to the Chris@42: pristine state it was in when you started your program, you can call: Chris@42: Chris@42: @findex fftw_mpi_cleanup Chris@42: @example Chris@42: void fftw_mpi_cleanup(void); Chris@42: @end example Chris@42: Chris@42: @findex fftw_cleanup Chris@42: (This calls @code{fftw_cleanup}, so you need not call the serial Chris@42: cleanup routine too, although it is safe to do so.) After calling Chris@42: @code{fftw_mpi_cleanup}, all existing plans become undefined, and you Chris@42: should not attempt to execute or destroy them. You must call Chris@42: @code{fftw_mpi_init} again after @code{fftw_mpi_cleanup} if you want Chris@42: to resume using the MPI FFTW routines. Chris@42: Chris@42: @node Using MPI Plans, MPI Data Distribution Functions, MPI Initialization, FFTW MPI Reference Chris@42: @subsection Using MPI Plans Chris@42: Chris@42: Once an MPI plan is created, you can execute and destroy it using Chris@42: @code{fftw_execute}, @code{fftw_destroy_plan}, and the other functions Chris@42: in the serial interface that operate on generic plans (@pxref{Using Chris@42: Plans}). Chris@42: Chris@42: @cindex collective function Chris@42: @cindex MPI communicator Chris@42: The @code{fftw_execute} and @code{fftw_destroy_plan} functions, applied to Chris@42: MPI plans, are @emph{collective} calls: they must be called for all processes Chris@42: in the communicator that was used to create the plan. Chris@42: Chris@42: @cindex new-array execution Chris@42: You must @emph{not} use the serial new-array plan-execution functions Chris@42: @code{fftw_execute_dft} and so on (@pxref{New-array Execute Chris@42: Functions}) with MPI plans.
Such functions are specialized to the Chris@42: problem type, and there are specific new-array execute functions for MPI plans: Chris@42: Chris@42: @findex fftw_mpi_execute_dft Chris@42: @findex fftw_mpi_execute_dft_r2c Chris@42: @findex fftw_mpi_execute_dft_c2r Chris@42: @findex fftw_mpi_execute_r2r Chris@42: @example Chris@42: void fftw_mpi_execute_dft(fftw_plan p, fftw_complex *in, fftw_complex *out); Chris@42: void fftw_mpi_execute_dft_r2c(fftw_plan p, double *in, fftw_complex *out); Chris@42: void fftw_mpi_execute_dft_c2r(fftw_plan p, fftw_complex *in, double *out); Chris@42: void fftw_mpi_execute_r2r(fftw_plan p, double *in, double *out); Chris@42: @end example Chris@42: Chris@42: @cindex alignment Chris@42: @findex fftw_malloc Chris@42: These functions have the same restrictions as those of the serial Chris@42: new-array execute functions. They are @emph{always} safe to apply to Chris@42: the @emph{same} @code{in} and @code{out} arrays that were used to Chris@42: create the plan. They can only be applied to new arrays if those Chris@42: arrays have the same types, dimensions, in-placeness, and alignment as Chris@42: the original arrays, where the best way to ensure the same alignment Chris@42: is to use FFTW's @code{fftw_malloc} and related allocation functions Chris@42: for all arrays (@pxref{Memory Allocation}). Note that distributed Chris@42: transposes (@pxref{FFTW MPI Transposes}) use Chris@42: @code{fftw_mpi_execute_r2r}, since they count as rank-zero r2r plans Chris@42: from FFTW's perspective. Chris@42: Chris@42: @node MPI Data Distribution Functions, MPI Plan Creation, Using MPI Plans, FFTW MPI Reference Chris@42: @subsection MPI Data Distribution Functions Chris@42: Chris@42: @cindex data distribution Chris@42: As described above (@pxref{MPI Data Distribution}), in order to Chris@42: allocate your arrays, @emph{before} creating a plan, you must first Chris@42: call one of the following routines to determine the required Chris@42: allocation size and the portion of the array locally stored on a given Chris@42: process. The @code{MPI_Comm} communicator passed here must be Chris@42: equivalent to the communicator used below for plan creation.
Chris@42: Chris@42: The basic interface for multidimensional transforms consists of the Chris@42: functions: Chris@42: Chris@42: @findex fftw_mpi_local_size_2d Chris@42: @findex fftw_mpi_local_size_3d Chris@42: @findex fftw_mpi_local_size Chris@42: @findex fftw_mpi_local_size_2d_transposed Chris@42: @findex fftw_mpi_local_size_3d_transposed Chris@42: @findex fftw_mpi_local_size_transposed Chris@42: @example Chris@42: ptrdiff_t fftw_mpi_local_size_2d(ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm, Chris@42: ptrdiff_t *local_n0, ptrdiff_t *local_0_start); Chris@42: ptrdiff_t fftw_mpi_local_size_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2, Chris@42: MPI_Comm comm, Chris@42: ptrdiff_t *local_n0, ptrdiff_t *local_0_start); Chris@42: ptrdiff_t fftw_mpi_local_size(int rnk, const ptrdiff_t *n, MPI_Comm comm, Chris@42: ptrdiff_t *local_n0, ptrdiff_t *local_0_start); Chris@42: Chris@42: ptrdiff_t fftw_mpi_local_size_2d_transposed(ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm, Chris@42: ptrdiff_t *local_n0, ptrdiff_t *local_0_start, Chris@42: ptrdiff_t *local_n1, ptrdiff_t *local_1_start); Chris@42: ptrdiff_t fftw_mpi_local_size_3d_transposed(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2, Chris@42: MPI_Comm comm, Chris@42: ptrdiff_t *local_n0, ptrdiff_t *local_0_start, Chris@42: ptrdiff_t *local_n1, ptrdiff_t *local_1_start); Chris@42: ptrdiff_t fftw_mpi_local_size_transposed(int rnk, const ptrdiff_t *n, MPI_Comm comm, Chris@42: ptrdiff_t *local_n0, ptrdiff_t *local_0_start, Chris@42: ptrdiff_t *local_n1, ptrdiff_t *local_1_start); Chris@42: @end example Chris@42: Chris@42: These functions return the number of elements to allocate (complex Chris@42: numbers for DFT/r2c/c2r plans, real numbers for r2r plans), whereas Chris@42: the @code{local_n0} and @code{local_0_start} return the portion Chris@42: (@code{local_0_start} to @code{local_0_start + local_n0 - 1}) of the Chris@42: first dimension of an @ndims{} array that is stored on the local Chris@42: process. @xref{Basic and advanced distribution interfaces}. For Chris@42: @code{FFTW_MPI_TRANSPOSED_OUT} plans, the @samp{_transposed} variants Chris@42: are useful in order to also return the local portion of the first Chris@42: dimension in the @ndimstrans{} transposed output. Chris@42: @xref{Transposed distributions}. Chris@42: The advanced interface for multidimensional transforms is: Chris@42: Chris@42: @cindex advanced interface Chris@42: @findex fftw_mpi_local_size_many Chris@42: @findex fftw_mpi_local_size_many_transposed Chris@42: @example Chris@42: ptrdiff_t fftw_mpi_local_size_many(int rnk, const ptrdiff_t *n, ptrdiff_t howmany, Chris@42: ptrdiff_t block0, MPI_Comm comm, Chris@42: ptrdiff_t *local_n0, ptrdiff_t *local_0_start); Chris@42: ptrdiff_t fftw_mpi_local_size_many_transposed(int rnk, const ptrdiff_t *n, ptrdiff_t howmany, Chris@42: ptrdiff_t block0, ptrdiff_t block1, MPI_Comm comm, Chris@42: ptrdiff_t *local_n0, ptrdiff_t *local_0_start, Chris@42: ptrdiff_t *local_n1, ptrdiff_t *local_1_start); Chris@42: @end example Chris@42: Chris@42: These differ from the basic interface in only two ways. First, they Chris@42: allow you to specify block sizes @code{block0} and @code{block1} (the Chris@42: latter for the transposed output); you can pass Chris@42: @code{FFTW_MPI_DEFAULT_BLOCK} to use FFTW's default block size as in Chris@42: the basic interface. 
Second, you can pass a @code{howmany} parameter, Chris@42: corresponding to the advanced planning interface below: this is for Chris@42: transforms of contiguous @code{howmany}-tuples of numbers Chris@42: (@code{howmany = 1} in the basic interface). Chris@42: Chris@42: The corresponding basic and advanced routines for one-dimensional Chris@42: transforms (currently only complex DFTs) are: Chris@42: Chris@42: @findex fftw_mpi_local_size_1d Chris@42: @findex fftw_mpi_local_size_many_1d Chris@42: @example Chris@42: ptrdiff_t fftw_mpi_local_size_1d( Chris@42: ptrdiff_t n0, MPI_Comm comm, int sign, unsigned flags, Chris@42: ptrdiff_t *local_ni, ptrdiff_t *local_i_start, Chris@42: ptrdiff_t *local_no, ptrdiff_t *local_o_start); Chris@42: ptrdiff_t fftw_mpi_local_size_many_1d( Chris@42: ptrdiff_t n0, ptrdiff_t howmany, Chris@42: MPI_Comm comm, int sign, unsigned flags, Chris@42: ptrdiff_t *local_ni, ptrdiff_t *local_i_start, Chris@42: ptrdiff_t *local_no, ptrdiff_t *local_o_start); Chris@42: @end example Chris@42: Chris@42: @ctindex FFTW_MPI_SCRAMBLED_OUT Chris@42: @ctindex FFTW_MPI_SCRAMBLED_IN Chris@42: As above, the return value is the number of elements to allocate Chris@42: (complex numbers, for complex DFTs). The @code{local_ni} and Chris@42: @code{local_i_start} arguments return the portion Chris@42: (@code{local_i_start} to @code{local_i_start + local_ni - 1}) of the Chris@42: 1d array that is stored on this process for the transform Chris@42: @emph{input}, and @code{local_no} and @code{local_o_start} are the Chris@42: corresponding quantities for the output. The @code{sign} Chris@42: (@code{FFTW_FORWARD} or @code{FFTW_BACKWARD}) and @code{flags} must Chris@42: match the arguments passed when creating a plan. Although the inputs Chris@42: and outputs have different data distributions in general, it is Chris@42: guaranteed that the @emph{output} data distribution of an Chris@42: @code{FFTW_FORWARD} plan will match the @emph{input} data distribution Chris@42: of an @code{FFTW_BACKWARD} plan and vice versa; similarly for the Chris@42: @code{FFTW_MPI_SCRAMBLED_OUT} and @code{FFTW_MPI_SCRAMBLED_IN} flags. Chris@42: @xref{One-dimensional distributions}.
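
As a concrete illustration, here is a minimal sketch (in C, with arbitrary placeholder dimensions) of how a @samp{local_size} call typically precedes allocation and plan creation for a two-dimensional complex DFT (compare the example in @ref{2d MPI example}); the same pattern applies to the other @samp{local_size} variants:

@example
#include <fftw3-mpi.h>

const ptrdiff_t N0 = 256, N1 = 256; /* placeholder dimensions */
ptrdiff_t alloc_local, local_n0, local_0_start;
fftw_complex *data;
fftw_plan plan;

/* number of complex elements this process must allocate; this may
   exceed local_n0 * N1 to leave room for intermediate steps */
alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                     &local_n0, &local_0_start);
data = fftw_alloc_complex(alloc_local);

/* the same dimensions and communicator must be used for planning */
plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                            FFTW_FORWARD, FFTW_MEASURE);
@end example
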
Chris@42: Chris@42: @node MPI Plan Creation, MPI Wisdom Communication, MPI Data Distribution Functions, FFTW MPI Reference Chris@42: @subsection MPI Plan Creation Chris@42: Chris@42: @subsubheading Complex-data MPI DFTs Chris@42: Chris@42: Plans for complex-data DFTs (@pxref{2d MPI example}) are created by: Chris@42: Chris@42: @findex fftw_mpi_plan_dft_1d Chris@42: @findex fftw_mpi_plan_dft_2d Chris@42: @findex fftw_mpi_plan_dft_3d Chris@42: @findex fftw_mpi_plan_dft Chris@42: @findex fftw_mpi_plan_many_dft Chris@42: @example Chris@42: fftw_plan fftw_mpi_plan_dft_1d(ptrdiff_t n0, fftw_complex *in, fftw_complex *out, Chris@42: MPI_Comm comm, int sign, unsigned flags); Chris@42: fftw_plan fftw_mpi_plan_dft_2d(ptrdiff_t n0, ptrdiff_t n1, Chris@42: fftw_complex *in, fftw_complex *out, Chris@42: MPI_Comm comm, int sign, unsigned flags); Chris@42: fftw_plan fftw_mpi_plan_dft_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2, Chris@42: fftw_complex *in, fftw_complex *out, Chris@42: MPI_Comm comm, int sign, unsigned flags); Chris@42: fftw_plan fftw_mpi_plan_dft(int rnk, const ptrdiff_t *n, Chris@42: fftw_complex *in, fftw_complex *out, Chris@42: MPI_Comm comm, int sign, unsigned flags); Chris@42: fftw_plan fftw_mpi_plan_many_dft(int rnk, const ptrdiff_t *n, Chris@42: ptrdiff_t howmany, ptrdiff_t block, ptrdiff_t tblock, Chris@42: fftw_complex *in, fftw_complex *out, Chris@42: MPI_Comm comm, int sign, unsigned flags); Chris@42: @end example Chris@42: Chris@42: @cindex MPI communicator Chris@42: @cindex collective function Chris@42: These are similar to their serial counterparts (@pxref{Complex DFTs}) Chris@42: in specifying the dimensions, sign, and flags of the transform. The Chris@42: @code{comm} argument gives an MPI communicator that specifies the set Chris@42: of processes to participate in the transform; plan creation is a Chris@42: collective function that must be called for all processes in the Chris@42: communicator. The @code{in} and @code{out} pointers refer only to a Chris@42: portion of the overall transform data (@pxref{MPI Data Distribution}) Chris@42: as specified by the @samp{local_size} functions in the previous Chris@42: section. Unless @code{flags} contains @code{FFTW_ESTIMATE}, these Chris@42: arrays are overwritten during plan creation as for the serial Chris@42: interface. For multi-dimensional transforms, any dimensions @code{> Chris@42: 1} are supported; for one-dimensional transforms, only composite Chris@42: (non-prime) @code{n0} are currently supported (unlike the serial Chris@42: FFTW). Requesting an unsupported transform size will yield a Chris@42: @code{NULL} plan. (As in the serial interface, highly composite sizes Chris@42: generally yield the best performance.) Chris@42: Chris@42: @cindex advanced interface Chris@42: @ctindex FFTW_MPI_DEFAULT_BLOCK Chris@42: @cindex stride Chris@42: The advanced-interface @code{fftw_mpi_plan_many_dft} additionally Chris@42: allows you to specify the block sizes for the first dimension Chris@42: (@code{block}) of the @ndims{} input data and the first dimension Chris@42: (@code{tblock}) of the @ndimstrans{} transposed data (at intermediate Chris@42: steps of the transform, and for the output if Chris@42: @code{FFTW_MPI_TRANSPOSED_OUT} is specified in @code{flags}). These must Chris@42: be the same block sizes as were passed to the corresponding Chris@42: @samp{local_size} function; you can pass @code{FFTW_MPI_DEFAULT_BLOCK} Chris@42: to use FFTW's default block size as in the basic interface.
Also, the Chris@42: @code{howmany} parameter specifies that the transform is of contiguous Chris@42: @code{howmany}-tuples rather than individual complex numbers; this Chris@42: corresponds to the same parameter in the serial advanced interface Chris@42: (@pxref{Advanced Complex DFTs}) with @code{stride = howmany} and Chris@42: @code{dist = 1}. Chris@42: Chris@42: @subsubheading MPI flags Chris@42: Chris@42: The @code{flags} can be any of those for the serial FFTW Chris@42: (@pxref{Planner Flags}), and in addition may include one or more of Chris@42: the following MPI-specific flags, which improve performance at the Chris@42: cost of changing the output or input data formats. Chris@42: Chris@42: @itemize @bullet Chris@42: Chris@42: @item Chris@42: @ctindex FFTW_MPI_SCRAMBLED_OUT Chris@42: @ctindex FFTW_MPI_SCRAMBLED_IN Chris@42: @code{FFTW_MPI_SCRAMBLED_OUT}, @code{FFTW_MPI_SCRAMBLED_IN}: valid for Chris@42: 1d transforms only, these flags indicate that the output/input of the Chris@42: transform are in an undocumented ``scrambled'' order. A forward Chris@42: @code{FFTW_MPI_SCRAMBLED_OUT} transform can be inverted by a backward Chris@42: @code{FFTW_MPI_SCRAMBLED_IN} (times the usual 1/@i{N} normalization). Chris@42: @xref{One-dimensional distributions}. Chris@42: Chris@42: @item Chris@42: @ctindex FFTW_MPI_TRANSPOSED_OUT Chris@42: @ctindex FFTW_MPI_TRANSPOSED_IN Chris@42: @code{FFTW_MPI_TRANSPOSED_OUT}, @code{FFTW_MPI_TRANSPOSED_IN}: valid Chris@42: for multidimensional (@code{rnk > 1}) transforms only, these flags Chris@42: specify that the output or input of an @ndims{} transform is Chris@42: transposed to @ndimstrans{}. @xref{Transposed distributions}. Chris@42: Chris@42: @end itemize Chris@42: Chris@42: @subsubheading Real-data MPI DFTs Chris@42: Chris@42: @cindex r2c Chris@42: Plans for real-input/output (r2c/c2r) DFTs (@pxref{Multi-dimensional Chris@42: MPI DFTs of Real Data}) are created by: Chris@42: Chris@42: @findex fftw_mpi_plan_dft_r2c_2d Chris@42: @findex fftw_mpi_plan_dft_r2c_3d Chris@42: @findex fftw_mpi_plan_dft_r2c Chris@42: @findex fftw_mpi_plan_dft_c2r_2d Chris@42: @findex fftw_mpi_plan_dft_c2r_3d Chris@42: @findex fftw_mpi_plan_dft_c2r Chris@42: @example Chris@42: fftw_plan fftw_mpi_plan_dft_r2c_2d(ptrdiff_t n0, ptrdiff_t n1, Chris@42: double *in, fftw_complex *out, Chris@42: MPI_Comm comm, unsigned flags); Chris@42: fftw_plan fftw_mpi_plan_dft_r2c_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2, Chris@42: double *in, fftw_complex *out, Chris@42: MPI_Comm comm, unsigned flags); Chris@42: fftw_plan fftw_mpi_plan_dft_r2c(int rnk, const ptrdiff_t *n, Chris@42: double *in, fftw_complex *out, Chris@42: MPI_Comm comm, unsigned flags); Chris@42: fftw_plan fftw_mpi_plan_dft_c2r_2d(ptrdiff_t n0, ptrdiff_t n1, Chris@42: fftw_complex *in, double *out, Chris@42: MPI_Comm comm, unsigned flags); Chris@42: fftw_plan fftw_mpi_plan_dft_c2r_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2, Chris@42: fftw_complex *in, double *out, Chris@42: MPI_Comm comm, unsigned flags); Chris@42: fftw_plan fftw_mpi_plan_dft_c2r(int rnk, const ptrdiff_t *n, Chris@42: fftw_complex *in, double *out, Chris@42: MPI_Comm comm,
unsigned flags); Chris@42: @end example Chris@42: Chris@42: Similar to the serial interface (@pxref{Real-data DFTs}), these Chris@42: transform logically @ndims{} real data to/from @ndimshalf{} complex Chris@42: data, representing the non-redundant half of the conjugate-symmetry Chris@42: output of a real-input DFT (@pxref{Multi-dimensional Transforms}). Chris@42: However, the real array must be stored within a padded @ndimspad{} Chris@42: array (much like the in-place serial r2c transforms, but here for Chris@42: out-of-place transforms as well). Currently, only multi-dimensional Chris@42: (@code{rnk > 1}) r2c/c2r transforms are supported (requesting a plan Chris@42: for @code{rnk = 1} will yield @code{NULL}). As explained above Chris@42: (@pxref{Multi-dimensional MPI DFTs of Real Data}), the data Chris@42: distribution of both the real and complex arrays is given by the Chris@42: @samp{local_size} function called for the dimensions of the Chris@42: @emph{complex} array. Similar to the other planning functions, the Chris@42: input and output arrays are overwritten when the plan is created Chris@42: except in @code{FFTW_ESTIMATE} mode. Chris@42: Chris@42: As for the complex DFTs above, there is an advanced interface that Chris@42: allows you to manually specify block sizes and to transform contiguous Chris@42: @code{howmany}-tuples of real/complex numbers: Chris@42: Chris@42: @findex fftw_mpi_plan_many_dft_r2c Chris@42: @findex fftw_mpi_plan_many_dft_c2r Chris@42: @example Chris@42: fftw_plan fftw_mpi_plan_many_dft_r2c Chris@42: (int rnk, const ptrdiff_t *n, ptrdiff_t howmany, Chris@42: ptrdiff_t iblock, ptrdiff_t oblock, Chris@42: double *in, fftw_complex *out, Chris@42: MPI_Comm comm, unsigned flags); Chris@42: fftw_plan fftw_mpi_plan_many_dft_c2r Chris@42: (int rnk, const ptrdiff_t *n, ptrdiff_t howmany, Chris@42: ptrdiff_t iblock, ptrdiff_t oblock, Chris@42: fftw_complex *in, double *out, Chris@42: MPI_Comm comm, unsigned flags); Chris@42: @end example Chris@42: Chris@42: @subsubheading MPI r2r transforms Chris@42: Chris@42: @cindex r2r Chris@42: There are corresponding plan-creation routines for r2r Chris@42: transforms (@pxref{More DFTs of Real Data}), currently supporting Chris@42: multidimensional (@code{rnk > 1}) transforms only (@code{rnk = 1} will Chris@42: yield a @code{NULL} plan): Chris@42: Chris@42: @example Chris@42: fftw_plan fftw_mpi_plan_r2r_2d(ptrdiff_t n0, ptrdiff_t n1, Chris@42: double *in, double *out, Chris@42: MPI_Comm comm, Chris@42: fftw_r2r_kind kind0, fftw_r2r_kind kind1, Chris@42: unsigned flags); Chris@42: fftw_plan fftw_mpi_plan_r2r_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2, Chris@42: double *in, double *out, Chris@42: MPI_Comm comm, Chris@42: fftw_r2r_kind kind0, fftw_r2r_kind kind1, fftw_r2r_kind kind2, Chris@42: unsigned flags); Chris@42: fftw_plan fftw_mpi_plan_r2r(int rnk, const ptrdiff_t *n, Chris@42: double *in, double *out, Chris@42: MPI_Comm comm, const fftw_r2r_kind *kind, Chris@42: unsigned flags); Chris@42: fftw_plan fftw_mpi_plan_many_r2r(int rnk, const ptrdiff_t *n, Chris@42: ptrdiff_t iblock, ptrdiff_t oblock, Chris@42: double *in, double *out, Chris@42: MPI_Comm comm, const fftw_r2r_kind *kind, Chris@42: unsigned flags); Chris@42: @end example Chris@42: Chris@42: The parameters are much the same as for the complex DFTs above, except Chris@42: that the arrays are of real numbers (and hence the outputs of the Chris@42: @samp{local_size} data-distribution functions should be interpreted as Chris@42: counts of real rather than complex numbers).
Also, the @code{kind} Chris@42: parameters specify the r2r kinds along each dimension as for the Chris@42: serial interface (@pxref{Real-to-Real Transform Kinds}). @xref{Other Chris@42: Multi-dimensional Real-data MPI Transforms}. Chris@42: Chris@42: @subsubheading MPI transposition Chris@42: @cindex transpose Chris@42: Chris@42: FFTW also provides routines to plan a transpose of a distributed Chris@42: @code{n0} by @code{n1} array of real numbers, or an array of Chris@42: @code{howmany}-tuples of real numbers with specified block sizes Chris@42: (@pxref{FFTW MPI Transposes}): Chris@42: Chris@42: @findex fftw_mpi_plan_transpose Chris@42: @findex fftw_mpi_plan_many_transpose Chris@42: @example Chris@42: fftw_plan fftw_mpi_plan_transpose(ptrdiff_t n0, ptrdiff_t n1, Chris@42: double *in, double *out, Chris@42: MPI_Comm comm, unsigned flags); Chris@42: fftw_plan fftw_mpi_plan_many_transpose Chris@42: (ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t howmany, Chris@42: ptrdiff_t block0, ptrdiff_t block1, Chris@42: double *in, double *out, MPI_Comm comm, unsigned flags); Chris@42: @end example Chris@42: Chris@42: @cindex new-array execution Chris@42: @findex fftw_mpi_execute_r2r Chris@42: These plans are used with the @code{fftw_mpi_execute_r2r} new-array Chris@42: execute function (@pxref{Using MPI Plans}), since they count as (rank Chris@42: zero) r2r plans from FFTW's perspective. Chris@42: Chris@42: @node MPI Wisdom Communication, , MPI Plan Creation, FFTW MPI Reference Chris@42: @subsection MPI Wisdom Communication Chris@42: Chris@42: To facilitate synchronizing wisdom among the different MPI processes, Chris@42: we provide two functions: Chris@42: Chris@42: @findex fftw_mpi_gather_wisdom Chris@42: @findex fftw_mpi_broadcast_wisdom Chris@42: @example Chris@42: void fftw_mpi_gather_wisdom(MPI_Comm comm); Chris@42: void fftw_mpi_broadcast_wisdom(MPI_Comm comm); Chris@42: @end example Chris@42: Chris@42: The @code{fftw_mpi_gather_wisdom} function gathers all wisdom in the Chris@42: given communicator @code{comm} to the process of rank 0 in the Chris@42: communicator: that process obtains the union of all wisdom on all the Chris@42: processes. As a side effect, some other processes will gain Chris@42: additional wisdom from other processes, but only process 0 will gain Chris@42: the complete union. Chris@42: Chris@42: The @code{fftw_mpi_broadcast_wisdom} function does the reverse: it exports Chris@42: wisdom from process 0 in @code{comm} to all other processes in the Chris@42: communicator, replacing any wisdom they currently have. Chris@42: Chris@42: @xref{FFTW MPI Wisdom}. Chris@42: Chris@42: @c ------------------------------------------------------------ Chris@42: @node FFTW MPI Fortran Interface, , FFTW MPI Reference, Distributed-memory FFTW with MPI Chris@42: @section FFTW MPI Fortran Interface Chris@42: @cindex Fortran interface Chris@42: Chris@42: @cindex iso_c_binding Chris@42: The FFTW MPI interface is callable from modern Fortran compilers Chris@42: supporting the Fortran 2003 @code{iso_c_binding} standard for calling Chris@42: C functions. As described in @ref{Calling FFTW from Modern Fortran}, Chris@42: this means that you can directly call FFTW's C interface from Fortran Chris@42: with only minor changes in syntax.
There are, however, a few things Chris@42: specific to the MPI interface to keep in mind: Chris@42: Chris@42: @itemize @bullet Chris@42: Chris@42: @item Chris@42: Instead of including @code{fftw3.f03} as in @ref{Overview of Fortran Chris@42: interface}, you should @code{include 'fftw3-mpi.f03'} (after Chris@42: @code{use, intrinsic :: iso_c_binding} as before). The Chris@42: @code{fftw3-mpi.f03} file includes @code{fftw3.f03}, so you should Chris@42: @emph{not} @code{include} them both yourself. (You will also want to Chris@42: include the MPI header file, usually via @code{include 'mpif.h'} or Chris@42: similar, although this is not needed by @code{fftw3-mpi.f03} Chris@42: @i{per se}.) (To use the @samp{fftwl_} @code{long double} extended-precision routines in supporting compilers, you should include @code{fftw3l-mpi.f03} in @emph{addition} to @code{fftw3-mpi.f03}. @xref{Extended and quadruple precision in Fortran}.) Chris@42: Chris@42: @item Chris@42: Because of the different storage conventions between C and Fortran, Chris@42: you reverse the order of your array dimensions when passing them to Chris@42: FFTW (@pxref{Reversing array dimensions}). This is merely a Chris@42: difference in notation and incurs no performance overhead. However, Chris@42: it means that, whereas in C the @emph{first} dimension is distributed, Chris@42: in Fortran the @emph{last} dimension of your array is distributed. Chris@42: Chris@42: @item Chris@42: @cindex MPI communicator Chris@42: In Fortran, communicators are stored as @code{integer} types; there is Chris@42: no @code{MPI_Comm} type, nor is there any way to access a C Chris@42: @code{MPI_Comm}. Fortunately, this is taken care of for you by the Chris@42: FFTW Fortran interface: whenever the C interface expects an Chris@42: @code{MPI_Comm} type, you should pass the Fortran communicator as an Chris@42: @code{integer}.@footnote{Technically, this is because you aren't Chris@42: actually calling the C functions directly. You are calling wrapper Chris@42: functions that translate the communicator with @code{MPI_Comm_f2c} Chris@42: before calling the ordinary C interface. This is all done Chris@42: transparently, however, since the @code{fftw3-mpi.f03} interface file Chris@42: renames the wrappers so that they are called in Fortran with the same Chris@42: names as the C interface functions.} Chris@42: Chris@42: @item Chris@42: Because you need to call the @samp{local_size} function to find out Chris@42: how much space to allocate, and this may be @emph{larger} than the Chris@42: local portion of the array (@pxref{MPI Data Distribution}), you should Chris@42: @emph{always} allocate your arrays dynamically using FFTW's allocation Chris@42: routines as described in @ref{Allocating aligned memory in Fortran}. Chris@42: (Coincidentally, this also provides the best performance by Chris@42: guaranteeing proper data alignment.) Chris@42: Chris@42: @item Chris@42: Because all sizes in the MPI FFTW interface are declared as Chris@42: @code{ptrdiff_t} in C, you should use @code{integer(C_INTPTR_T)} in Chris@42: Fortran (@pxref{FFTW Fortran type reference}).
Chris@42: Chris@42: @item Chris@42: @findex fftw_execute_dft Chris@42: @findex fftw_mpi_execute_dft Chris@42: @cindex new-array execution Chris@42: In Fortran, because of the language semantics, we generally recommend Chris@42: using the new-array execute functions for all plans, even in the Chris@42: common case where you are executing the plan on the same arrays for Chris@42: which the plan was created (@pxref{Plan execution in Fortran}). Chris@42: However, note that in the MPI interface these functions are changed: Chris@42: @code{fftw_execute_dft} becomes @code{fftw_mpi_execute_dft}, Chris@42: etcetera. @xref{Using MPI Plans}. Chris@42: Chris@42: @end itemize Chris@42: Chris@42: For example, here is a Fortran code snippet to perform a distributed Chris@42: @twodims{L,M} complex DFT in-place. (This assumes you have already Chris@42: initialized MPI with @code{MPI_init} and have also performed Chris@42: @code{call fftw_mpi_init}.) Chris@42: Chris@42: @example Chris@42: use, intrinsic :: iso_c_binding Chris@42: include 'fftw3-mpi.f03' Chris@42: integer(C_INTPTR_T), parameter :: L = ... Chris@42: integer(C_INTPTR_T), parameter :: M = ... Chris@42: type(C_PTR) :: plan, cdata Chris@42: complex(C_DOUBLE_COMPLEX), pointer :: data(:,:) Chris@42: integer(C_INTPTR_T) :: i, j, alloc_local, local_M, local_j_offset Chris@42: Chris@42: ! @r{get local data size and allocate (note dimension reversal)} Chris@42: alloc_local = fftw_mpi_local_size_2d(M, L, MPI_COMM_WORLD, & Chris@42: local_M, local_j_offset) Chris@42: cdata = fftw_alloc_complex(alloc_local) Chris@42: call c_f_pointer(cdata, data, [L,local_M]) Chris@42: Chris@42: ! @r{create MPI plan for in-place forward DFT (note dimension reversal)} Chris@42: plan = fftw_mpi_plan_dft_2d(M, L, data, data, MPI_COMM_WORLD, & Chris@42: FFTW_FORWARD, FFTW_MEASURE) Chris@42: Chris@42: ! @r{initialize data to some function} my_function(i,j) Chris@42: do j = 1, local_M Chris@42: do i = 1, L Chris@42: data(i, j) = my_function(i, j + local_j_offset) Chris@42: end do Chris@42: end do Chris@42: Chris@42: ! @r{compute transform (as many times as desired)} Chris@42: call fftw_mpi_execute_dft(plan, data, data) Chris@42: Chris@42: call fftw_destroy_plan(plan) Chris@42: call fftw_free(cdata) Chris@42: @end example Chris@42: Chris@42: Note that we called @code{fftw_mpi_local_size_2d} and Chris@42: @code{fftw_mpi_plan_dft_2d} with the dimensions in reversed order, Chris@42: since a @twodims{L,M} Fortran array is viewed by FFTW in C as a Chris@42: @twodims{M, L} array. This means that the array was distributed over Chris@42: the @code{M} dimension, the local portion of which is a Chris@42: @twodims{L,local_M} array in Fortran. (You must @emph{not} use an Chris@42: @code{allocate} statement to allocate an @twodims{L,local_M} array, Chris@42: however; you must allocate @code{alloc_local} complex numbers, which Chris@42: may be greater than @code{L * local_M}, in order to reserve space for Chris@42: intermediate steps of the transform.) Finally, we mention that Chris@42: because C's array indices are zero-based, the @code{local_j_offset} Chris@42: argument can conveniently be interpreted as an offset in the 1-based Chris@42: @code{j} index (rather than as a starting index as in C).
Chris@42: Chris@42: If instead you had used the @code{ior(FFTW_MEASURE, Chris@42: FFTW_MPI_TRANSPOSED_OUT)} flag, the output of the transform would be a Chris@42: transposed @twodims{M,local_L} array, associated with the @emph{same} Chris@42: @code{cdata} allocation (since the transform is in-place), and which Chris@42: you could declare with: Chris@42: Chris@42: @example Chris@42: complex(C_DOUBLE_COMPLEX), pointer :: tdata(:,:) Chris@42: ... Chris@42: call c_f_pointer(cdata, tdata, [M,local_L]) Chris@42: @end example Chris@42: Chris@42: where @code{local_L} would have been obtained by changing the Chris@42: @code{fftw_mpi_local_size_2d} call to: Chris@42: Chris@42: @example Chris@42: alloc_local = fftw_mpi_local_size_2d_transposed(M, L, MPI_COMM_WORLD, & Chris@42: local_M, local_j_offset, local_L, local_i_offset) Chris@42: @end example