Chris@19 1 @node Distributed-memory FFTW with MPI, Calling FFTW from Modern Fortran, Multi-threaded FFTW, Top
Chris@19 2 @chapter Distributed-memory FFTW with MPI
Chris@19 3 @cindex MPI
Chris@19 4
Chris@19 5 @cindex parallel transform
Chris@19 6 In this chapter we document the parallel FFTW routines for parallel
Chris@19 7 systems supporting the MPI message-passing interface. Unlike the
Chris@19 8 shared-memory threads described in the previous chapter, MPI allows
Chris@19 9 you to use @emph{distributed-memory} parallelism, where each CPU has
Chris@19 10 its own separate memory, and which can scale up to clusters of many
Chris@19 11 thousands of processors. This capability comes at a price, however:
Chris@19 12 each process only stores a @emph{portion} of the data to be
Chris@19 13 transformed, which means that the data structures and
Chris@19 14 programming-interface are quite different from the serial or threads
Chris@19 15 versions of FFTW.
Chris@19 16 @cindex data distribution
Chris@19 17
Chris@19 18
Chris@19 19 Distributed-memory parallelism is especially useful when you are
Chris@19 20 transforming arrays so large that they do not fit into the memory of a
Chris@19 21 single processor. The storage per-process required by FFTW's MPI
Chris@19 22 routines is proportional to the total array size divided by the number
Chris@19 23 of processes. Conversely, distributed-memory parallelism can easily
Chris@19 24 pose an unacceptably high communications overhead for small problems;
Chris@19 25 the threshold problem size for which parallelism becomes advantageous
Chris@19 26 will depend on the precise problem you are interested in, your
Chris@19 27 hardware, and your MPI implementation.
Chris@19 28
Chris@19 29 A note on terminology: in MPI, you divide the data among a set of
Chris@19 30 ``processes'' which each run in their own memory address space.
Chris@19 31 Generally, each process runs on a different physical processor, but
Chris@19 32 this is not required. A set of processes in MPI is described by an
Chris@19 33 opaque data structure called a ``communicator,'' the most common of
Chris@19 34 which is the predefined communicator @code{MPI_COMM_WORLD} which
Chris@19 35 refers to @emph{all} processes. For more information on these and
Chris@19 36 other concepts common to all MPI programs, we refer the reader to the
Chris@19 37 documentation at @uref{http://www.mcs.anl.gov/research/projects/mpi/, the MPI home
Chris@19 38 page}.
Chris@19 39 @cindex MPI communicator
Chris@19 40 @ctindex MPI_COMM_WORLD
Chris@19 41
Chris@19 42
Chris@19 43 We assume in this chapter that the reader is familiar with the usage
Chris@19 44 of the serial (uniprocessor) FFTW, and focus only on the concepts new
Chris@19 45 to the MPI interface.
Chris@19 46
Chris@19 47 @menu
Chris@19 48 * FFTW MPI Installation::
Chris@19 49 * Linking and Initializing MPI FFTW::
Chris@19 50 * 2d MPI example::
Chris@19 51 * MPI Data Distribution::
Chris@19 52 * Multi-dimensional MPI DFTs of Real Data::
Chris@19 53 * Other Multi-dimensional Real-data MPI Transforms::
Chris@19 54 * FFTW MPI Transposes::
Chris@19 55 * FFTW MPI Wisdom::
Chris@19 56 * Avoiding MPI Deadlocks::
Chris@19 57 * FFTW MPI Performance Tips::
Chris@19 58 * Combining MPI and Threads::
Chris@19 59 * FFTW MPI Reference::
Chris@19 60 * FFTW MPI Fortran Interface::
Chris@19 61 @end menu
Chris@19 62
Chris@19 63 @c ------------------------------------------------------------
Chris@19 64 @node FFTW MPI Installation, Linking and Initializing MPI FFTW, Distributed-memory FFTW with MPI, Distributed-memory FFTW with MPI
Chris@19 65 @section FFTW MPI Installation
Chris@19 66
Chris@19 67 All of the FFTW MPI code is located in the @code{mpi} subdirectory of
Chris@19 68 the FFTW package. On Unix systems, the FFTW MPI libraries and header
Chris@19 69 files are automatically configured, compiled, and installed along with
Chris@19 70 the uniprocessor FFTW libraries simply by including
Chris@19 71 @code{--enable-mpi} in the flags to the @code{configure} script
Chris@19 72 (@pxref{Installation on Unix}).
Chris@19 73 @fpindex configure
Chris@19 74
Chris@19 75
Chris@19 76 Any implementation of the MPI standard, version 1 or later, should
Chris@19 77 work with FFTW. The @code{configure} script will attempt to
Chris@19 78 automatically detect how to compile and link code using your MPI
Chris@19 79 implementation. In some cases, especially if you have multiple
Chris@19 80 different MPI implementations installed or have an unusual MPI
Chris@19 81 software package, you may need to provide this information explicitly.
Chris@19 82
Chris@19 83 Most commonly, one compiles MPI code by invoking a special compiler
Chris@19 84 command, typically @code{mpicc} for C code. The @code{configure}
Chris@19 85 script knows the most common names for this command, but you can
Chris@19 86 specify the MPI compilation command explicitly by setting the
Chris@19 87 @code{MPICC} variable, as in @samp{./configure MPICC=mpicc ...}.
Chris@19 88 @fpindex mpicc
Chris@19 89
Chris@19 90
Chris@19 91 If, instead of a special compiler command, you need to link a certain
Chris@19 92 library, you can specify the link command via the @code{MPILIBS}
Chris@19 93 variable, as in @samp{./configure MPILIBS=-lmpi ...}. Note that if
Chris@19 94 your MPI library is installed in a non-standard location (one the
Chris@19 95 compiler does not know about by default), you may also have to specify
Chris@19 96 the location of the library and header files via @code{LDFLAGS} and
Chris@19 97 @code{CPPFLAGS} variables, respectively, as in @samp{./configure
Chris@19 98 LDFLAGS=-L/path/to/mpi/libs CPPFLAGS=-I/path/to/mpi/include ...}.
Chris@19 99
Chris@19 100 @c ------------------------------------------------------------
Chris@19 101 @node Linking and Initializing MPI FFTW, 2d MPI example, FFTW MPI Installation, Distributed-memory FFTW with MPI
Chris@19 102 @section Linking and Initializing MPI FFTW
Chris@19 103
Chris@19 104 Programs using the MPI FFTW routines should be linked with
Chris@19 105 @code{-lfftw3_mpi -lfftw3 -lm} on Unix in double precision,
Chris@19 106 @code{-lfftw3f_mpi -lfftw3f -lm} in single precision, and so on
Chris@19 107 (@pxref{Precision}). You will also need to link with whatever library
Chris@19 108 is responsible for MPI on your system; in most MPI implementations,
Chris@19 109 there is a special compiler alias named @code{mpicc} to compile and
Chris@19 110 link MPI code.
Chris@19 111 @fpindex mpicc
Chris@19 112 @cindex linking on Unix
Chris@19 113 @cindex precision
Chris@19 114
Chris@19 115
Chris@19 116 @findex fftw_init_threads
Chris@19 117 Before calling any FFTW routines except possibly
Chris@19 118 @code{fftw_init_threads} (@pxref{Combining MPI and Threads}), but after calling
Chris@19 119 @code{MPI_Init}, you should call the function:
Chris@19 120
Chris@19 121 @example
Chris@19 122 void fftw_mpi_init(void);
Chris@19 123 @end example
Chris@19 124 @findex fftw_mpi_init
Chris@19 125
Chris@19 126 If, at the end of your program, you want to get rid of all memory and
Chris@19 127 other resources allocated internally by FFTW, for both the serial and
Chris@19 128 MPI routines, you can call:
Chris@19 129
Chris@19 130 @example
Chris@19 131 void fftw_mpi_cleanup(void);
Chris@19 132 @end example
Chris@19 133 @findex fftw_mpi_cleanup
Chris@19 134
Chris@19 135 which is much like the @code{fftw_cleanup()} function except that it
Chris@19 136 also gets rid of FFTW's MPI-related data. You must @emph{not} execute
Chris@19 137 any previously created plans after calling this function.
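
For illustration, a minimal skeleton combining these calls (with the
planning and execution steps, described in the next section, elided)
might look like this:

@example
#include <fftw3-mpi.h>

int main(int argc, char **argv)
@{
    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* @r{... create, execute, and destroy plans here ...} */

    fftw_mpi_cleanup();
    MPI_Finalize();
    return 0;
@}
@end example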
Chris@19 138
Chris@19 139 @c ------------------------------------------------------------
Chris@19 140 @node 2d MPI example, MPI Data Distribution, Linking and Initializing MPI FFTW, Distributed-memory FFTW with MPI
Chris@19 141 @section 2d MPI example
Chris@19 142
Chris@19 143 Before we document the FFTW MPI interface in detail, we begin with a
Chris@19 144 simple example outlining how one would perform a two-dimensional
Chris@19 145 @code{N0} by @code{N1} complex DFT.
Chris@19 146
Chris@19 147 @example
Chris@19 148 #include <fftw3-mpi.h>
Chris@19 149
Chris@19 150 int main(int argc, char **argv)
Chris@19 151 @{
Chris@19 152 const ptrdiff_t N0 = ..., N1 = ...;
Chris@19 153 fftw_plan plan;
Chris@19 154 fftw_complex *data;
Chris@19 155 ptrdiff_t alloc_local, local_n0, local_0_start, i, j;
Chris@19 156
Chris@19 157 MPI_Init(&argc, &argv);
Chris@19 158 fftw_mpi_init();
Chris@19 159
Chris@19 160 /* @r{get local data size and allocate} */
Chris@19 161 alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
Chris@19 162 &local_n0, &local_0_start);
Chris@19 163 data = fftw_alloc_complex(alloc_local);
Chris@19 164
Chris@19 165 /* @r{create plan for in-place forward DFT} */
Chris@19 166 plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
Chris@19 167 FFTW_FORWARD, FFTW_ESTIMATE);
Chris@19 168
Chris@19 169 /* @r{initialize data to some function} my_function(x,y) */
Chris@19 170 for (i = 0; i < local_n0; ++i) for (j = 0; j < N1; ++j)
Chris@19 171 data[i*N1 + j] = my_function(local_0_start + i, j);
Chris@19 172
Chris@19 173 /* @r{compute transforms, in-place, as many times as desired} */
Chris@19 174 fftw_execute(plan);
Chris@19 175
Chris@19 176 fftw_destroy_plan(plan);
Chris@19 177
Chris@19 178 MPI_Finalize();
Chris@19 179 @}
Chris@19 180 @end example
Chris@19 181
Chris@19 182 As can be seen above, the MPI interface follows the same basic style
Chris@19 183 of allocate/plan/execute/destroy as the serial FFTW routines. All of
Chris@19 184 the MPI-specific routines are prefixed with @samp{fftw_mpi_} instead
Chris@19 185 of @samp{fftw_}. There are a few important differences, however:
Chris@19 186
Chris@19 187 First, we must call @code{fftw_mpi_init()} after calling
Chris@19 188 @code{MPI_Init} (required in all MPI programs) and before calling any
Chris@19 189 other @samp{fftw_mpi_} routine.
Chris@19 190 @findex MPI_Init
Chris@19 191 @findex fftw_mpi_init
Chris@19 192
Chris@19 193
Chris@19 194 Second, when we create the plan with @code{fftw_mpi_plan_dft_2d},
Chris@19 195 analogous to @code{fftw_plan_dft_2d}, we pass an additional argument:
Chris@19 196 the communicator, indicating which processes will participate in the
Chris@19 197 transform (here @code{MPI_COMM_WORLD}, indicating all processes).
Chris@19 198 Whenever you create, execute, or destroy a plan for an MPI transform,
Chris@19 199 you must call the corresponding FFTW routine on @emph{all} processes
Chris@19 200 in the communicator for that transform. (That is, these are
Chris@19 201 @emph{collective} calls.) Note that the plan for the MPI transform
Chris@19 202 uses the standard @code{fftw_execute} and @code{fftw_destroy_plan} routines
Chris@19 203 (on the other hand, there are MPI-specific new-array execute functions
Chris@19 204 documented below).
Chris@19 205 @cindex collective function
Chris@19 206 @findex fftw_mpi_plan_dft_2d
Chris@19 207 @ctindex MPI_COMM_WORLD
Chris@19 208
Chris@19 209
Chris@19 210 Third, all of the FFTW MPI routines take @code{ptrdiff_t} arguments
Chris@19 211 instead of @code{int} as for the serial FFTW. @code{ptrdiff_t} is a
Chris@19 212 standard C integer type which is (at least) 32 bits wide on a 32-bit
Chris@19 213 machine and 64 bits wide on a 64-bit machine. This is to make it easy
Chris@19 214 to specify very large parallel transforms on a 64-bit machine. (You
Chris@19 215 can specify 64-bit transform sizes in the serial FFTW, too, but only
Chris@19 216 by using the @samp{guru64} planner interface. @xref{64-bit Guru
Chris@19 217 Interface}.)
Chris@19 218 @tindex ptrdiff_t
Chris@19 219 @cindex 64-bit architecture
Chris@19 220
Chris@19 221
Chris@19 222 Fourth, and most importantly, you don't allocate the entire
Chris@19 223 two-dimensional array on each process. Instead, you call
Chris@19 224 @code{fftw_mpi_local_size_2d} to find out what @emph{portion} of the
Chris@19 225 array resides on each processor, and how much space to allocate.
Chris@19 226 Here, the portion of the array on each process is a @code{local_n0} by
Chris@19 227 @code{N1} slice of the total array, starting at index
Chris@19 228 @code{local_0_start}. The total number of @code{fftw_complex} numbers
Chris@19 229 to allocate is given by the @code{alloc_local} return value, which
Chris@19 230 @emph{may} be greater than @code{local_n0 * N1} (in case some
Chris@19 231 intermediate calculations require additional storage). The data
Chris@19 232 distribution in FFTW's MPI interface is described in more detail by
Chris@19 233 the next section.
Chris@19 234 @findex fftw_mpi_local_size_2d
Chris@19 235 @cindex data distribution
Chris@19 236
Chris@19 237
Chris@19 238 Given the portion of the array that resides on the local process, it
Chris@19 239 is straightforward to initialize the data (here to a function
Chris@19 240 @code{my_function}) and otherwise manipulate it. Of course, at the end
Chris@19 241 of the program you may want to output the data somehow, but
Chris@19 242 synchronizing this output is up to you and is beyond the scope of this
Chris@19 243 manual. (One good way to output a large multi-dimensional distributed
Chris@19 244 array in MPI to a portable binary file is to use the free HDF5
Chris@19 245 library; see the @uref{http://www.hdfgroup.org/, HDF home page}.)
Chris@19 246 @cindex HDF5
Chris@19 247 @cindex MPI I/O
Chris@19 248
Chris@19 249 @c ------------------------------------------------------------
Chris@19 250 @node MPI Data Distribution, Multi-dimensional MPI DFTs of Real Data, 2d MPI example, Distributed-memory FFTW with MPI
Chris@19 251 @section MPI Data Distribution
Chris@19 252 @cindex data distribution
Chris@19 253
Chris@19 254 The most important concept to understand in using FFTW's MPI interface
Chris@19 255 is the data distribution. With a serial or multithreaded FFT, all of
Chris@19 256 the inputs and outputs are stored as a single contiguous chunk of
Chris@19 257 memory. With a distributed-memory FFT, the inputs and outputs are
Chris@19 258 broken into disjoint blocks, one per process.
Chris@19 259
Chris@19 260 In particular, FFTW uses a @emph{1d block distribution} of the data,
Chris@19 261 distributed along the @emph{first dimension}. For example, if you
Chris@19 262 want to perform a @twodims{100,200} complex DFT, distributed over 4
Chris@19 263 processes, each process will get a @twodims{25,200} slice of the data.
Chris@19 264 That is, process 0 will get rows 0 through 24, process 1 will get rows
Chris@19 265 25 through 49, process 2 will get rows 50 through 74, and process 3
Chris@19 266 will get rows 75 through 99. If you take the same array but
Chris@19 267 distribute it over 3 processes, then it is not evenly divisible so the
Chris@19 268 different processes will have unequal chunks. FFTW's default choice
Chris@19 269 in this case is to assign 34 rows to processes 0 and 1, and 32 rows to
Chris@19 270 process 2.
Chris@19 271 @cindex block distribution
Chris@19 272
Chris@19 273
Chris@19 274 FFTW provides several @samp{fftw_mpi_local_size} routines that you can
Chris@19 275 call to find out what portion of an array is stored on the current
Chris@19 276 process. In most cases, you should use the default block sizes picked
Chris@19 277 by FFTW, but it is also possible to specify your own block size. For
Chris@19 278 example, with a @twodims{100,200} array on three processes, you can
Chris@19 279 tell FFTW to use a block size of 40, which would assign 40 rows to
Chris@19 280 processes 0 and 1, and 20 rows to process 2. FFTW's default is to
Chris@19 281 divide the data equally among the processes if possible, and as best
Chris@19 282 it can otherwise. The rows are always assigned in ``rank order,''
Chris@19 283 i.e. process 0 gets the first block of rows, then process 1, and so
Chris@19 284 on. (You can change this by using @code{MPI_Comm_split} to create a
Chris@19 285 new communicator with re-ordered processes.) However, you should
Chris@19 286 always call the @samp{fftw_mpi_local_size} routines, if possible,
Chris@19 287 rather than trying to predict FFTW's distribution choices.
Chris@19 288
Chris@19 289 In particular, it is critical that you allocate the storage size that
Chris@19 290 is returned by @samp{fftw_mpi_local_size}, which is @emph{not}
Chris@19 291 necessarily the size of the local slice of the array. The reason is
Chris@19 292 that intermediate steps of FFTW's algorithms involve transposing the
Chris@19 293 array and redistributing the data, so at these intermediate steps FFTW
Chris@19 294 may require more local storage space (albeit always proportional to
Chris@19 295 the total size divided by the number of processes). The
Chris@19 296 @samp{fftw_mpi_local_size} functions know how much storage is required
Chris@19 297 for these intermediate steps and tell you the correct amount to
Chris@19 298 allocate.
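
For example, a short sketch (assuming @code{<stdio.h>} is included and
FFTW's MPI interface has been initialized as described earlier) that
reports each process's slice of a 100 by 200 complex array might be:

@example
ptrdiff_t alloc_local, local_n0, local_0_start;
int rank;

alloc_local = fftw_mpi_local_size_2d(100, 200, MPI_COMM_WORLD,
                                     &local_n0, &local_0_start);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
printf("process %d: rows %td to %td, allocating %td complex numbers\n",
       rank, local_0_start, local_0_start + local_n0 - 1, alloc_local);
@end example

Here, @code{alloc_local} (not @code{local_n0 * 200}) is the value you
would pass to @code{fftw_alloc_complex}.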
Chris@19 299
Chris@19 300 @menu
Chris@19 301 * Basic and advanced distribution interfaces::
Chris@19 302 * Load balancing::
Chris@19 303 * Transposed distributions::
Chris@19 304 * One-dimensional distributions::
Chris@19 305 @end menu
Chris@19 306
Chris@19 307 @node Basic and advanced distribution interfaces, Load balancing, MPI Data Distribution, MPI Data Distribution
Chris@19 308 @subsection Basic and advanced distribution interfaces
Chris@19 309
Chris@19 310 As with the planner interface, the @samp{fftw_mpi_local_size}
Chris@19 311 distribution interface is broken into basic and advanced
Chris@19 312 (@samp{_many}) interfaces, where the latter allows you to specify the
Chris@19 313 block size manually and also to request block sizes when computing
Chris@19 314 multiple transforms simultaneously. These functions are documented
Chris@19 315 more exhaustively by the FFTW MPI Reference, but we summarize the
Chris@19 316 basic ideas here using a couple of two-dimensional examples.
Chris@19 317
Chris@19 318 For the @twodims{100,200} complex-DFT example, above, we would find
Chris@19 319 the distribution by calling the following function in the basic
Chris@19 320 interface:
Chris@19 321
Chris@19 322 @example
Chris@19 323 ptrdiff_t fftw_mpi_local_size_2d(ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
Chris@19 324 ptrdiff_t *local_n0, ptrdiff_t *local_0_start);
Chris@19 325 @end example
Chris@19 326 @findex fftw_mpi_local_size_2d
Chris@19 327
Chris@19 328 Given the total size of the data to be transformed (here, @code{n0 =
Chris@19 329 100} and @code{n1 = 200}) and an MPI communicator (@code{comm}), this
Chris@19 330 function provides three numbers.
Chris@19 331
Chris@19 332 First, it describes the shape of the local data: the current process
Chris@19 333 should store a @code{local_n0} by @code{n1} slice of the overall
Chris@19 334 dataset, in row-major order (@code{n1} dimension contiguous), starting
Chris@19 335 at index @code{local_0_start}. That is, if the total dataset is
Chris@19 336 viewed as a @code{n0} by @code{n1} matrix, the current process should
Chris@19 337 store the rows @code{local_0_start} to
Chris@19 338 @code{local_0_start+local_n0-1}. Obviously, if you are running with
Chris@19 339 only a single MPI process, that process will store the entire array:
Chris@19 340 @code{local_0_start} will be zero and @code{local_n0} will be
Chris@19 341 @code{n0}. @xref{Row-major Format}.
Chris@19 342 @cindex row-major
Chris@19 343
Chris@19 344
Chris@19 345 Second, the return value is the total number of data elements (e.g.,
Chris@19 346 complex numbers for a complex DFT) that should be allocated for the
Chris@19 347 input and output arrays on the current process (ideally with
Chris@19 348 @code{fftw_malloc} or an @samp{fftw_alloc} function, to ensure optimal
Chris@19 349 alignment). It might seem that this should always be equal to
Chris@19 350 @code{local_n0 * n1}, but this is @emph{not} the case. FFTW's
Chris@19 351 distributed FFT algorithms require data redistributions at
Chris@19 352 intermediate stages of the transform, and in some circumstances this
Chris@19 353 may require slightly larger local storage. This is discussed in more
Chris@19 354 detail below, under @ref{Load balancing}.
Chris@19 355 @findex fftw_malloc
Chris@19 356 @findex fftw_alloc_complex
Chris@19 357
Chris@19 358
Chris@19 359 @cindex advanced interface
Chris@19 360 The advanced-interface @samp{local_size} function for multidimensional
Chris@19 361 transforms returns the same three things (@code{local_n0},
Chris@19 362 @code{local_0_start}, and the total number of elements to allocate),
Chris@19 363 but takes more inputs:
Chris@19 364
Chris@19 365 @example
Chris@19 366 ptrdiff_t fftw_mpi_local_size_many(int rnk, const ptrdiff_t *n,
Chris@19 367 ptrdiff_t howmany,
Chris@19 368 ptrdiff_t block0,
Chris@19 369 MPI_Comm comm,
Chris@19 370 ptrdiff_t *local_n0,
Chris@19 371 ptrdiff_t *local_0_start);
Chris@19 372 @end example
Chris@19 373 @findex fftw_mpi_local_size_many
Chris@19 374
Chris@19 375 The two-dimensional case above corresponds to @code{rnk = 2} and an
Chris@19 376 array @code{n} of length 2 with @code{n[0] = n0} and @code{n[1] = n1}.
Chris@19 377 This routine is for any @code{rnk > 1}; one-dimensional transforms
Chris@19 378 have their own interface because they work slightly differently, as
Chris@19 379 discussed below.
Chris@19 380
Chris@19 381 First, the advanced interface allows you to perform multiple
Chris@19 382 transforms at once, of interleaved data, as specified by the
Chris@19 383 @code{howmany} parameter. (@code{howmany} is 1 for a single
Chris@19 384 transform.)
Chris@19 385
Chris@19 386 Second, here you can specify your desired block size in the @code{n0}
Chris@19 387 dimension, @code{block0}. To use FFTW's default block size, pass
Chris@19 388 @code{FFTW_MPI_DEFAULT_BLOCK} (0) for @code{block0}. Otherwise, on
Chris@19 389 @code{P} processes, FFTW will return @code{local_n0} equal to
Chris@19 390 @code{block0} on the first @code{n0 / block0} processes (rounded down),
Chris@19 391 return @code{local_n0} equal to @code{n0 - block0 * (n0 / block0)} on
Chris@19 392 the next process, and @code{local_n0} equal to zero on any remaining
Chris@19 393 processes. In general, we recommend using the default block size
Chris@19 394 (which corresponds to @code{n0 / P}, rounded up).
Chris@19 395 @ctindex FFTW_MPI_DEFAULT_BLOCK
Chris@19 396 @cindex block distribution
Chris@19 397
Chris@19 398
Chris@19 399 For example, suppose you have @code{P = 4} processes and @code{n0 =
Chris@19 400 21}. The default will be a block size of @code{6}, which will give
Chris@19 401 @code{local_n0 = 6} on the first three processes and @code{local_n0 =
Chris@19 402 3} on the last process. Instead, however, you could specify
Chris@19 403 @code{block0 = 5} if you wanted, which would give @code{local_n0 = 5}
Chris@19 404 on processes 0 to 2, @code{local_n0 = 6} on process 3. (This choice,
Chris@19 405 while it may look superficially more ``balanced,'' has the same
Chris@19 406 critical path as FFTW's default but requires more communications.)
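
In code, such an explicit block size could be requested with something
like the following sketch, where the second dimension @code{n1 = 200}
is chosen purely for illustration:

@example
const ptrdiff_t n[2] = @{21, 200@};  /* @r{n0 = 21, n1 = 200} */
ptrdiff_t alloc_local, local_n0, local_0_start;

alloc_local = fftw_mpi_local_size_many(2, n, 1,  /* @r{howmany = 1} */
                                       5,        /* @r{block0 = 5} */
                                       MPI_COMM_WORLD,
                                       &local_n0, &local_0_start);
@end example

Passing @code{FFTW_MPI_DEFAULT_BLOCK} instead of @code{5} would select
FFTW's default block size of 6 in this example.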
Chris@19 407
Chris@19 408 @node Load balancing, Transposed distributions, Basic and advanced distribution interfaces, MPI Data Distribution
Chris@19 409 @subsection Load balancing
Chris@19 410 @cindex load balancing
Chris@19 411
Chris@19 412 Ideally, when you parallelize a transform over some @math{P}
Chris@19 413 processes, each process should end up with work that takes equal time.
Chris@19 414 Otherwise, all of the processes end up waiting on whichever process is
Chris@19 415 slowest. This goal is known as ``load balancing.'' In this section,
Chris@19 416 we describe the circumstances under which FFTW is able to load-balance
Chris@19 417 well, and in particular how you should choose your transform size in
Chris@19 418 order to load balance.
Chris@19 419
Chris@19 420 Load balancing is especially difficult when you are parallelizing over
Chris@19 421 heterogeneous machines; for example, if one of your processors is an
Chris@19 422 old 486 and another is a Pentium IV, obviously you should give the
Chris@19 423 Pentium more work to do than the 486 since the latter is much slower.
Chris@19 424 FFTW does not deal with this problem, however---it assumes that your
Chris@19 425 processes run on hardware of comparable speed, and that the goal is
Chris@19 426 therefore to divide the problem as equally as possible.
Chris@19 427
Chris@19 428 For a multi-dimensional complex DFT, FFTW can divide the problem
Chris@19 429 equally among the processes if: (i) the @emph{first} dimension
Chris@19 430 @code{n0} is divisible by @math{P}; and (ii), the @emph{product} of
Chris@19 431 the subsequent dimensions is divisible by @math{P}. (For the advanced
Chris@19 432 interface, where you can specify multiple simultaneous transforms via
Chris@19 433 some ``vector'' length @code{howmany}, a factor of @code{howmany} is
Chris@19 434 included in the product of the subsequent dimensions.)
Chris@19 435
Chris@19 436 For a one-dimensional complex DFT, the length @code{N} of the data
Chris@19 437 should be divisible by @math{P} @emph{squared} to be able to divide
Chris@19 438 the problem equally among the processes.
Chris@19 439
Chris@19 440 @node Transposed distributions, One-dimensional distributions, Load balancing, MPI Data Distribution
Chris@19 441 @subsection Transposed distributions
Chris@19 442
Chris@19 443 Internally, FFTW's MPI transform algorithms work by first computing
Chris@19 444 transforms of the data local to each process, then by globally
Chris@19 445 @emph{transposing} the data in some fashion to redistribute the data
Chris@19 446 among the processes, transforming the new data local to each process,
Chris@19 447 and transposing back. For example, a two-dimensional @code{n0} by
Chris@19 448 @code{n1} array, distributed across the @code{n0} dimension, is
Chris@19 449 transformed by: (i) transforming the @code{n1} dimension, which is
Chris@19 450 local to each process; (ii) transposing to an @code{n1} by @code{n0}
Chris@19 451 array, distributed across the @code{n1} dimension; (iii) transforming
Chris@19 452 the @code{n0} dimension, which is now local to each process; (iv)
Chris@19 453 transposing back.
Chris@19 454 @cindex transpose
Chris@19 455
Chris@19 456
Chris@19 457 However, in many applications it is acceptable to compute a
Chris@19 458 multidimensional DFT whose results are produced in transposed order
Chris@19 459 (e.g., @code{n1} by @code{n0} in two dimensions). This provides a
Chris@19 460 significant performance advantage, because it means that the final
Chris@19 461 transposition step can be omitted. FFTW supports this optimization,
Chris@19 462 which you specify by passing the flag @code{FFTW_MPI_TRANSPOSED_OUT}
Chris@19 463 to the planner routines. To compute the inverse transform of
Chris@19 464 transposed output, you specify @code{FFTW_MPI_TRANSPOSED_IN} to tell
Chris@19 465 it that the input is transposed. In this section, we explain how to
Chris@19 466 interpret the output format of such a transform.
Chris@19 467 @ctindex FFTW_MPI_TRANSPOSED_OUT
Chris@19 468 @ctindex FFTW_MPI_TRANSPOSED_IN
Chris@19 469
Chris@19 470
Chris@19 471 Suppose you are transforming multi-dimensional data with (at
Chris@19 472 least two) dimensions @ndims{}. As always, it is distributed along
Chris@19 473 the first dimension @dimk{0}. Now, if we compute its DFT with the
Chris@19 474 @code{FFTW_MPI_TRANSPOSED_OUT} flag, the resulting output data are stored
Chris@19 475 with the first @emph{two} dimensions transposed: @ndimstrans{},
Chris@19 476 distributed along the @dimk{1} dimension. Conversely, if we take the
Chris@19 477 @ndimstrans{} data and transform it with the
Chris@19 478 @code{FFTW_MPI_TRANSPOSED_IN} flag, then the format goes back to the
Chris@19 479 original @ndims{} array.
Chris@19 480
Chris@19 481 There are two ways to find the portion of the transposed array that
Chris@19 482 resides on the current process. First, you can simply call the
Chris@19 483 appropriate @samp{local_size} function, passing @ndimstrans{} (the
Chris@19 484 transposed dimensions). This would mean calling the @samp{local_size}
Chris@19 485 function twice, once for the transposed and once for the
Chris@19 486 non-transposed dimensions. Alternatively, you can call one of the
Chris@19 487 @samp{local_size_transposed} functions, which returns both the
Chris@19 488 non-transposed and transposed data distribution from a single call.
Chris@19 489 For example, for a 3d transform with transposed output (or input), you
Chris@19 490 might call:
Chris@19 491
Chris@19 492 @example
Chris@19 493 ptrdiff_t fftw_mpi_local_size_3d_transposed(
Chris@19 494 ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2, MPI_Comm comm,
Chris@19 495 ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
Chris@19 496 ptrdiff_t *local_n1, ptrdiff_t *local_1_start);
Chris@19 497 @end example
Chris@19 498 @findex fftw_mpi_local_size_3d_transposed
Chris@19 499
Chris@19 500 Here, @code{local_n0} and @code{local_0_start} give the size and
Chris@19 501 starting index of the @code{n0} dimension for the
Chris@19 502 @emph{non}-transposed data, as in the previous sections. For
Chris@19 503 @emph{transposed} data (e.g. the output for
Chris@19 504 @code{FFTW_MPI_TRANSPOSED_OUT}), @code{local_n1} and
Chris@19 505 @code{local_1_start} give the size and starting index of the @code{n1}
Chris@19 506 dimension, which is the first dimension of the transposed data
Chris@19 507 (@code{n1} by @code{n0} by @code{n2}).
Chris@19 508
Chris@19 509 (Note that @code{FFTW_MPI_TRANSPOSED_IN} is completely equivalent to
Chris@19 510 performing @code{FFTW_MPI_TRANSPOSED_OUT} and passing the first two
Chris@19 511 dimensions to the planner in reverse order, or vice versa. If you
Chris@19 512 pass @emph{both} the @code{FFTW_MPI_TRANSPOSED_IN} and
Chris@19 513 @code{FFTW_MPI_TRANSPOSED_OUT} flags, it is equivalent to swapping the
Chris@19 514 first two dimensions passed to the planner and passing @emph{neither}
Chris@19 515 flag.)
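
As a sketch, a 3d in-place complex DFT with transposed output (where
@code{n0}, @code{n1}, and @code{n2} are assumed to be defined
elsewhere) might be set up as follows:

@example
fftw_complex *data;
fftw_plan plan;
ptrdiff_t alloc_local, local_n0, local_0_start, local_n1, local_1_start;

alloc_local = fftw_mpi_local_size_3d_transposed(n0, n1, n2, MPI_COMM_WORLD,
                                                &local_n0, &local_0_start,
                                                &local_n1, &local_1_start);
data = fftw_alloc_complex(alloc_local);

plan = fftw_mpi_plan_dft_3d(n0, n1, n2, data, data, MPI_COMM_WORLD,
                            FFTW_FORWARD,
                            FFTW_MEASURE | FFTW_MPI_TRANSPOSED_OUT);

/* @r{input: a local_n0 x n1 x n2 slice starting at row local_0_start;}
   @r{output after fftw_execute(plan): a local_n1 x n0 x n2 slice of the}
   @r{transposed (n1 x n0 x n2) array, starting at row local_1_start} */
@end example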
Chris@19 516
Chris@19 517 @node One-dimensional distributions, , Transposed distributions, MPI Data Distribution
Chris@19 518 @subsection One-dimensional distributions
Chris@19 519
Chris@19 520 For one-dimensional distributed DFTs using FFTW, matters are slightly
Chris@19 521 more complicated because the data distribution is more closely tied to
Chris@19 522 how the algorithm works. In particular, you can no longer pass an
Chris@19 523 arbitrary block size and must accept FFTW's default, and the block
Chris@19 524 sizes may be different for input and output. Moreover, the data
Chris@19 525 distribution depends on the flags and transform direction, in order
Chris@19 526 for forward and backward transforms to work correctly.
Chris@19 527
Chris@19 528 @example
Chris@19 529 ptrdiff_t fftw_mpi_local_size_1d(ptrdiff_t n0, MPI_Comm comm,
Chris@19 530 int sign, unsigned flags,
Chris@19 531 ptrdiff_t *local_ni, ptrdiff_t *local_i_start,
Chris@19 532 ptrdiff_t *local_no, ptrdiff_t *local_o_start);
Chris@19 533 @end example
Chris@19 534 @findex fftw_mpi_local_size_1d
Chris@19 535
Chris@19 536 This function computes the data distribution for a 1d transform of
Chris@19 537 size @code{n0} with the given transform @code{sign} and @code{flags}.
Chris@19 538 Both input and output data use block distributions. The input on the
Chris@19 539 current process will consist of @code{local_ni} numbers starting at
Chris@19 540 index @code{local_i_start}; e.g. if only a single process is used,
Chris@19 541 then @code{local_ni} will be @code{n0} and @code{local_i_start} will
Chris@19 542 be @code{0}. Similarly for the output, with @code{local_no} numbers
Chris@19 543 starting at index @code{local_o_start}. The return value of
Chris@19 544 @code{fftw_mpi_local_size_1d} will be the total number of elements to
Chris@19 545 allocate on the current process (which might be slightly larger than
Chris@19 546 the local size due to intermediate steps in the algorithm).
Chris@19 547
Chris@19 548 As mentioned above (@pxref{Load balancing}), the data will be divided
Chris@19 549 equally among the processes if @code{n0} is divisible by the
Chris@19 550 @emph{square} of the number of processes. In this case,
Chris@19 551 @code{local_ni} will equal @code{local_no}. Otherwise, they may be
Chris@19 552 different.
Chris@19 553
Chris@19 554 For some applications, such as convolutions, the order of the output
Chris@19 555 data is irrelevant. In this case, performance can be improved by
Chris@19 556 specifying that the output data be stored in an FFTW-defined
Chris@19 557 ``scrambled'' format. (In particular, this is the analogue of
Chris@19 558 transposed output in the multidimensional case: scrambled output saves
Chris@19 559 a communications step.) If you pass @code{FFTW_MPI_SCRAMBLED_OUT} in
Chris@19 560 the flags, then the output is stored in this (undocumented) scrambled
Chris@19 561 order. Conversely, to perform the inverse transform of data in
Chris@19 562 scrambled order, pass the @code{FFTW_MPI_SCRAMBLED_IN} flag.
Chris@19 563 @ctindex FFTW_MPI_SCRAMBLED_OUT
Chris@19 564 @ctindex FFTW_MPI_SCRAMBLED_IN
Chris@19 565
Chris@19 566
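For example, a forward transform whose output will only be used
pointwise and then transformed back (as in a convolution) might be
planned roughly as follows; @code{n0} is assumed to be defined
elsewhere, and for brevity only the forward data distribution is
queried:

@example
fftw_complex *in, *out;
fftw_plan fwd, bwd;
ptrdiff_t alloc_local, local_ni, local_i_start, local_no, local_o_start;

alloc_local = fftw_mpi_local_size_1d(n0, MPI_COMM_WORLD,
                                     FFTW_FORWARD, FFTW_MPI_SCRAMBLED_OUT,
                                     &local_ni, &local_i_start,
                                     &local_no, &local_o_start);
in = fftw_alloc_complex(alloc_local);
out = fftw_alloc_complex(alloc_local);

/* @r{forward transform, leaving the output in scrambled order} */
fwd = fftw_mpi_plan_dft_1d(n0, in, out, MPI_COMM_WORLD, FFTW_FORWARD,
                           FFTW_MPI_SCRAMBLED_OUT | FFTW_MEASURE);
/* @r{inverse transform of the scrambled data, back to normal order} */
bwd = fftw_mpi_plan_dft_1d(n0, out, in, MPI_COMM_WORLD, FFTW_BACKWARD,
                           FFTW_MPI_SCRAMBLED_IN | FFTW_MEASURE);
@end example
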
Chris@19 567 In MPI FFTW, only composite sizes @code{n0} can be parallelized; we
Chris@19 568 have not yet implemented a parallel algorithm for large prime sizes.
Chris@19 569
Chris@19 570 @c ------------------------------------------------------------
Chris@19 571 @node Multi-dimensional MPI DFTs of Real Data, Other Multi-dimensional Real-data MPI Transforms, MPI Data Distribution, Distributed-memory FFTW with MPI
Chris@19 572 @section Multi-dimensional MPI DFTs of Real Data
Chris@19 573
Chris@19 574 FFTW's MPI interface also supports multi-dimensional DFTs of real
Chris@19 575 data, similar to the serial r2c and c2r interfaces. (Parallel
Chris@19 576 one-dimensional real-data DFTs are not currently supported; you must
Chris@19 577 use a complex transform and set the imaginary parts of the inputs to
Chris@19 578 zero.)
Chris@19 579
Chris@19 580 The key points to understand for r2c and c2r MPI transforms (compared
Chris@19 581 to the MPI complex DFTs or the serial r2c/c2r transforms), are:
Chris@19 582
Chris@19 583 @itemize @bullet
Chris@19 584
Chris@19 585 @item
Chris@19 586 Just as for serial transforms, r2c/c2r DFTs transform @ndims{} real
Chris@19 587 data to/from @ndimshalf{} complex data: the last dimension of the
Chris@19 588 complex data is cut in half (rounded down), plus one. As for the
Chris@19 589 serial transforms, the sizes you pass to the @samp{plan_dft_r2c} and
Chris@19 590 @samp{plan_dft_c2r} are the @ndims{} dimensions of the real data.
Chris@19 591
Chris@19 592 @item
Chris@19 593 @cindex padding
Chris@19 594 Although the real data is @emph{conceptually} @ndims{}, it is
Chris@19 595 @emph{physically} stored as an @ndimspad{} array, where the last
Chris@19 596 dimension has been @emph{padded} to make it the same size as the
Chris@19 597 complex output. This is much like the in-place serial r2c/c2r
Chris@19 598 interface (@pxref{Multi-Dimensional DFTs of Real Data}), except that
Chris@19 599 in MPI the padding is required even for out-of-place data. The extra
Chris@19 600 padding numbers are ignored by FFTW (they are @emph{not} like
Chris@19 601 zero-padding the transform to a larger size); they are only used to
Chris@19 602 determine the data layout.
Chris@19 603
Chris@19 604 @item
Chris@19 605 @cindex data distribution
Chris@19 606 The data distribution in MPI for @emph{both} the real and complex data
Chris@19 607 is determined by the shape of the @emph{complex} data. That is, you
Chris@19 608 call the appropriate @samp{local size} function for the @ndimshalf{}
Chris@19 609 complex data, and then use the @emph{same} distribution for the real
Chris@19 610 data except that the last complex dimension is replaced by a (padded)
Chris@19 611 real dimension of twice the length.
Chris@19 612
Chris@19 613 @end itemize
Chris@19 614
Chris@19 615 For example, suppose we are performing an out-of-place r2c transform of
Chris@19 616 @threedims{L,M,N} real data [padded to @threedims{L,M,2(N/2+1)}],
Chris@19 617 resulting in @threedims{L,M,N/2+1} complex data. Similar to the
Chris@19 618 example in @ref{2d MPI example}, we might do something like:
Chris@19 619
Chris@19 620 @example
Chris@19 621 #include <fftw3-mpi.h>
Chris@19 622
Chris@19 623 int main(int argc, char **argv)
Chris@19 624 @{
Chris@19 625 const ptrdiff_t L = ..., M = ..., N = ...;
Chris@19 626 fftw_plan plan;
Chris@19 627 double *rin;
Chris@19 628 fftw_complex *cout;
Chris@19 629 ptrdiff_t alloc_local, local_n0, local_0_start, i, j, k;
Chris@19 630
Chris@19 631 MPI_Init(&argc, &argv);
Chris@19 632 fftw_mpi_init();
Chris@19 633
Chris@19 634 /* @r{get local data size and allocate} */
Chris@19 635 alloc_local = fftw_mpi_local_size_3d(L, M, N/2+1, MPI_COMM_WORLD,
Chris@19 636 &local_n0, &local_0_start);
Chris@19 637 rin = fftw_alloc_real(2 * alloc_local);
Chris@19 638 cout = fftw_alloc_complex(alloc_local);
Chris@19 639
Chris@19 640 /* @r{create plan for out-of-place r2c DFT} */
Chris@19 641 plan = fftw_mpi_plan_dft_r2c_3d(L, M, N, rin, cout, MPI_COMM_WORLD,
Chris@19 642 FFTW_MEASURE);
Chris@19 643
Chris@19 644 /* @r{initialize rin to some function} my_func(x,y,z) */
Chris@19 645 for (i = 0; i < local_n0; ++i)
Chris@19 646 for (j = 0; j < M; ++j)
Chris@19 647 for (k = 0; k < N; ++k)
Chris@19 648 rin[(i*M + j) * (2*(N/2+1)) + k] = my_func(local_0_start+i, j, k);
Chris@19 649
Chris@19 650 /* @r{compute transforms as many times as desired} */
Chris@19 651 fftw_execute(plan);
Chris@19 652
Chris@19 653 fftw_destroy_plan(plan);
Chris@19 654
Chris@19 655 MPI_Finalize();
Chris@19 656 @}
Chris@19 657 @end example
Chris@19 658
Chris@19 659 @findex fftw_alloc_real
Chris@19 660 @cindex row-major
Chris@19 661 Note that we allocated @code{rin} using @code{fftw_alloc_real} with an
Chris@19 662 argument of @code{2 * alloc_local}: since @code{alloc_local} is the
Chris@19 663 number of @emph{complex} values to allocate, the number of @emph{real}
Chris@19 664 values is twice as many. The @code{rin} array is then
Chris@19 665 @threedims{local_n0,M,2(N/2+1)} in row-major order, so its
Chris@19 666 @code{(i,j,k)} element is at the index @code{(i*M + j) * (2*(N/2+1)) +
Chris@19 667 k} (@pxref{Multi-dimensional Array Format}).
Chris@19 668
Chris@19 669 @cindex transpose
Chris@19 670 @ctindex FFTW_MPI_TRANSPOSED_OUT
Chris@19 671 @ctindex FFTW_MPI_TRANSPOSED_IN
Chris@19 672 As for the complex transforms, improved performance can be obtained by
Chris@19 673 specifying that the output is the transpose of the input or vice versa
Chris@19 674 (@pxref{Transposed distributions}). In our @threedims{L,M,N} r2c
Chris@19 675 example, including @code{FFTW_MPI_TRANSPOSED_OUT} in the flags means that
Chris@19 676 the input would be a padded @threedims{L,M,2(N/2+1)} real array
Chris@19 677 distributed over the @code{L} dimension, while the output would be a
Chris@19 678 @threedims{M,L,N/2+1} complex array distributed over the @code{M}
Chris@19 679 dimension. To perform the inverse c2r transform with the same data
Chris@19 680 distributions, you would use the @code{FFTW_MPI_TRANSPOSED_IN} flag.
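
A sketch of the corresponding setup, reusing @code{L}, @code{M}, and
@code{N} from the example above, might be:

@example
double *rin;
fftw_complex *cout;
fftw_plan plan;
ptrdiff_t alloc_local, local_n0, local_0_start, local_n1, local_1_start;

/* @r{the distribution is determined by the complex dimensions L x M x (N/2+1)} */
alloc_local = fftw_mpi_local_size_3d_transposed(L, M, N/2+1, MPI_COMM_WORLD,
                                                &local_n0, &local_0_start,
                                                &local_n1, &local_1_start);
rin = fftw_alloc_real(2 * alloc_local);
cout = fftw_alloc_complex(alloc_local);

plan = fftw_mpi_plan_dft_r2c_3d(L, M, N, rin, cout, MPI_COMM_WORLD,
                                FFTW_MEASURE | FFTW_MPI_TRANSPOSED_OUT);

/* @r{input: a local_n0 x M x 2(N/2+1) padded real slice (row local_0_start);}
   @r{output: a local_n1 x L x (N/2+1) complex slice (row local_1_start)} */
@end example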
Chris@19 681
Chris@19 682 @c ------------------------------------------------------------
Chris@19 683 @node Other Multi-dimensional Real-data MPI Transforms, FFTW MPI Transposes, Multi-dimensional MPI DFTs of Real Data, Distributed-memory FFTW with MPI
Chris@19 684 @section Other Multi-dimensional Real-data MPI Transforms
Chris@19 685
Chris@19 686 @cindex r2r
Chris@19 687 FFTW's MPI interface also supports multi-dimensional @samp{r2r}
Chris@19 688 transforms of all kinds supported by the serial interface
Chris@19 689 (e.g. discrete cosine and sine transforms, discrete Hartley
Chris@19 690 transforms, etc.). Only multi-dimensional @samp{r2r} transforms, not
Chris@19 691 one-dimensional transforms, are currently parallelized.
Chris@19 692
Chris@19 693 @tindex fftw_r2r_kind
Chris@19 694 These are used much like the multidimensional complex DFTs discussed
Chris@19 695 above, except that the data is real rather than complex, and one needs
Chris@19 696 to pass an r2r transform kind (@code{fftw_r2r_kind}) for each
Chris@19 697 dimension as in the serial FFTW (@pxref{More DFTs of Real Data}).
Chris@19 698
Chris@19 699 For example, one might perform a two-dimensional @twodims{L,M} r2r transform
Chris@19 700 that is an REDFT10 (DCT-II) in the first dimension and an RODFT10 (DST-II) in
Chris@19 701 the second dimension with code like:
Chris@19 702
Chris@19 703 @example
Chris@19 704 const ptrdiff_t L = ..., M = ...;
Chris@19 705 fftw_plan plan;
Chris@19 706 double *data;
Chris@19 707 ptrdiff_t alloc_local, local_n0, local_0_start, i, j;
Chris@19 708
Chris@19 709 /* @r{get local data size and allocate} */
Chris@19 710 alloc_local = fftw_mpi_local_size_2d(L, M, MPI_COMM_WORLD,
Chris@19 711 &local_n0, &local_0_start);
Chris@19 712 data = fftw_alloc_real(alloc_local);
Chris@19 713
Chris@19 714 /* @r{create plan for in-place REDFT10 x RODFT10} */
Chris@19 715 plan = fftw_mpi_plan_r2r_2d(L, M, data, data, MPI_COMM_WORLD,
Chris@19 716 FFTW_REDFT10, FFTW_RODFT10, FFTW_MEASURE);
Chris@19 717
Chris@19 718 /* @r{initialize data to some function} my_function(x,y) */
Chris@19 719 for (i = 0; i < local_n0; ++i) for (j = 0; j < M; ++j)
Chris@19 720 data[i*M + j] = my_function(local_0_start + i, j);
Chris@19 721
Chris@19 722 /* @r{compute transforms, in-place, as many times as desired} */
Chris@19 723 fftw_execute(plan);
Chris@19 724
Chris@19 725 fftw_destroy_plan(plan);
Chris@19 726 @end example
Chris@19 727
Chris@19 728 @findex fftw_alloc_real
Chris@19 729 Notice that we use the same @samp{local_size} functions as we did for
Chris@19 730 complex data, only now we interpret the sizes in terms of real rather
Chris@19 731 than complex values, and correspondingly use @code{fftw_alloc_real}.
Chris@19 732
Chris@19 733 @c ------------------------------------------------------------
Chris@19 734 @node FFTW MPI Transposes, FFTW MPI Wisdom, Other Multi-dimensional Real-data MPI Transforms, Distributed-memory FFTW with MPI
Chris@19 735 @section FFTW MPI Transposes
Chris@19 736 @cindex transpose
Chris@19 737
Chris@19 738 FFTW's MPI Fourier transforms rely on one or more @emph{global
Chris@19 739 transposition} steps for their communications. For example, the
Chris@19 740 multidimensional transforms work by transforming along some
Chris@19 741 dimensions, then transposing to make the first dimension local and
Chris@19 742 transforming that, then transposing back. Because global
Chris@19 743 transposition of a block-distributed matrix has many other potential
Chris@19 744 uses besides FFTs, FFTW's transpose routines can be called directly,
Chris@19 745 as documented in this section.
Chris@19 746
Chris@19 747 @menu
Chris@19 748 * Basic distributed-transpose interface::
Chris@19 749 * Advanced distributed-transpose interface::
Chris@19 750 * An improved replacement for MPI_Alltoall::
Chris@19 751 @end menu
Chris@19 752
Chris@19 753 @node Basic distributed-transpose interface, Advanced distributed-transpose interface, FFTW MPI Transposes, FFTW MPI Transposes
Chris@19 754 @subsection Basic distributed-transpose interface
Chris@19 755
Chris@19 756 In particular, suppose that we have an @code{n0} by @code{n1} array in
Chris@19 757 row-major order, block-distributed across the @code{n0} dimension. To
Chris@19 758 transpose this into an @code{n1} by @code{n0} array block-distributed
Chris@19 759 across the @code{n1} dimension, we would create a plan by calling the
Chris@19 760 following function:
Chris@19 761
Chris@19 762 @example
Chris@19 763 fftw_plan fftw_mpi_plan_transpose(ptrdiff_t n0, ptrdiff_t n1,
Chris@19 764 double *in, double *out,
Chris@19 765 MPI_Comm comm, unsigned flags);
Chris@19 766 @end example
Chris@19 767 @findex fftw_mpi_plan_transpose
Chris@19 768
Chris@19 769 The input and output arrays (@code{in} and @code{out}) can be the
Chris@19 770 same. The transpose is actually executed by calling
Chris@19 771 @code{fftw_execute} on the plan, as usual.
Chris@19 772 @findex fftw_execute
Chris@19 773
Chris@19 774
Chris@19 775 The @code{flags} are the usual FFTW planner flags, but support
Chris@19 776 two additional flags: @code{FFTW_MPI_TRANSPOSED_OUT} and/or
Chris@19 777 @code{FFTW_MPI_TRANSPOSED_IN}. What these flags indicate, for
Chris@19 778 transpose plans, is that the output and/or input, respectively, are
Chris@19 779 @emph{locally} transposed. That is, on each process input data is
Chris@19 780 normally stored as a @code{local_n0} by @code{n1} array in row-major
Chris@19 781 order, but for an @code{FFTW_MPI_TRANSPOSED_IN} plan the input data is
Chris@19 782 stored as @code{n1} by @code{local_n0} in row-major order. Similarly,
Chris@19 783 @code{FFTW_MPI_TRANSPOSED_OUT} means that the output is @code{n0} by
Chris@19 784 @code{local_n1} instead of @code{local_n1} by @code{n0}.
Chris@19 785 @ctindex FFTW_MPI_TRANSPOSED_OUT
Chris@19 786 @ctindex FFTW_MPI_TRANSPOSED_IN
Chris@19 787
Chris@19 788
Chris@19 789 To determine the local size of the array on each process before and
Chris@19 790 after the transpose, as well as the amount of storage that must be
Chris@19 791 allocated, one should call @code{fftw_mpi_local_size_2d_transposed},
Chris@19 792 just as for a 2d DFT as described in the previous section:
Chris@19 793 @cindex data distribution
Chris@19 794
Chris@19 795 @example
Chris@19 796 ptrdiff_t fftw_mpi_local_size_2d_transposed
Chris@19 797 (ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
Chris@19 798 ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
Chris@19 799 ptrdiff_t *local_n1, ptrdiff_t *local_1_start);
Chris@19 800 @end example
Chris@19 801 @findex fftw_mpi_local_size_2d_transposed
Chris@19 802
Chris@19 803 Again, the return value is the local storage to allocate, which in
Chris@19 804 this case is the number of @emph{real} (@code{double}) values rather
Chris@19 805 than complex numbers as in the previous examples.
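
Putting this together, a sketch of an out-of-place transpose (with
@code{n0} and @code{n1} assumed to be defined elsewhere) might look
like:

@example
double *in, *out;
fftw_plan plan;
ptrdiff_t alloc_local, local_n0, local_0_start, local_n1, local_1_start;

alloc_local = fftw_mpi_local_size_2d_transposed(n0, n1, MPI_COMM_WORLD,
                                                &local_n0, &local_0_start,
                                                &local_n1, &local_1_start);
in = fftw_alloc_real(alloc_local);
out = fftw_alloc_real(alloc_local);

plan = fftw_mpi_plan_transpose(n0, n1, in, out, MPI_COMM_WORLD, FFTW_MEASURE);

/* @r{... fill the local_n0 x n1 input slice starting at row local_0_start ...} */

fftw_execute(plan);  /* @r{out now holds a local_n1 x n0 slice of the transpose} */

fftw_destroy_plan(plan);
@end example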
Chris@19 806
Chris@19 807 @node Advanced distributed-transpose interface, An improved replacement for MPI_Alltoall, Basic distributed-transpose interface, FFTW MPI Transposes
Chris@19 808 @subsection Advanced distributed-transpose interface
Chris@19 809
Chris@19 810 The above routines are for a transpose of a matrix of numbers (of type
Chris@19 811 @code{double}), using FFTW's default block sizes. More generally, one
Chris@19 812 can perform transposes of @emph{tuples} of numbers, with
Chris@19 813 user-specified block sizes for the input and output:
Chris@19 814
Chris@19 815 @example
Chris@19 816 fftw_plan fftw_mpi_plan_many_transpose
Chris@19 817 (ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t howmany,
Chris@19 818 ptrdiff_t block0, ptrdiff_t block1,
Chris@19 819 double *in, double *out, MPI_Comm comm, unsigned flags);
Chris@19 820 @end example
Chris@19 821 @findex fftw_mpi_plan_many_transpose
Chris@19 822
Chris@19 823 In this case, one is transposing an @code{n0} by @code{n1} matrix of
Chris@19 824 @code{howmany}-tuples (e.g. @code{howmany = 2} for complex numbers).
Chris@19 825 The input is distributed along the @code{n0} dimension with block size
Chris@19 826 @code{block0}, and the @code{n1} by @code{n0} output is distributed
Chris@19 827 along the @code{n1} dimension with block size @code{block1}. If
Chris@19 828 @code{FFTW_MPI_DEFAULT_BLOCK} (0) is passed for a block size then FFTW
Chris@19 829 uses its default block size. To get the local size of the data on
Chris@19 830 each process, you should then call @code{fftw_mpi_local_size_many_transposed}.
Chris@19 831 @ctindex FFTW_MPI_DEFAULT_BLOCK
Chris@19 832 @findex fftw_mpi_local_size_many_transposed
Chris@19 833
Chris@19 834 @node An improved replacement for MPI_Alltoall, , Advanced distributed-transpose interface, FFTW MPI Transposes
Chris@19 835 @subsection An improved replacement for MPI_Alltoall
Chris@19 836
Chris@19 837 We close this section by noting that FFTW's MPI transpose routines can
Chris@19 838 be thought of as a generalization for the @code{MPI_Alltoall} function
Chris@19 839 (albeit only for floating-point types), and in some circumstances can
Chris@19 840 function as an improved replacement.
Chris@19 841 @findex MPI_Alltoall
Chris@19 842
Chris@19 843
Chris@19 844 @code{MPI_Alltoall} is defined by the MPI standard as:
Chris@19 845
Chris@19 846 @example
Chris@19 847 int MPI_Alltoall(void *sendbuf, int sendcount, MPI_Datatype sendtype,
Chris@19 848 void *recvbuf, int recvcnt, MPI_Datatype recvtype,
Chris@19 849 MPI_Comm comm);
Chris@19 850 @end example
Chris@19 851
Chris@19 852 In particular, for @code{double*} arrays @code{in} and @code{out},
Chris@19 853 consider the call:
Chris@19 854
Chris@19 855 @example
Chris@19 856 MPI_Alltoall(in, howmany, MPI_DOUBLE, out, howmany, MPI_DOUBLE, comm);
Chris@19 857 @end example
Chris@19 858
Chris@19 859 This is completely equivalent to:
Chris@19 860
Chris@19 861 @example
Chris@19 862 MPI_Comm_size(comm, &P);
Chris@19 863 plan = fftw_mpi_plan_many_transpose(P, P, howmany, 1, 1, in, out, comm, FFTW_ESTIMATE);
Chris@19 864 fftw_execute(plan);
Chris@19 865 fftw_destroy_plan(plan);
Chris@19 866 @end example
Chris@19 867
Chris@19 868 That is, computing a @twodims{P,P} transpose on @code{P} processes,
Chris@19 869 with a block size of 1, is just a standard all-to-all communication.
Chris@19 870
Chris@19 871 However, using the FFTW routine instead of @code{MPI_Alltoall} may
Chris@19 872 have certain advantages. First of all, FFTW's routine can operate
Chris@19 873 in-place (@code{in == out}) whereas @code{MPI_Alltoall} can only
Chris@19 874 operate out-of-place.
Chris@19 875 @cindex in-place
Chris@19 876
Chris@19 877
Chris@19 878 Second, even for out-of-place plans, FFTW's routine may be faster,
Chris@19 879 especially if you need to perform the all-to-all communication many
Chris@19 880 times and can afford to use @code{FFTW_MEASURE} or
Chris@19 881 @code{FFTW_PATIENT}. It should certainly be no slower, not including
Chris@19 882 the time to create the plan, since one of the possible algorithms that
Chris@19 883 FFTW uses for an out-of-place transpose @emph{is} simply to call
Chris@19 884 @code{MPI_Alltoall}. However, FFTW also considers several other
Chris@19 885 possible algorithms that, depending on your MPI implementation and
Chris@19 886 your hardware, may be faster.
Chris@19 887 @ctindex FFTW_MEASURE
Chris@19 888 @ctindex FFTW_PATIENT
Chris@19 889
Chris@19 890 @c ------------------------------------------------------------
Chris@19 891 @node FFTW MPI Wisdom, Avoiding MPI Deadlocks, FFTW MPI Transposes, Distributed-memory FFTW with MPI
Chris@19 892 @section FFTW MPI Wisdom
Chris@19 893 @cindex wisdom
Chris@19 894 @cindex saving plans to disk
Chris@19 895
Chris@19 896 FFTW's ``wisdom'' facility (@pxref{Words of Wisdom-Saving Plans}) can
Chris@19 897 be used to save MPI plans as well as to save uniprocessor plans.
Chris@19 898 However, for MPI there are several unavoidable complications.
Chris@19 899
Chris@19 900 @cindex MPI I/O
Chris@19 901 First, the MPI standard does not guarantee that every process can
Chris@19 902 perform file I/O (at least, not using C stdio routines)---in general,
Chris@19 903 we may only assume that process 0 is capable of I/O.@footnote{In fact,
Chris@19 904 even this assumption is not technically guaranteed by the standard,
Chris@19 905 although it seems to be universal in actual MPI implementations and is
Chris@19 906 widely assumed by MPI-using software. Technically, you need to query
Chris@19 907 the @code{MPI_IO} attribute of @code{MPI_COMM_WORLD} with
Chris@19 908 @code{MPI_Attr_get}. If this attribute is @code{MPI_PROC_NULL}, no
Chris@19 909 I/O is possible. If it is @code{MPI_ANY_SOURCE}, any process can
Chris@19 910 perform I/O. Otherwise, it is the rank of a process that can perform
Chris@19 911 I/O ... but since it is not guaranteed to yield the @emph{same} rank
Chris@19 912 on all processes, you have to do an @code{MPI_Allreduce} of some kind
Chris@19 913 if you want all processes to agree about which is going to do I/O.
Chris@19 914 And even then, the standard only guarantees that this process can
Chris@19 915 perform output, but not input. See e.g. @cite{Parallel Programming
Chris@19 916 with MPI} by P. S. Pacheco, section 8.1.3. Needless to say, in our
Chris@19 917 experience virtually no MPI programmers worry about this.} So, if we
Chris@19 918 want to export the wisdom from a single process to a file, we must
Chris@19 919 first export the wisdom to a string, then send it to process 0, then
Chris@19 920 write it to a file.
Chris@19 921
Chris@19 922 Second, in principle we may want to have separate wisdom for every
Chris@19 923 process, since in general the processes may run on different hardware
Chris@19 924 even for a single MPI program. However, in practice FFTW's MPI code
Chris@19 925 is designed for the case of homogeneous hardware (@pxref{Load
Chris@19 926 balancing}), and in this case it is convenient to use the same wisdom
Chris@19 927 for every process. Thus, we need a mechanism to synchronize the wisdom.
Chris@19 928
Chris@19 929 To address both of these problems, FFTW provides the following two
Chris@19 930 functions:
Chris@19 931
Chris@19 932 @example
Chris@19 933 void fftw_mpi_broadcast_wisdom(MPI_Comm comm);
Chris@19 934 void fftw_mpi_gather_wisdom(MPI_Comm comm);
Chris@19 935 @end example
Chris@19 936 @findex fftw_mpi_gather_wisdom
Chris@19 937 @findex fftw_mpi_broadcast_wisdom
Chris@19 938
Chris@19 939 Given a communicator @code{comm}, @code{fftw_mpi_broadcast_wisdom}
Chris@19 940 will broadcast the wisdom from process 0 to all other processes.
Chris@19 941 Conversely, @code{fftw_mpi_gather_wisdom} will collect wisdom from all
Chris@19 942 processes onto process 0. (If the plans created for the same problem
Chris@19 943 by different processes are not the same, @code{fftw_mpi_gather_wisdom}
Chris@19 944 will arbitrarily choose one of the plans.) Both of these functions
Chris@19 945 may result in suboptimal plans for different processes if the
Chris@19 946 processes are running on non-identical hardware. Both of these
Chris@19 947 functions are @emph{collective} calls, which means that they must be
Chris@19 948 executed by all processes in the communicator.
Chris@19 949 @cindex collective function
Chris@19 950
Chris@19 951
Chris@19 952 So, for example, a typical code snippet to import wisdom from a file
Chris@19 953 and use it on all processes would be:
Chris@19 954
Chris@19 955 @example
Chris@19 956 @{
Chris@19 957 int rank;
Chris@19 958
Chris@19 959 fftw_mpi_init();
Chris@19 960 MPI_Comm_rank(MPI_COMM_WORLD, &rank);
Chris@19 961 if (rank == 0) fftw_import_wisdom_from_filename("mywisdom");
Chris@19 962 fftw_mpi_broadcast_wisdom(MPI_COMM_WORLD);
Chris@19 963 @}
Chris@19 964 @end example
Chris@19 965
Chris@19 966 (Note that we must call @code{fftw_mpi_init} before importing any
Chris@19 967 wisdom that might contain MPI plans.) Similarly, a typical code
Chris@19 968 snippet to export wisdom from all processes to a file is:
Chris@19 969 @findex fftw_mpi_init
Chris@19 970
Chris@19 971 @example
Chris@19 972 @{
Chris@19 973 int rank;
Chris@19 974
Chris@19 975 fftw_mpi_gather_wisdom(MPI_COMM_WORLD);
Chris@19 976 MPI_Comm_rank(MPI_COMM_WORLD, &rank);
Chris@19 977 if (rank == 0) fftw_export_wisdom_to_filename("mywisdom");
Chris@19 978 @}
Chris@19 979 @end example
Chris@19 980
Chris@19 981 @c ------------------------------------------------------------
Chris@19 982 @node Avoiding MPI Deadlocks, FFTW MPI Performance Tips, FFTW MPI Wisdom, Distributed-memory FFTW with MPI
Chris@19 983 @section Avoiding MPI Deadlocks
Chris@19 984 @cindex deadlock
Chris@19 985
Chris@19 986 An MPI program can @emph{deadlock} if one process is waiting for a
Chris@19 987 message from another process that never gets sent. To avoid deadlocks
Chris@19 988 when using FFTW's MPI routines, it is important to know which
Chris@19 989 functions are @emph{collective}: that is, which functions must
Chris@19 990 @emph{always} be called in the @emph{same order} from @emph{every}
Chris@19 991 process in a given communicator. (For example, @code{MPI_Barrier} is
Chris@19 992 the canonical example of a collective function in the MPI standard.)
Chris@19 993 @cindex collective function
Chris@19 994 @findex MPI_Barrier
Chris@19 995
Chris@19 996
Chris@19 997 The functions in FFTW that are @emph{always} collective are: every
Chris@19 998 function beginning with @samp{fftw_mpi_plan}, as well as
Chris@19 999 @code{fftw_mpi_broadcast_wisdom} and @code{fftw_mpi_gather_wisdom}.
Chris@19 1000 Also, the following functions from the ordinary FFTW interface are
Chris@19 1001 collective when they are applied to a plan created by an
Chris@19 1002 @samp{fftw_mpi_plan} function: @code{fftw_execute},
Chris@19 1003 @code{fftw_destroy_plan}, and @code{fftw_flops}.
Chris@19 1004 @findex fftw_execute
Chris@19 1005 @findex fftw_destroy_plan
Chris@19 1006 @findex fftw_flops
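
For illustration, here is a minimal sketch (with a hypothetical plan
created over @code{MPI_COMM_WORLD}; not an example from elsewhere in
this manual) of the kind of rank-dependent branch that causes such a
deadlock, and of the correct collective usage:

@example
/* hypothetical sketch: `plan' is an MPI plan created on MPI_COMM_WORLD */
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

/* WRONG: only rank 0 would execute; the other processes would block
   inside the collective communication of fftw_execute and the
   program would deadlock: */
/* if (rank == 0) fftw_execute(plan); */

/* RIGHT: every process in the communicator makes the collective
   calls, in the same order: */
fftw_execute(plan);
fftw_destroy_plan(plan);
@end example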
Chris@19 1007
Chris@19 1008 @c ------------------------------------------------------------
Chris@19 1009 @node FFTW MPI Performance Tips, Combining MPI and Threads, Avoiding MPI Deadlocks, Distributed-memory FFTW with MPI
Chris@19 1010 @section FFTW MPI Performance Tips
Chris@19 1011
Chris@19 1012 In this section, we collect a few tips on getting the best performance
Chris@19 1013 out of FFTW's MPI transforms.
Chris@19 1014
Chris@19 1015 First, because of the 1d block distribution, FFTW's parallelization is
Chris@19 1016 currently limited by the size of the first dimension.
Chris@19 1017 (Multidimensional block distributions may be supported by a future
Chris@19 1018 version.) More generally, you should ideally arrange the dimensions so
Chris@19 1019 that FFTW can divide them equally among the processes. @xref{Load
Chris@19 1020 balancing}.
Chris@19 1021 @cindex block distribution
Chris@19 1022 @cindex load balancing
Chris@19 1023
Chris@19 1024
Chris@19 1025 Second, if it is not too inconvenient, you should consider working
Chris@19 1026 with transposed output for multidimensional plans, as this saves a
Chris@19 1027 considerable amount of communications. @xref{Transposed distributions}.
Chris@19 1028 @cindex transpose
Chris@19 1029
Chris@19 1030
Chris@19 1031 Third, the fastest choices are generally either an in-place transform
Chris@19 1032 or an out-of-place transform with the @code{FFTW_DESTROY_INPUT} flag
Chris@19 1033 (which allows the input array to be used as scratch space). In-place
Chris@19 1034 is especially beneficial if the amount of data per process is large.
Chris@19 1035 @ctindex FFTW_DESTROY_INPUT
Chris@19 1036
Chris@19 1037
Chris@19 1038 Fourth, if you have multiple arrays to transform at once, rather than
Chris@19 1039 calling FFTW's MPI transforms several times, it usually seems to be
Chris@19 1040 faster to interleave the data and use the advanced interface. (This
Chris@19 1041 groups the communications together instead of requiring separate
Chris@19 1042 messages for each transform.)
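
For illustration, a minimal sketch (with hypothetical sizes; the
advanced routines used here are documented in @ref{FFTW MPI
Reference}) of transforming four interleaved 2d arrays with a single
plan:

@example
ptrdiff_t n[2] = @{256, 256@};    /* hypothetical transform size */
ptrdiff_t howmany = 4;           /* four interleaved arrays */
ptrdiff_t alloc_local, local_n0, local_0_start;
fftw_complex *data;
fftw_plan plan;

/* local storage for howmany interleaved 2d transforms */
alloc_local = fftw_mpi_local_size_many(2, n, howmany,
                                       FFTW_MPI_DEFAULT_BLOCK, MPI_COMM_WORLD,
                                       &local_n0, &local_0_start);
data = fftw_alloc_complex(alloc_local);

/* one plan transforms all howmany tuples at once, so the
   communications for the different arrays are grouped together */
plan = fftw_mpi_plan_many_dft(2, n, howmany,
                              FFTW_MPI_DEFAULT_BLOCK, FFTW_MPI_DEFAULT_BLOCK,
                              data, data, MPI_COMM_WORLD,
                              FFTW_FORWARD, FFTW_MEASURE);
@end example

The same @code{howmany} and block-size arguments must be passed to
both the @samp{local_size} call and the planner.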
Chris@19 1043
Chris@19 1044 @c ------------------------------------------------------------
Chris@19 1045 @node Combining MPI and Threads, FFTW MPI Reference, FFTW MPI Performance Tips, Distributed-memory FFTW with MPI
Chris@19 1046 @section Combining MPI and Threads
Chris@19 1047 @cindex threads
Chris@19 1048
Chris@19 1049 In certain cases, it may be advantageous to combine MPI
Chris@19 1050 (distributed-memory) and threads (shared-memory) parallelization.
Chris@19 1051 FFTW supports this, with certain caveats. For example, if you have a
Chris@19 1052 cluster of 4-processor shared-memory nodes, you may want to use
Chris@19 1053 threads within the nodes and MPI between the nodes, instead of MPI for
Chris@19 1054 all parallelization.
Chris@19 1055
Chris@19 1056 In particular, it is possible to seamlessly combine the MPI FFTW
Chris@19 1057 routines with the multi-threaded FFTW routines (@pxref{Multi-threaded
Chris@19 1058 FFTW}). However, some care must be taken in the initialization code,
Chris@19 1059 which should look something like this:
Chris@19 1060
Chris@19 1061 @example
Chris@19 1062 int threads_ok;
Chris@19 1063
Chris@19 1064 int main(int argc, char **argv)
Chris@19 1065 @{
Chris@19 1066 int provided;
Chris@19 1067 MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
Chris@19 1068 threads_ok = provided >= MPI_THREAD_FUNNELED;
Chris@19 1069
Chris@19 1070 if (threads_ok) threads_ok = fftw_init_threads();
Chris@19 1071 fftw_mpi_init();
Chris@19 1072
Chris@19 1073 ...
Chris@19 1074 if (threads_ok) fftw_plan_with_nthreads(...);
Chris@19 1075 ...
Chris@19 1076
Chris@19 1077 MPI_Finalize();
Chris@19 1078 @}
Chris@19 1079 @end example
Chris@19 1080 @findex fftw_mpi_init
Chris@19 1081 @findex fftw_init_threads
Chris@19 1082 @findex fftw_plan_with_nthreads
Chris@19 1083
Chris@19 1084 First, note that instead of calling @code{MPI_Init}, you should call
Chris@19 1085 @code{MPI_Init_thread}, which is the initialization routine defined
Chris@19 1086 by the MPI-2 standard to indicate to MPI that your program will be
Chris@19 1087 multithreaded. We pass @code{MPI_THREAD_FUNNELED}, which indicates
Chris@19 1088 that we will only call MPI routines from the main thread. (FFTW will
Chris@19 1089 launch additional threads internally, but the extra threads will not
Chris@19 1090 call MPI code.) (You may also pass @code{MPI_THREAD_SERIALIZED} or
Chris@19 1091 @code{MPI_THREAD_MULTIPLE}, which requests additional multithreading
Chris@19 1092 support from the MPI implementation, but this is not required by
Chris@19 1093 FFTW.) The @code{provided} parameter returns what level of threads
Chris@19 1094 support is actually supported by your MPI implementation; this
Chris@19 1095 @emph{must} be at least @code{MPI_THREAD_FUNNELED} if you want to call
Chris@19 1096 the FFTW threads routines, so we define a global variable
Chris@19 1097 @code{threads_ok} to record this. You should only call
Chris@19 1098 @code{fftw_init_threads} or @code{fftw_plan_with_nthreads} if
Chris@19 1099 @code{threads_ok} is true. For more information on thread safety in
Chris@19 1100 MPI, see the
Chris@19 1101 @uref{http://www.mpi-forum.org/docs/mpi-20-html/node162.htm, MPI and
Chris@19 1102 Threads} section of the MPI-2 standard.
Chris@19 1103 @cindex thread safety
Chris@19 1104
Chris@19 1105
Chris@19 1106 Second, we must call @code{fftw_init_threads} @emph{before}
Chris@19 1107 @code{fftw_mpi_init}. This is critical for technical reasons having
Chris@19 1108 to do with how FFTW initializes its list of algorithms.
Chris@19 1109
Chris@19 1110 Then, if you call @code{fftw_plan_with_nthreads(N)}, @emph{every} MPI
Chris@19 1111 process will launch (up to) @code{N} threads to parallelize its transforms.
Chris@19 1112
Chris@19 1113 For example, in the hypothetical cluster of 4-processor nodes, you
Chris@19 1114 might wish to launch only a single MPI process per node, and then call
Chris@19 1115 @code{fftw_plan_with_nthreads(4)} on each process to use all
Chris@19 1116 processors in the nodes.
Chris@19 1117
Chris@19 1118 This may or may not be faster than simply using as many MPI processes
Chris@19 1119 as you have processors, however. On the one hand, using threads
Chris@19 1120 within a node eliminates the need for explicit message passing within
Chris@19 1121 the node. On the other hand, FFTW's transpose routines are not
Chris@19 1122 multi-threaded, and this means that the communications that do take
Chris@19 1123 place will not benefit from parallelization within the node.
Chris@19 1124 Moreover, many MPI implementations already have optimizations to
Chris@19 1125 exploit shared memory when it is available, so adding the
Chris@19 1126 multithreaded FFTW on top of this may be superfluous.
Chris@19 1127 @cindex transpose
Chris@19 1128
Chris@19 1129 @c ------------------------------------------------------------
Chris@19 1130 @node FFTW MPI Reference, FFTW MPI Fortran Interface, Combining MPI and Threads, Distributed-memory FFTW with MPI
Chris@19 1131 @section FFTW MPI Reference
Chris@19 1132
Chris@19 1133 This chapter provides a complete reference to all FFTW MPI functions,
Chris@19 1134 datatypes, and constants. See also @ref{FFTW Reference} for information
Chris@19 1135 on functions and types in common with the serial interface.
Chris@19 1136
Chris@19 1137 @menu
Chris@19 1138 * MPI Files and Data Types::
Chris@19 1139 * MPI Initialization::
Chris@19 1140 * Using MPI Plans::
Chris@19 1141 * MPI Data Distribution Functions::
Chris@19 1142 * MPI Plan Creation::
Chris@19 1143 * MPI Wisdom Communication::
Chris@19 1144 @end menu
Chris@19 1145
Chris@19 1146 @node MPI Files and Data Types, MPI Initialization, FFTW MPI Reference, FFTW MPI Reference
Chris@19 1147 @subsection MPI Files and Data Types
Chris@19 1148
Chris@19 1149 All programs using FFTW's MPI support should include its header file:
Chris@19 1150
Chris@19 1151 @example
Chris@19 1152 #include <fftw3-mpi.h>
Chris@19 1153 @end example
Chris@19 1154
Chris@19 1155 Note that this header file includes the serial-FFTW @code{fftw3.h}
Chris@19 1156 header file, and also the @code{mpi.h} header file for MPI, so you
Chris@19 1157 need not include those files separately.
Chris@19 1158
Chris@19 1159 You must also link to @emph{both} the FFTW MPI library and to the
Chris@19 1160 serial FFTW library. On Unix, this means adding @code{-lfftw3_mpi
Chris@19 1161 -lfftw3 -lm} at the end of the link command.
Chris@19 1162
Chris@19 1163 @cindex precision
Chris@19 1164 Different precisions are handled as in the serial interface:
Chris@19 1165 @xref{Precision}. That is, @samp{fftw_} functions become
Chris@19 1166 @code{fftwf_} (in single precision) etcetera, and the libraries become
Chris@19 1167 @code{-lfftw3f_mpi -lfftw3f -lm} etcetera on Unix. Long-double
Chris@19 1168 precision is supported in MPI, but quad precision (@samp{fftwq_}) is
Chris@19 1169 not due to the lack of MPI support for this type.
Chris@19 1170
Chris@19 1171 @node MPI Initialization, Using MPI Plans, MPI Files and Data Types, FFTW MPI Reference
Chris@19 1172 @subsection MPI Initialization
Chris@19 1173
Chris@19 1174 Before calling any other FFTW MPI (@samp{fftw_mpi_}) function, and
Chris@19 1175 before importing any wisdom for MPI problems, you must call:
Chris@19 1176
Chris@19 1177 @findex fftw_mpi_init
Chris@19 1178 @example
Chris@19 1179 void fftw_mpi_init(void);
Chris@19 1180 @end example
Chris@19 1181
Chris@19 1182 @findex fftw_init_threads
Chris@19 1183 If FFTW threads support is used, however, @code{fftw_mpi_init} should
Chris@19 1184 be called @emph{after} @code{fftw_init_threads} (@pxref{Combining MPI
Chris@19 1185 and Threads}). Calling @code{fftw_mpi_init} additional times (before
Chris@19 1186 @code{fftw_mpi_cleanup}) has no effect.
Chris@19 1187
Chris@19 1188
Chris@19 1189 If you want to deallocate all persistent data and reset FFTW to the
Chris@19 1190 pristine state it was in when you started your program, you can call:
Chris@19 1191
Chris@19 1192 @findex fftw_mpi_cleanup
Chris@19 1193 @example
Chris@19 1194 void fftw_mpi_cleanup(void);
Chris@19 1195 @end example
Chris@19 1196
Chris@19 1197 @findex fftw_cleanup
Chris@19 1198 (This calls @code{fftw_cleanup}, so you need not call the serial
Chris@19 1199 cleanup routine too, although it is safe to do so.) After calling
Chris@19 1200 @code{fftw_mpi_cleanup}, all existing plans become undefined, and you
Chris@19 1201 should not attempt to execute or destroy them. You must call
Chris@19 1202 @code{fftw_mpi_init} again after @code{fftw_mpi_cleanup} if you want
Chris@19 1203 to resume using the MPI FFTW routines.
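
For illustration, a minimal program skeleton (a sketch, not a complete
program) that respects this ordering is:

@example
#include <fftw3-mpi.h>

int main(int argc, char **argv)
@{
     MPI_Init(&argc, &argv);
     fftw_mpi_init();     /* before any fftw_mpi_ call or MPI wisdom import */

     /* ... create, execute, and destroy MPI plans ... */

     fftw_mpi_cleanup();  /* all existing plans become undefined */
     MPI_Finalize();
     return 0;
@}
@end example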
Chris@19 1204
Chris@19 1205 @node Using MPI Plans, MPI Data Distribution Functions, MPI Initialization, FFTW MPI Reference
Chris@19 1206 @subsection Using MPI Plans
Chris@19 1207
Chris@19 1208 Once an MPI plan is created, you can execute and destroy it using
Chris@19 1209 @code{fftw_execute}, @code{fftw_destroy_plan}, and the other functions
Chris@19 1210 in the serial interface that operate on generic plans (@pxref{Using
Chris@19 1211 Plans}).
Chris@19 1212
Chris@19 1213 @cindex collective function
Chris@19 1214 @cindex MPI communicator
Chris@19 1215 The @code{fftw_execute} and @code{fftw_destroy_plan} functions, applied to
Chris@19 1216 MPI plans, are @emph{collective} calls: they must be called for all processes
Chris@19 1217 in the communicator that was used to create the plan.
Chris@19 1218
Chris@19 1219 @cindex new-array execution
Chris@19 1220 You must @emph{not} use the serial new-array plan-execution functions
Chris@19 1221 @code{fftw_execute_dft} and so on (@pxref{New-array Execute
Chris@19 1222 Functions}) with MPI plans. Such functions are specialized to the
Chris@19 1223 problem type, and there are specific new-array execute functions for MPI plans:
Chris@19 1224
Chris@19 1225 @findex fftw_mpi_execute_dft
Chris@19 1226 @findex fftw_mpi_execute_dft_r2c
Chris@19 1227 @findex fftw_mpi_execute_dft_c2r
Chris@19 1228 @findex fftw_mpi_execute_r2r
Chris@19 1229 @example
Chris@19 1230 void fftw_mpi_execute_dft(fftw_plan p, fftw_complex *in, fftw_complex *out);
Chris@19 1231 void fftw_mpi_execute_dft_r2c(fftw_plan p, double *in, fftw_complex *out);
Chris@19 1232 void fftw_mpi_execute_dft_c2r(fftw_plan p, fftw_complex *in, double *out);
Chris@19 1233 void fftw_mpi_execute_r2r(fftw_plan p, double *in, double *out);
Chris@19 1234 @end example
Chris@19 1235
Chris@19 1236 @cindex alignment
Chris@19 1237 @findex fftw_malloc
Chris@19 1238 These functions have the same restrictions as those of the serial
Chris@19 1239 new-array execute functions. They are @emph{always} safe to apply to
Chris@19 1240 the @emph{same} @code{in} and @code{out} arrays that were used to
Chris@19 1241 create the plan. They can only be applied to new arrays if those
Chris@19 1242 arrays have the same types, dimensions, in-placeness, and alignment as
Chris@19 1243 the original arrays, where the best way to ensure the same alignment
Chris@19 1244 is to use FFTW's @code{fftw_malloc} and related allocation functions
Chris@19 1245 for all arrays (@pxref{Memory Allocation}). Note that distributed
Chris@19 1246 transposes (@pxref{FFTW MPI Transposes}) use
Chris@19 1247 @code{fftw_mpi_execute_r2r}, since they count as rank-zero r2r plans
Chris@19 1248 from FFTW's perspective.
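
For example, here is a minimal sketch (assuming @code{plan} was
created out-of-place for arrays of @code{alloc_local} complex numbers,
where @code{alloc_local} came from a @samp{local_size} call) of
executing the same plan on a second pair of arrays:

@example
/* hypothetical: plan was created out-of-place for arrays of
   alloc_local complex numbers each */
fftw_complex *in2  = fftw_alloc_complex(alloc_local);
fftw_complex *out2 = fftw_alloc_complex(alloc_local);

/* ... fill in2 with this process's portion of the new input ... */

/* same types, dimensions, in-placeness, and alignment as the
   original arrays, so the new-array execution is permitted */
fftw_mpi_execute_dft(plan, in2, out2);
@end example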
Chris@19 1249
Chris@19 1250 @node MPI Data Distribution Functions, MPI Plan Creation, Using MPI Plans, FFTW MPI Reference
Chris@19 1251 @subsection MPI Data Distribution Functions
Chris@19 1252
Chris@19 1253 @cindex data distribution
Chris@19 1254 As described above (@pxref{MPI Data Distribution}), in order to
Chris@19 1255 allocate your arrays, @emph{before} creating a plan, you must first
Chris@19 1256 call one of the following routines to determine the required
Chris@19 1257 allocation size and the portion of the array locally stored on a given
Chris@19 1258 process. The @code{MPI_Comm} communicator passed here must be
Chris@19 1259 equivalent to the communicator used below for plan creation.
Chris@19 1260
Chris@19 1261 The basic interface for multidimensional transforms consists of the
Chris@19 1262 functions:
Chris@19 1263
Chris@19 1264 @findex fftw_mpi_local_size_2d
Chris@19 1265 @findex fftw_mpi_local_size_3d
Chris@19 1266 @findex fftw_mpi_local_size
Chris@19 1267 @findex fftw_mpi_local_size_2d_transposed
Chris@19 1268 @findex fftw_mpi_local_size_3d_transposed
Chris@19 1269 @findex fftw_mpi_local_size_transposed
Chris@19 1270 @example
Chris@19 1271 ptrdiff_t fftw_mpi_local_size_2d(ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
Chris@19 1272 ptrdiff_t *local_n0, ptrdiff_t *local_0_start);
Chris@19 1273 ptrdiff_t fftw_mpi_local_size_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2,
Chris@19 1274 MPI_Comm comm,
Chris@19 1275 ptrdiff_t *local_n0, ptrdiff_t *local_0_start);
Chris@19 1276 ptrdiff_t fftw_mpi_local_size(int rnk, const ptrdiff_t *n, MPI_Comm comm,
Chris@19 1277 ptrdiff_t *local_n0, ptrdiff_t *local_0_start);
Chris@19 1278
Chris@19 1279 ptrdiff_t fftw_mpi_local_size_2d_transposed(ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
Chris@19 1280 ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
Chris@19 1281 ptrdiff_t *local_n1, ptrdiff_t *local_1_start);
Chris@19 1282 ptrdiff_t fftw_mpi_local_size_3d_transposed(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2,
Chris@19 1283 MPI_Comm comm,
Chris@19 1284 ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
Chris@19 1285 ptrdiff_t *local_n1, ptrdiff_t *local_1_start);
Chris@19 1286 ptrdiff_t fftw_mpi_local_size_transposed(int rnk, const ptrdiff_t *n, MPI_Comm comm,
Chris@19 1287 ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
Chris@19 1288 ptrdiff_t *local_n1, ptrdiff_t *local_1_start);
Chris@19 1289 @end example
Chris@19 1290
Chris@19 1291 These functions return the number of elements to allocate (complex
Chris@19 1292 numbers for DFT/r2c/c2r plans, real numbers for r2r plans), whereas
Chris@19 1293 the @code{local_n0} and @code{local_0_start} return the portion
Chris@19 1294 (@code{local_0_start} to @code{local_0_start + local_n0 - 1}) of the
Chris@19 1295 first dimension of an @ndims{} array that is stored on the local
Chris@19 1296 process. @xref{Basic and advanced distribution interfaces}. For
Chris@19 1297 @code{FFTW_MPI_TRANSPOSED_OUT} plans, the @samp{_transposed} variants
Chris@19 1298 are useful in order to also return the local portion of the first
Chris@19 1299 dimension in the @ndimstrans{} transposed output.
Chris@19 1300 @xref{Transposed distributions}.
Chris@19 1301 The advanced interface for multidimensional transforms is:
Chris@19 1302
Chris@19 1303 @cindex advanced interface
Chris@19 1304 @findex fftw_mpi_local_size_many
Chris@19 1305 @findex fftw_mpi_local_size_many_transposed
Chris@19 1306 @example
Chris@19 1307 ptrdiff_t fftw_mpi_local_size_many(int rnk, const ptrdiff_t *n, ptrdiff_t howmany,
Chris@19 1308 ptrdiff_t block0, MPI_Comm comm,
Chris@19 1309 ptrdiff_t *local_n0, ptrdiff_t *local_0_start);
Chris@19 1310 ptrdiff_t fftw_mpi_local_size_many_transposed(int rnk, const ptrdiff_t *n, ptrdiff_t howmany,
Chris@19 1311 ptrdiff_t block0, ptrdiff_t block1, MPI_Comm comm,
Chris@19 1312 ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
Chris@19 1313 ptrdiff_t *local_n1, ptrdiff_t *local_1_start);
Chris@19 1314 @end example
Chris@19 1315
Chris@19 1316 These differ from the basic interface in only two ways. First, they
Chris@19 1317 allow you to specify block sizes @code{block0} and @code{block1} (the
Chris@19 1318 latter for the transposed output); you can pass
Chris@19 1319 @code{FFTW_MPI_DEFAULT_BLOCK} to use FFTW's default block size as in
Chris@19 1320 the basic interface. Second, you can pass a @code{howmany} parameter,
Chris@19 1321 corresponding to the advanced planning interface below: this is for
Chris@19 1322 transforms of contiguous @code{howmany}-tuples of numbers
Chris@19 1323 (@code{howmany = 1} in the basic interface).
Chris@19 1324
Chris@19 1325 The corresponding basic and advanced routines for one-dimensional
Chris@19 1326 transforms (currently only complex DFTs) are:
Chris@19 1327
Chris@19 1328 @findex fftw_mpi_local_size_1d
Chris@19 1329 @findex fftw_mpi_local_size_many_1d
Chris@19 1330 @example
Chris@19 1331 ptrdiff_t fftw_mpi_local_size_1d(
Chris@19 1332 ptrdiff_t n0, MPI_Comm comm, int sign, unsigned flags,
Chris@19 1333 ptrdiff_t *local_ni, ptrdiff_t *local_i_start,
Chris@19 1334 ptrdiff_t *local_no, ptrdiff_t *local_o_start);
Chris@19 1335 ptrdiff_t fftw_mpi_local_size_many_1d(
Chris@19 1336 ptrdiff_t n0, ptrdiff_t howmany,
Chris@19 1337 MPI_Comm comm, int sign, unsigned flags,
Chris@19 1338 ptrdiff_t *local_ni, ptrdiff_t *local_i_start,
Chris@19 1339 ptrdiff_t *local_no, ptrdiff_t *local_o_start);
Chris@19 1340 @end example
Chris@19 1341
Chris@19 1342 @ctindex FFTW_MPI_SCRAMBLED_OUT
Chris@19 1343 @ctindex FFTW_MPI_SCRAMBLED_IN
Chris@19 1344 As above, the return value is the number of elements to allocate
Chris@19 1345 (complex numbers, for complex DFTs). The @code{local_ni} and
Chris@19 1346 @code{local_i_start} arguments return the portion
Chris@19 1347 (@code{local_i_start} to @code{local_i_start + local_ni - 1}) of the
Chris@19 1348 1d array that is stored on this process for the transform
Chris@19 1349 @emph{input}, and @code{local_no} and @code{local_o_start} are the
Chris@19 1350 corresponding quantities for the output. The @code{sign}
Chris@19 1351 (@code{FFTW_FORWARD} or @code{FFTW_BACKWARD}) and @code{flags} must
Chris@19 1352 match the arguments passed when creating a plan. Although the inputs
Chris@19 1353 and outputs have different data distributions in general, it is
Chris@19 1354 guaranteed that the @emph{output} data distribution of an
Chris@19 1355 @code{FFTW_FORWARD} plan will match the @emph{input} data distribution
Chris@19 1356 of an @code{FFTW_BACKWARD} plan and vice versa; similarly for the
Chris@19 1357 @code{FFTW_MPI_SCRAMBLED_OUT} and @code{FFTW_MPI_SCRAMBLED_IN} flags.
Chris@19 1358 @xref{One-dimensional distributions}.
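
For illustration, a minimal sketch (with a hypothetical composite size
@code{n0}) that obtains the local input/output portions and then
creates a matching in-place 1d plan:

@example
ptrdiff_t n0 = 1024;   /* hypothetical (composite) transform size */
ptrdiff_t alloc_local, local_ni, local_i_start, local_no, local_o_start;
fftw_complex *data;
fftw_plan plan;

alloc_local = fftw_mpi_local_size_1d(n0, MPI_COMM_WORLD,
                                     FFTW_FORWARD, FFTW_MEASURE,
                                     &local_ni, &local_i_start,
                                     &local_no, &local_o_start);
data = fftw_alloc_complex(alloc_local);

/* the same sign and flags must be passed to the planner */
plan = fftw_mpi_plan_dft_1d(n0, data, data, MPI_COMM_WORLD,
                            FFTW_FORWARD, FFTW_MEASURE);
@end example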
Chris@19 1359
Chris@19 1360 @node MPI Plan Creation, MPI Wisdom Communication, MPI Data Distribution Functions, FFTW MPI Reference
Chris@19 1361 @subsection MPI Plan Creation
Chris@19 1362
Chris@19 1363 @subsubheading Complex-data MPI DFTs
Chris@19 1364
Chris@19 1365 Plans for complex-data DFTs (@pxref{2d MPI example}) are created by:
Chris@19 1366
Chris@19 1367 @findex fftw_mpi_plan_dft_1d
Chris@19 1368 @findex fftw_mpi_plan_dft_2d
Chris@19 1369 @findex fftw_mpi_plan_dft_3d
Chris@19 1370 @findex fftw_mpi_plan_dft
Chris@19 1371 @findex fftw_mpi_plan_many_dft
Chris@19 1372 @example
Chris@19 1373 fftw_plan fftw_mpi_plan_dft_1d(ptrdiff_t n0, fftw_complex *in, fftw_complex *out,
Chris@19 1374 MPI_Comm comm, int sign, unsigned flags);
Chris@19 1375 fftw_plan fftw_mpi_plan_dft_2d(ptrdiff_t n0, ptrdiff_t n1,
Chris@19 1376 fftw_complex *in, fftw_complex *out,
Chris@19 1377 MPI_Comm comm, int sign, unsigned flags);
Chris@19 1378 fftw_plan fftw_mpi_plan_dft_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2,
Chris@19 1379 fftw_complex *in, fftw_complex *out,
Chris@19 1380 MPI_Comm comm, int sign, unsigned flags);
Chris@19 1381 fftw_plan fftw_mpi_plan_dft(int rnk, const ptrdiff_t *n,
Chris@19 1382 fftw_complex *in, fftw_complex *out,
Chris@19 1383 MPI_Comm comm, int sign, unsigned flags);
Chris@19 1384 fftw_plan fftw_mpi_plan_many_dft(int rnk, const ptrdiff_t *n,
Chris@19 1385 ptrdiff_t howmany, ptrdiff_t block, ptrdiff_t tblock,
Chris@19 1386 fftw_complex *in, fftw_complex *out,
Chris@19 1387 MPI_Comm comm, int sign, unsigned flags);
Chris@19 1388 @end example
Chris@19 1389
Chris@19 1390 @cindex MPI communicator
Chris@19 1391 @cindex collective function
Chris@19 1392 These are similar to their serial counterparts (@pxref{Complex DFTs})
Chris@19 1393 in specifying the dimensions, sign, and flags of the transform. The
Chris@19 1394 @code{comm} argument gives an MPI communicator that specifies the set
Chris@19 1395 of processes to participate in the transform; plan creation is a
Chris@19 1396 collective function that must be called for all processes in the
Chris@19 1397 communicator. The @code{in} and @code{out} pointers refer only to a
Chris@19 1398 portion of the overall transform data (@pxref{MPI Data Distribution})
Chris@19 1399 as specified by the @samp{local_size} functions in the previous
Chris@19 1400 section. Unless @code{flags} contains @code{FFTW_ESTIMATE}, these
Chris@19 1401 arrays are overwritten during plan creation as for the serial
Chris@19 1402 interface. For multi-dimensional transforms, any dimensions @code{>
Chris@19 1403 1} are supported; for one-dimensional transforms, only composite
Chris@19 1404 (non-prime) @code{n0} are currently supported (unlike the serial
Chris@19 1405 FFTW). Requesting an unsupported transform size will yield a
Chris@19 1406 @code{NULL} plan. (As in the serial interface, highly composite sizes
Chris@19 1407 generally yield the best performance.)
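
For illustration, a minimal sketch (assuming hypothetical @code{n0},
@code{in}, and @code{out} set up as in the previous sections) of
checking for an unsupported size:

@example
fftw_plan plan = fftw_mpi_plan_dft_1d(n0, in, out, MPI_COMM_WORLD,
                                      FFTW_FORWARD, FFTW_MEASURE);
if (!plan) @{
     /* e.g. a prime n0, which the 1d MPI transforms do not support */
     MPI_Abort(MPI_COMM_WORLD, 1);
@}
@end example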
Chris@19 1408
Chris@19 1409 @cindex advanced interface
Chris@19 1410 @ctindex FFTW_MPI_DEFAULT_BLOCK
Chris@19 1411 @cindex stride
Chris@19 1412 The advanced-interface @code{fftw_mpi_plan_many_dft} additionally
Chris@19 1413 allows you to specify the block sizes for the first dimension
Chris@19 1414 (@code{block}) of the @ndims{} input data and the first dimension
Chris@19 1415 (@code{tblock}) of the @ndimstrans{} transposed data (at intermediate
Chris@19 1416 steps of the transform, and for the output if
Chris@19 1417 @code{FFTW_MPI_TRANSPOSED_OUT} is specified in @code{flags}). These must
Chris@19 1418 be the same block sizes as were passed to the corresponding
Chris@19 1419 @samp{local_size} function; you can pass @code{FFTW_MPI_DEFAULT_BLOCK}
Chris@19 1420 to use FFTW's default block size as in the basic interface. Also, the
Chris@19 1421 @code{howmany} parameter specifies that the transform is of contiguous
Chris@19 1422 @code{howmany}-tuples rather than individual complex numbers; this
Chris@19 1423 corresponds to the same parameter in the serial advanced interface
Chris@19 1424 (@pxref{Advanced Complex DFTs}) with @code{stride = howmany} and
Chris@19 1425 @code{dist = 1}.
Chris@19 1426
Chris@19 1427 @subsubheading MPI flags
Chris@19 1428
Chris@19 1429 The @code{flags} can be any of those for the serial FFTW
Chris@19 1430 (@pxref{Planner Flags}), and in addition may include one or more of
Chris@19 1431 the following MPI-specific flags, which improve performance at the
Chris@19 1432 cost of changing the output or input data formats.
Chris@19 1433
Chris@19 1434 @itemize @bullet
Chris@19 1435
Chris@19 1436 @item
Chris@19 1437 @ctindex FFTW_MPI_SCRAMBLED_OUT
Chris@19 1438 @ctindex FFTW_MPI_SCRAMBLED_IN
Chris@19 1439 @code{FFTW_MPI_SCRAMBLED_OUT}, @code{FFTW_MPI_SCRAMBLED_IN}: valid for
Chris@19 1440 1d transforms only, these flags indicate that the output/input of the
Chris@19 1441 transform are in an undocumented ``scrambled'' order. A forward
Chris@19 1442 @code{FFTW_MPI_SCRAMBLED_OUT} transform can be inverted by a backward
Chris@19 1443 @code{FFTW_MPI_SCRAMBLED_IN} (times the usual 1/@i{N} normalization).
Chris@19 1444 @xref{One-dimensional distributions}.
Chris@19 1445
Chris@19 1446 @item
Chris@19 1447 @ctindex FFTW_MPI_TRANSPOSED_OUT
Chris@19 1448 @ctindex FFTW_MPI_TRANSPOSED_IN
Chris@19 1449 @code{FFTW_MPI_TRANSPOSED_OUT}, @code{FFTW_MPI_TRANSPOSED_IN}: valid
Chris@19 1450 for multidimensional (@code{rnk > 1}) transforms only, these flags
Chris@19 1451 specify that the output or input of an @ndims{} transform is
Chris@19 1452 transposed to @ndimstrans{}. @xref{Transposed distributions}.
Chris@19 1453
Chris@19 1454 @end itemize
Chris@19 1455
Chris@19 1456 @subsubheading Real-data MPI DFTs
Chris@19 1457
Chris@19 1458 @cindex r2c
Chris@19 1459 Plans for real-input/output (r2c/c2r) DFTs (@pxref{Multi-dimensional
Chris@19 1460 MPI DFTs of Real Data}) are created by:
Chris@19 1461
Chris@19 1462 @findex fftw_mpi_plan_dft_r2c_2d
Chris@19 1464 @findex fftw_mpi_plan_dft_r2c_3d
Chris@19 1465 @findex fftw_mpi_plan_dft_r2c
Chris@19 1466 @findex fftw_mpi_plan_dft_c2r_2d
Chris@19 1468 @findex fftw_mpi_plan_dft_c2r_3d
Chris@19 1469 @findex fftw_mpi_plan_dft_c2r
Chris@19 1470 @example
Chris@19 1471 fftw_plan fftw_mpi_plan_dft_r2c_2d(ptrdiff_t n0, ptrdiff_t n1,
Chris@19 1472 double *in, fftw_complex *out,
Chris@19 1473 MPI_Comm comm, unsigned flags);
Chris@19 1477 fftw_plan fftw_mpi_plan_dft_r2c_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2,
Chris@19 1478 double *in, fftw_complex *out,
Chris@19 1479 MPI_Comm comm, unsigned flags);
Chris@19 1480 fftw_plan fftw_mpi_plan_dft_r2c(int rnk, const ptrdiff_t *n,
Chris@19 1481 double *in, fftw_complex *out,
Chris@19 1482 MPI_Comm comm, unsigned flags);
Chris@19 1483 fftw_plan fftw_mpi_plan_dft_c2r_2d(ptrdiff_t n0, ptrdiff_t n1,
Chris@19 1484 fftw_complex *in, double *out,
Chris@19 1485 MPI_Comm comm, unsigned flags);
Chris@19 1489 fftw_plan fftw_mpi_plan_dft_c2r_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2,
Chris@19 1490 fftw_complex *in, double *out,
Chris@19 1491 MPI_Comm comm, unsigned flags);
Chris@19 1492 fftw_plan fftw_mpi_plan_dft_c2r(int rnk, const ptrdiff_t *n,
Chris@19 1493 fftw_complex *in, double *out,
Chris@19 1494 MPI_Comm comm, unsigned flags);
Chris@19 1495 @end example
Chris@19 1496
Chris@19 1497 Similar to the serial interface (@pxref{Real-data DFTs}), these
Chris@19 1498 transform logically @ndims{} real data to/from @ndimshalf{} complex
Chris@19 1499 data, representing the non-redundant half of the conjugate-symmetry
Chris@19 1500 output of a real-input DFT (@pxref{Multi-dimensional Transforms}).
Chris@19 1501 However, the real array must be stored within a padded @ndimspad{}
Chris@19 1502 array (much like the in-place serial r2c transforms, but here for
Chris@19 1503 out-of-place transforms as well). Currently, only multi-dimensional
Chris@19 1504 (@code{rnk > 1}) r2c/c2r transforms are supported (requesting a plan
Chris@19 1505 for @code{rnk = 1} will yield @code{NULL}). As explained above
Chris@19 1506 (@pxref{Multi-dimensional MPI DFTs of Real Data}), the data
Chris@19 1507 distribution of both the real and complex arrays is given by the
Chris@19 1508 @samp{local_size} function called for the dimensions of the
Chris@19 1509 @emph{complex} array. Similar to the other planning functions, the
Chris@19 1510 input and output arrays are overwritten when the plan is created
Chris@19 1511 except in @code{FFTW_ESTIMATE} mode.
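
For illustration, here is a minimal sketch (with hypothetical
dimensions) of allocating and planning a 2d out-of-place r2c
transform; the local size is computed from the @emph{complex}
dimensions, and the real array is padded:

@example
ptrdiff_t n0 = 128, n1 = 128;    /* hypothetical real-array dimensions */
ptrdiff_t alloc_local, local_n0, local_0_start;
double *rin;
fftw_complex *cout;
fftw_plan plan;

/* local size in complex elements, for the n0 x (n1/2+1) complex array */
alloc_local = fftw_mpi_local_size_2d(n0, n1/2 + 1, MPI_COMM_WORLD,
                                     &local_n0, &local_0_start);
rin  = fftw_alloc_real(2 * alloc_local);  /* padded: 2*(n1/2+1) reals per row */
cout = fftw_alloc_complex(alloc_local);

plan = fftw_mpi_plan_dft_r2c_2d(n0, n1, rin, cout,
                                MPI_COMM_WORLD, FFTW_MEASURE);
@end example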
Chris@19 1512
Chris@19 1513 As for the complex DFTs above, there is an advanced interface that
Chris@19 1514 allows you to manually specify block sizes and to transform contiguous
Chris@19 1515 @code{howmany}-tuples of real/complex numbers:
Chris@19 1516
Chris@19 1517 @findex fftw_mpi_plan_many_dft_r2c
Chris@19 1518 @findex fftw_mpi_plan_many_dft_c2r
Chris@19 1519 @example
Chris@19 1520 fftw_plan fftw_mpi_plan_many_dft_r2c
Chris@19 1521 (int rnk, const ptrdiff_t *n, ptrdiff_t howmany,
Chris@19 1522 ptrdiff_t iblock, ptrdiff_t oblock,
Chris@19 1523 double *in, fftw_complex *out,
Chris@19 1524 MPI_Comm comm, unsigned flags);
Chris@19 1525 fftw_plan fftw_mpi_plan_many_dft_c2r
Chris@19 1526 (int rnk, const ptrdiff_t *n, ptrdiff_t howmany,
Chris@19 1527 ptrdiff_t iblock, ptrdiff_t oblock,
Chris@19 1528 fftw_complex *in, double *out,
Chris@19 1529 MPI_Comm comm, unsigned flags);
Chris@19 1530 @end example
Chris@19 1531
Chris@19 1532 @subsubheading MPI r2r transforms
Chris@19 1533
Chris@19 1534 @cindex r2r
Chris@19 1535 There are corresponding plan-creation routines for r2r
Chris@19 1536 transforms (@pxref{More DFTs of Real Data}), currently supporting
Chris@19 1537 multidimensional (@code{rnk > 1}) transforms only (@code{rnk = 1} will
Chris@19 1538 yield a @code{NULL} plan):
Chris@19 1539
Chris@19 1540 @example
Chris@19 1541 fftw_plan fftw_mpi_plan_r2r_2d(ptrdiff_t n0, ptrdiff_t n1,
Chris@19 1542 double *in, double *out,
Chris@19 1543 MPI_Comm comm,
Chris@19 1544 fftw_r2r_kind kind0, fftw_r2r_kind kind1,
Chris@19 1545 unsigned flags);
Chris@19 1546 fftw_plan fftw_mpi_plan_r2r_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2,
Chris@19 1547 double *in, double *out,
Chris@19 1548 MPI_Comm comm,
Chris@19 1549 fftw_r2r_kind kind0, fftw_r2r_kind kind1, fftw_r2r_kind kind2,
Chris@19 1550 unsigned flags);
Chris@19 1551 fftw_plan fftw_mpi_plan_r2r(int rnk, const ptrdiff_t *n,
Chris@19 1552 double *in, double *out,
Chris@19 1553 MPI_Comm comm, const fftw_r2r_kind *kind,
Chris@19 1554 unsigned flags);
Chris@19 1555 fftw_plan fftw_mpi_plan_many_r2r(int rnk, const ptrdiff_t *n,
Chris@19 1556 ptrdiff_t iblock, ptrdiff_t oblock,
Chris@19 1557 double *in, double *out,
Chris@19 1558 MPI_Comm comm, const fftw_r2r_kind *kind,
Chris@19 1559 unsigned flags);
Chris@19 1560 @end example
Chris@19 1561
Chris@19 1562 The parameters are much the same as for the complex DFTs above, except
Chris@19 1563 that the arrays are of real numbers (and hence the outputs of the
Chris@19 1564 @samp{local_size} data-distribution functions should be interpreted as
Chris@19 1565 counts of real rather than complex numbers). Also, the @code{kind}
Chris@19 1566 parameters specify the r2r kinds along each dimension as for the
Chris@19 1567 serial interface (@pxref{Real-to-Real Transform Kinds}). @xref{Other
Chris@19 1568 Multi-dimensional Real-data MPI Transforms}.
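
For illustration, a minimal sketch (hypothetical size; the DCT-II
kinds are chosen arbitrarily) of planning an in-place 2d r2r
transform:

@example
ptrdiff_t n0 = 256, n1 = 256;    /* hypothetical transform size */
ptrdiff_t alloc_local, local_n0, local_0_start;
double *data;
fftw_plan plan;

/* the local_size return value counts real numbers for r2r plans */
alloc_local = fftw_mpi_local_size_2d(n0, n1, MPI_COMM_WORLD,
                                     &local_n0, &local_0_start);
data = fftw_alloc_real(alloc_local);

plan = fftw_mpi_plan_r2r_2d(n0, n1, data, data, MPI_COMM_WORLD,
                            FFTW_REDFT10, FFTW_REDFT10, FFTW_MEASURE);
@end example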
Chris@19 1569
Chris@19 1570 @subsubheading MPI transposition
Chris@19 1571 @cindex transpose
Chris@19 1572
Chris@19 1573 FFTW also provides routines to plan a transpose of a distributed
Chris@19 1574 @code{n0} by @code{n1} array of real numbers, or an array of
Chris@19 1575 @code{howmany}-tuples of real numbers with specified block sizes
Chris@19 1576 (@pxref{FFTW MPI Transposes}):
Chris@19 1577
Chris@19 1578 @findex fftw_mpi_plan_transpose
Chris@19 1579 @findex fftw_mpi_plan_many_transpose
Chris@19 1580 @example
Chris@19 1581 fftw_plan fftw_mpi_plan_transpose(ptrdiff_t n0, ptrdiff_t n1,
Chris@19 1582 double *in, double *out,
Chris@19 1583 MPI_Comm comm, unsigned flags);
Chris@19 1584 fftw_plan fftw_mpi_plan_many_transpose
Chris@19 1585 (ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t howmany,
Chris@19 1586 ptrdiff_t block0, ptrdiff_t block1,
Chris@19 1587 double *in, double *out, MPI_Comm comm, unsigned flags);
Chris@19 1588 @end example
Chris@19 1589
Chris@19 1590 @cindex new-array execution
Chris@19 1591 @findex fftw_mpi_execute_r2r
Chris@19 1592 These plans are used with the @code{fftw_mpi_execute_r2r} new-array
Chris@19 1593 execute function (@pxref{Using MPI Plans}), since they count as (rank
Chris@19 1594 zero) r2r plans from FFTW's perspective.
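
For illustration, a minimal sketch (with hypothetical dimensions) of
planning and executing such a transpose with the default block sizes:

@example
ptrdiff_t n0 = 512, n1 = 384;    /* hypothetical matrix dimensions */
ptrdiff_t alloc_local, local_n0, local_0_start, local_n1, local_1_start;
double *in, *out;
fftw_plan plan;

/* local sizes before and after the transpose, as for a 2d transform */
alloc_local = fftw_mpi_local_size_2d_transposed(n0, n1, MPI_COMM_WORLD,
                                                &local_n0, &local_0_start,
                                                &local_n1, &local_1_start);
in  = fftw_alloc_real(alloc_local);
out = fftw_alloc_real(alloc_local);

plan = fftw_mpi_plan_transpose(n0, n1, in, out, MPI_COMM_WORLD, FFTW_MEASURE);

/* transposes are rank-zero r2r plans, so the new-array variant is: */
fftw_mpi_execute_r2r(plan, in, out);
@end example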
Chris@19 1595
Chris@19 1596 @node MPI Wisdom Communication, , MPI Plan Creation, FFTW MPI Reference
Chris@19 1597 @subsection MPI Wisdom Communication
Chris@19 1598
Chris@19 1599 To facilitate synchronizing wisdom among the different MPI processes,
Chris@19 1600 we provide two functions:
Chris@19 1601
Chris@19 1602 @findex fftw_mpi_gather_wisdom
Chris@19 1603 @findex fftw_mpi_broadcast_wisdom
Chris@19 1604 @example
Chris@19 1605 void fftw_mpi_gather_wisdom(MPI_Comm comm);
Chris@19 1606 void fftw_mpi_broadcast_wisdom(MPI_Comm comm);
Chris@19 1607 @end example
Chris@19 1608
Chris@19 1609 The @code{fftw_mpi_gather_wisdom} function gathers all wisdom in the
Chris@19 1610 given communicator @code{comm} to the process of rank 0 in the
Chris@19 1611 communicator: that process obtains the union of all wisdom on all the
Chris@19 1612 processes. As a side effect, some other processes will gain
Chris@19 1613 additional wisdom from other processes, but only process 0 will gain
Chris@19 1614 the complete union.
Chris@19 1615
Chris@19 1616 The @code{fftw_mpi_broadcast_wisdom} does the reverse: it exports
Chris@19 1617 wisdom from process 0 in @code{comm} to all other processes in the
Chris@19 1618 communicator, replacing any wisdom they currently have.
Chris@19 1619
Chris@19 1620 @xref{FFTW MPI Wisdom}.
Chris@19 1621
Chris@19 1622 @c ------------------------------------------------------------
Chris@19 1623 @node FFTW MPI Fortran Interface, , FFTW MPI Reference, Distributed-memory FFTW with MPI
Chris@19 1624 @section FFTW MPI Fortran Interface
Chris@19 1625 @cindex Fortran interface
Chris@19 1626
Chris@19 1627 @cindex iso_c_binding
Chris@19 1628 The FFTW MPI interface is callable from modern Fortran compilers
Chris@19 1629 supporting the Fortran 2003 @code{iso_c_binding} standard for calling
Chris@19 1630 C functions. As described in @ref{Calling FFTW from Modern Fortran},
Chris@19 1631 this means that you can directly call FFTW's C interface from Fortran
Chris@19 1632 with only minor changes in syntax. There are, however, a few things
Chris@19 1633 specific to the MPI interface to keep in mind:
Chris@19 1634
Chris@19 1635 @itemize @bullet
Chris@19 1636
Chris@19 1637 @item
Chris@19 1638 Instead of including @code{fftw3.f03} as in @ref{Overview of Fortran
Chris@19 1639 interface}, you should @code{include 'fftw3-mpi.f03'} (after
Chris@19 1640 @code{use, intrinsic :: iso_c_binding} as before). The
Chris@19 1641 @code{fftw3-mpi.f03} file includes @code{fftw3.f03}, so you should
Chris@19 1642 @emph{not} @code{include} them both yourself. (You will also want to
Chris@19 1643 include the MPI header file, usually via @code{include 'mpif.h'} or
Chris@19 1644 similar, although this is not needed by @code{fftw3-mpi.f03}
Chris@19 1645 @i{per se}.) (To use the @samp{fftwl_} @code{long double} extended-precision routines in supporting compilers, you should include @code{fftw3l-mpi.f03} in @emph{addition} to @code{fftw3-mpi.f03}. @xref{Extended and quadruple precision in Fortran}.)
Chris@19 1646
Chris@19 1647 @item
Chris@19 1648 Because of the different storage conventions between C and Fortran,
Chris@19 1649 you reverse the order of your array dimensions when passing them to
Chris@19 1650 FFTW (@pxref{Reversing array dimensions}). This is merely a
Chris@19 1651 difference in notation and incurs no performance overhead. However,
Chris@19 1652 it means that, whereas in C the @emph{first} dimension is distributed,
Chris@19 1653 in Fortran the @emph{last} dimension of your array is distributed.
Chris@19 1654
Chris@19 1655 @item
Chris@19 1656 @cindex MPI communicator
Chris@19 1657 In Fortran, communicators are stored as @code{integer} types; there is
Chris@19 1658 no @code{MPI_Comm} type, nor is there any way to access a C
Chris@19 1659 @code{MPI_Comm}. Fortunately, this is taken care of for you by the
Chris@19 1660 FFTW Fortran interface: whenever the C interface expects an
Chris@19 1661 @code{MPI_Comm} type, you should pass the Fortran communicator as an
Chris@19 1662 @code{integer}.@footnote{Technically, this is because you aren't
Chris@19 1663 actually calling the C functions directly. You are calling wrapper
Chris@19 1664 functions that translate the communicator with @code{MPI_Comm_f2c}
Chris@19 1665 before calling the ordinary C interface. This is all done
Chris@19 1666 transparently, however, since the @code{fftw3-mpi.f03} interface file
Chris@19 1667 renames the wrappers so that they are called in Fortran with the same
Chris@19 1668 names as the C interface functions.}
Chris@19 1669
Chris@19 1670 @item
Chris@19 1671 Because you need to call the @samp{local_size} function to find out
Chris@19 1672 how much space to allocate, and this may be @emph{larger} than the
Chris@19 1673 local portion of the array (@pxref{MPI Data Distribution}), you should
Chris@19 1674 @emph{always} allocate your arrays dynamically using FFTW's allocation
Chris@19 1675 routines as described in @ref{Allocating aligned memory in Fortran}.
Chris@19 1676 (Coincidentally, this also provides the best performance by
Chris@19 1677 guaranteeing proper data alignment.)
Chris@19 1678
Chris@19 1679 @item
Chris@19 1680 Because all sizes in the MPI FFTW interface are declared as
Chris@19 1681 @code{ptrdiff_t} in C, you should use @code{integer(C_INTPTR_T)} in
Chris@19 1682 Fortran (@pxref{FFTW Fortran type reference}).
Chris@19 1683
Chris@19 1684 @item
Chris@19 1685 @findex fftw_execute_dft
Chris@19 1686 @findex fftw_mpi_execute_dft
Chris@19 1687 @cindex new-array execution
Chris@19 1688 In Fortran, because of the language semantics, we generally recommend
Chris@19 1689 using the new-array execute functions for all plans, even in the
Chris@19 1690 common case where you are executing the plan on the same arrays for
Chris@19 1691 which the plan was created (@pxref{Plan execution in Fortran}).
Chris@19 1692 However, note that in the MPI interface these functions are changed:
Chris@19 1693 @code{fftw_execute_dft} becomes @code{fftw_mpi_execute_dft},
Chris@19 1694 etcetera. @xref{Using MPI Plans}.
Chris@19 1695
Chris@19 1696 @end itemize
Chris@19 1697
Chris@19 1698 For example, here is a Fortran code snippet to perform a distributed
Chris@19 1699 @twodims{L,M} complex DFT in-place. (This assumes you have already
Chris@19 1700 initialized MPI with @code{MPI_init} and have also performed
Chris@19 1701 @code{call fftw_mpi_init}.)
Chris@19 1702
Chris@19 1703 @example
Chris@19 1704 use, intrinsic :: iso_c_binding
Chris@19 1705 include 'fftw3-mpi.f03'
Chris@19 1706 integer(C_INTPTR_T), parameter :: L = ...
Chris@19 1707 integer(C_INTPTR_T), parameter :: M = ...
Chris@19 1708 type(C_PTR) :: plan, cdata
Chris@19 1709 complex(C_DOUBLE_COMPLEX), pointer :: data(:,:)
Chris@19 1710 integer(C_INTPTR_T) :: i, j, alloc_local, local_M, local_j_offset
Chris@19 1711
Chris@19 1712 ! @r{get local data size and allocate (note dimension reversal)}
Chris@19 1713 alloc_local = fftw_mpi_local_size_2d(M, L, MPI_COMM_WORLD, &
Chris@19 1714 local_M, local_j_offset)
Chris@19 1715 cdata = fftw_alloc_complex(alloc_local)
Chris@19 1716 call c_f_pointer(cdata, data, [L,local_M])
Chris@19 1717
Chris@19 1718 ! @r{create MPI plan for in-place forward DFT (note dimension reversal)}
Chris@19 1719 plan = fftw_mpi_plan_dft_2d(M, L, data, data, MPI_COMM_WORLD, &
Chris@19 1720 FFTW_FORWARD, FFTW_MEASURE)
Chris@19 1721
Chris@19 1722 ! @r{initialize data to some function} my_function(i,j)
Chris@19 1723 do j = 1, local_M
Chris@19 1724 do i = 1, L
Chris@19 1725 data(i, j) = my_function(i, j + local_j_offset)
Chris@19 1726 end do
Chris@19 1727 end do
Chris@19 1728
Chris@19 1729 ! @r{compute transform (as many times as desired)}
Chris@19 1730 call fftw_mpi_execute_dft(plan, data, data)
Chris@19 1731
Chris@19 1732 call fftw_destroy_plan(plan)
Chris@19 1733 call fftw_free(cdata)
Chris@19 1734 @end example
Chris@19 1735
Chris@19 1736 Note that we called @code{fftw_mpi_local_size_2d} and
Chris@19 1737 @code{fftw_mpi_plan_dft_2d} with the dimensions in reversed order,
Chris@19 1738 since a @twodims{L,M} Fortran array is viewed by FFTW in C as a
Chris@19 1739 @twodims{M, L} array. This means that the array was distributed over
Chris@19 1740 the @code{M} dimension, the local portion of which is a
Chris@19 1741 @twodims{L,local_M} array in Fortran. (You must @emph{not} use an
Chris@19 1742 @code{allocate} statement to allocate an @twodims{L,local_M} array,
Chris@19 1743 however; you must allocate @code{alloc_local} complex numbers, which
Chris@19 1744 may be greater than @code{L * local_M}, in order to reserve space for
Chris@19 1745 intermediate steps of the transform.) Finally, we mention that
Chris@19 1746 because C's array indices are zero-based, the @code{local_j_offset}
Chris@19 1747 argument can conveniently be interpreted as an offset in the 1-based
Chris@19 1748 @code{j} index (rather than as a starting index as in C).
Chris@19 1749
Chris@19 1750 If instead you had used the @code{ior(FFTW_MEASURE,
Chris@19 1751 FFTW_MPI_TRANSPOSED_OUT)} flag, the output of the transform would be a
Chris@19 1752 transposed @twodims{M,local_L} array, associated with the @emph{same}
Chris@19 1753 @code{cdata} allocation (since the transform is in-place), and which
Chris@19 1754 you could declare with:
Chris@19 1755
Chris@19 1756 @example
Chris@19 1757 complex(C_DOUBLE_COMPLEX), pointer :: tdata(:,:)
Chris@19 1758 ...
Chris@19 1759 call c_f_pointer(cdata, tdata, [M,local_L])
Chris@19 1760 @end example
Chris@19 1761
Chris@19 1762 where @code{local_L} would have been obtained by changing the
Chris@19 1763 @code{fftw_mpi_local_size_2d} call to:
Chris@19 1764
Chris@19 1765 @example
Chris@19 1766 alloc_local = fftw_mpi_local_size_2d_transposed(M, L, MPI_COMM_WORLD, &
Chris@19 1767 local_M, local_j_offset, local_L, local_i_offset)
Chris@19 1768 @end example