Distributed-memory FFTW with MPI
In this chapter we document the parallel FFTW routines for parallel systems supporting the MPI message-passing interface. Unlike the shared-memory threads described in the previous chapter, MPI allows you to use distributed-memory parallelism, where each CPU has its own separate memory, and which can scale up to clusters of many thousands of processors. This capability comes at a price, however: each process only stores a portion of the data to be transformed, which means that the data structures and programming interface are quite different from the serial or threads versions of FFTW.
Distributed-memory parallelism is especially useful when you are transforming arrays so large that they do not fit into the memory of a single processor. The per-process storage required by FFTW's MPI routines is proportional to the total array size divided by the number of processes. Conversely, distributed-memory parallelism can easily pose an unacceptably high communications overhead for small problems; the threshold problem size for which parallelism becomes advantageous will depend on the precise problem you are interested in, your hardware, and your MPI implementation.
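For a concrete (hypothetical) illustration of this scaling: a 1024 × 1024 × 1024 complex DFT in double precision occupies 2^30 elements × 16 bytes = 16 GiB, more than many single nodes can hold, but distributed over 64 processes it requires only about 256 MiB per process, plus whatever workspace the transform needs.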
A note on terminology: in MPI, you divide the data among a set of “processes” which each run in their own memory address space. Generally, each process runs on a different physical processor, but this is not required. A set of processes in MPI is described by an opaque data structure called a “communicator,” the most common of which is the predefined communicator MPI_COMM_WORLD, which refers to all processes. For more information on these and other concepts common to all MPI programs, we refer the reader to the documentation at the MPI home page.
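To make these terms concrete, the following is a minimal sketch in plain MPI, independent of FFTW, in which each process reports its rank (its index within the communicator) and the size of MPI_COMM_WORLD:

     /* illustrative sketch: processes and communicators in plain MPI */
     #include <stdio.h>
     #include <mpi.h>

     int main(int argc, char **argv)
     {
         int rank, size;
         MPI_Init(&argc, &argv);                /* start MPI */
         MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's index */
         MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */
         printf("I am process %d of %d\n", rank, size);
         MPI_Finalize();
         return 0;
     }

Such a program is typically launched with mpirun or mpiexec (e.g. 'mpirun -np 4 ./a.out' to create four processes).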
We assume in this chapter that the reader is familiar with the usage of the serial (uniprocessor) FFTW, and focus only on the concepts new to the MPI interface.
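As a preview of what those new concepts look like in practice, here is a sketch of the basic call sequence for a two-dimensional complex DFT using FFTW's MPI interface; the dimensions and the initialization are purely illustrative, and each routine shown here is documented later in this chapter:

     #include <fftw3-mpi.h>  /* MPI FFTW header (includes fftw3.h and mpi.h) */

     int main(int argc, char **argv)
     {
         const ptrdiff_t N0 = 1024, N1 = 1024;  /* illustrative dimensions */
         fftw_plan plan;
         fftw_complex *data;
         ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

         MPI_Init(&argc, &argv);
         fftw_mpi_init();  /* after MPI_Init, before any other fftw_mpi call */

         /* ask FFTW which rows of the first dimension this process owns */
         alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                              &local_n0, &local_0_start);
         data = fftw_alloc_complex(alloc_local);  /* local portion only */

         /* plan an in-place forward DFT over all processes */
         plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                     FFTW_FORWARD, FFTW_ESTIMATE);

         /* initialize rows local_0_start .. local_0_start + local_n0 - 1 */
         for (i = 0; i < local_n0; ++i)
             for (j = 0; j < N1; ++j) {
                 data[i*N1 + j][0] = local_0_start + i + j;  /* real part */
                 data[i*N1 + j][1] = 0.0;                    /* imaginary part */
             }

         fftw_execute(plan);  /* collective: every process must call it */

         fftw_destroy_plan(plan);
         fftw_free(data);
         MPI_Finalize();
         return 0;
     }

Note that plan creation and execution are collective operations: every process in the communicator must make the same calls.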