<html lang="en">
<head>
<title>Distributed-memory FFTW with MPI - FFTW 3.3.3</title>
<meta http-equiv="Content-Type" content="text/html">
<meta name="description" content="FFTW 3.3.3">
<meta name="generator" content="makeinfo 4.13">
<link title="Top" rel="start" href="index.html#Top">
<link rel="prev" href="Multi_002dthreaded-FFTW.html#Multi_002dthreaded-FFTW" title="Multi-threaded FFTW">
<link rel="next" href="Calling-FFTW-from-Modern-Fortran.html#Calling-FFTW-from-Modern-Fortran" title="Calling FFTW from Modern Fortran">
<link href="http://www.gnu.org/software/texinfo/" rel="generator-home" title="Texinfo Homepage">
<!--
This manual is for FFTW
(version 3.3.3, 25 November 2012).

Copyright (C) 2003 Matteo Frigo.

Copyright (C) 2003 Massachusetts Institute of Technology.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission
notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of
this manual under the conditions for verbatim copying, provided
that the entire resulting derived work is distributed under the
terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this
manual into another language, under the above conditions for
modified versions, except that this permission notice may be
stated in a translation approved by the Free Software Foundation.
-->
<meta http-equiv="Content-Style-Type" content="text/css">
<style type="text/css"><!--
pre.display { font-family:inherit }
pre.format { font-family:inherit }
pre.smalldisplay { font-family:inherit; font-size:smaller }
pre.smallformat { font-family:inherit; font-size:smaller }
pre.smallexample { font-size:smaller }
pre.smalllisp { font-size:smaller }
span.sc { font-variant:small-caps }
span.roman { font-family:serif; font-weight:normal; }
span.sansserif { font-family:sans-serif; font-weight:normal; }
--></style>
</head>
<body>
<div class="node">
<a name="Distributed-memory-FFTW-with-MPI"></a>
<a name="Distributed_002dmemory-FFTW-with-MPI"></a>
<p>
Next:&nbsp;<a rel="next" accesskey="n" href="Calling-FFTW-from-Modern-Fortran.html#Calling-FFTW-from-Modern-Fortran">Calling FFTW from Modern Fortran</a>,
Previous:&nbsp;<a rel="previous" accesskey="p" href="Multi_002dthreaded-FFTW.html#Multi_002dthreaded-FFTW">Multi-threaded FFTW</a>,
Up:&nbsp;<a rel="up" accesskey="u" href="index.html#Top">Top</a>
<hr>
</div>

<h2 class="chapter">6 Distributed-memory FFTW with MPI</h2>

<p><a name="index-MPI-344"></a>
<a name="index-parallel-transform-345"></a>In this chapter we document the parallel FFTW routines for
systems supporting the MPI message-passing interface.  Unlike the
shared-memory threads described in the previous chapter, MPI allows
you to use <em>distributed-memory</em> parallelism, where each CPU has
its own separate memory, and which can scale up to clusters of many
thousands of processors.  This capability comes at a price, however:
each process only stores a <em>portion</em> of the data to be
transformed, which means that the data structures and
programming interface are quite different from the serial or threads
versions of FFTW.
<a name="index-data-distribution-346"></a>

<p>Distributed-memory parallelism is especially useful when you are
transforming arrays so large that they do not fit into the memory of a
single processor.  The per-process storage required by FFTW's MPI
routines is proportional to the total array size divided by the number
of processes.  Conversely, distributed-memory parallelism can easily
pose an unacceptably high communications overhead for small problems;
the threshold problem size at which parallelism becomes advantageous
will depend on the precise problem you are interested in, your
hardware, and your MPI implementation.

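<p>To make this concrete, the following is a brief sketch (in the spirit
of the &lsquo;2d MPI example&rsquo; section later in this chapter, not a complete
program) of how each process asks FFTW for its share of a hypothetical
<code>N0</code>&nbsp;&times;&nbsp;<code>N1</code> complex array; the count returned by
<code>fftw_mpi_local_size_2d</code> is roughly the total number of elements
divided by the number of processes (the <code>MPI_COMM_WORLD</code>
communicator is explained below):

<pre class="example">     ptrdiff_t local_n0, local_0_start, alloc_local;

     /* number of complex values this process should allocate for
        its portion of an N0 x N1 array (N0 and N1 are placeholders) */
     alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                          &amp;local_n0, &amp;local_0_start);
     fftw_complex *data = fftw_alloc_complex(alloc_local);
</pre>
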
<p>A note on terminology: in MPI, you divide the data among a set of
&ldquo;processes,&rdquo; each of which runs in its own memory address space.
Generally, each process runs on a different physical processor, but
this is not required.  A set of processes in MPI is described by an
opaque data structure called a &ldquo;communicator,&rdquo; the most common of
which is the predefined communicator <code>MPI_COMM_WORLD</code>, which
refers to <em>all</em> processes.  For more information on these and
other concepts common to all MPI programs, we refer the reader to the
documentation at <a href="http://www.mcs.anl.gov/research/projects/mpi/">the MPI home page</a>.
<a name="index-MPI-communicator-347"></a><a name="index-MPI_005fCOMM_005fWORLD-348"></a>

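<p>As a point of reference, here is a generic outline (a sketch, not an
example taken from this manual) of how an MPI program sets up its
processes and communicator before creating any FFTW MPI plans; it uses
only standard MPI calls together with <code>fftw_mpi_init</code> and
<code>fftw_mpi_cleanup</code>, which are described under &lsquo;Linking and
Initializing MPI FFTW&rsquo; below:

<pre class="example">     #include &lt;mpi.h&gt;
     #include &lt;fftw3-mpi.h&gt;

     int main(int argc, char **argv)
     {
         int rank, size;

         MPI_Init(&amp;argc, &amp;argv);      /* all processes join MPI_COMM_WORLD */
         fftw_mpi_init();             /* initialize FFTW's MPI interface */

         MPI_Comm_rank(MPI_COMM_WORLD, &amp;rank);  /* index of this process */
         MPI_Comm_size(MPI_COMM_WORLD, &amp;size);  /* number of processes */

         /* ... create and execute FFTW MPI plans here ... */

         fftw_mpi_cleanup();          /* release FFTW's internal MPI data */
         MPI_Finalize();
         return 0;
     }
</pre>
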
<p>We assume in this chapter that the reader is familiar with the usage
of the serial (uniprocessor) FFTW, and focus only on the concepts new
to the MPI interface.

<ul class="menu">
<li><a accesskey="1" href="FFTW-MPI-Installation.html#FFTW-MPI-Installation">FFTW MPI Installation</a>
<li><a accesskey="2" href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW">Linking and Initializing MPI FFTW</a>
<li><a accesskey="3" href="2d-MPI-example.html#g_t2d-MPI-example">2d MPI example</a>
<li><a accesskey="4" href="MPI-Data-Distribution.html#MPI-Data-Distribution">MPI Data Distribution</a>
<li><a accesskey="5" href="Multi_002ddimensional-MPI-DFTs-of-Real-Data.html#Multi_002ddimensional-MPI-DFTs-of-Real-Data">Multi-dimensional MPI DFTs of Real Data</a>
<li><a accesskey="6" href="Other-Multi_002ddimensional-Real_002ddata-MPI-Transforms.html#Other-Multi_002ddimensional-Real_002ddata-MPI-Transforms">Other Multi-dimensional Real-data MPI Transforms</a>
<li><a accesskey="7" href="FFTW-MPI-Transposes.html#FFTW-MPI-Transposes">FFTW MPI Transposes</a>
<li><a accesskey="8" href="FFTW-MPI-Wisdom.html#FFTW-MPI-Wisdom">FFTW MPI Wisdom</a>
<li><a accesskey="9" href="Avoiding-MPI-Deadlocks.html#Avoiding-MPI-Deadlocks">Avoiding MPI Deadlocks</a>
<li><a href="FFTW-MPI-Performance-Tips.html#FFTW-MPI-Performance-Tips">FFTW MPI Performance Tips</a>
<li><a href="Combining-MPI-and-Threads.html#Combining-MPI-and-Threads">Combining MPI and Threads</a>
<li><a href="FFTW-MPI-Reference.html#FFTW-MPI-Reference">FFTW MPI Reference</a>
<li><a href="FFTW-MPI-Fortran-Interface.html#FFTW-MPI-Fortran-Interface">FFTW MPI Fortran Interface</a>
</ul>

<!-- -->
</body></html>