annotate src/fftw-3.3.3/doc/html/2d-MPI-example.html @ 95:89f5e221ed7b

Add FFTW3
author Chris Cannam <cannam@all-day-breakfast.com>
date Wed, 20 Mar 2013 15:35:50 +0000
<html lang="en">
<head>
<title>2d MPI example - FFTW 3.3.3</title>
<meta http-equiv="Content-Type" content="text/html">
<meta name="description" content="FFTW 3.3.3">
<meta name="generator" content="makeinfo 4.13">
<link title="Top" rel="start" href="index.html#Top">
<link rel="up" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" title="Distributed-memory FFTW with MPI">
<link rel="prev" href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW" title="Linking and Initializing MPI FFTW">
<link rel="next" href="MPI-Data-Distribution.html#MPI-Data-Distribution" title="MPI Data Distribution">
<link href="http://www.gnu.org/software/texinfo/" rel="generator-home" title="Texinfo Homepage">
<!--
This manual is for FFTW
(version 3.3.3, 25 November 2012).

Copyright (C) 2003 Matteo Frigo.

Copyright (C) 2003 Massachusetts Institute of Technology.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission
notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of
this manual under the conditions for verbatim copying, provided
that the entire resulting derived work is distributed under the
terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this
manual into another language, under the above conditions for
modified versions, except that this permission notice may be
stated in a translation approved by the Free Software Foundation.
-->
<meta http-equiv="Content-Style-Type" content="text/css">
<style type="text/css"><!--
pre.display { font-family:inherit }
pre.format { font-family:inherit }
pre.smalldisplay { font-family:inherit; font-size:smaller }
pre.smallformat { font-family:inherit; font-size:smaller }
pre.smallexample { font-size:smaller }
pre.smalllisp { font-size:smaller }
span.sc { font-variant:small-caps }
span.roman { font-family:serif; font-weight:normal; }
span.sansserif { font-family:sans-serif; font-weight:normal; }
--></style>
</head>
<body>
<div class="node">
<a name="g_t2d-MPI-example"></a>
<p>
Next:&nbsp;<a rel="next" accesskey="n" href="MPI-Data-Distribution.html#MPI-Data-Distribution">MPI Data Distribution</a>,
Previous:&nbsp;<a rel="previous" accesskey="p" href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW">Linking and Initializing MPI FFTW</a>,
Up:&nbsp;<a rel="up" accesskey="u" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI">Distributed-memory FFTW with MPI</a>
<hr>
</div>

<h3 class="section">6.3 2d MPI example</h3>

<p>Before we document the FFTW MPI interface in detail, we begin with a
simple example outlining how one would perform a two-dimensional
<code>N0</code> by <code>N1</code> complex DFT.

<pre class="example">#include &lt;fftw3-mpi.h&gt;

int main(int argc, char **argv)
{
    const ptrdiff_t N0 = ..., N1 = ...;
    fftw_plan plan;
    fftw_complex *data;
    ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

    MPI_Init(&amp;argc, &amp;argv);
    fftw_mpi_init();

    /* <span class="roman">get local data size and allocate</span> */
    alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                         &amp;local_n0, &amp;local_0_start);
    data = fftw_alloc_complex(alloc_local);

    /* <span class="roman">create plan for in-place forward DFT</span> */
    plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                FFTW_FORWARD, FFTW_ESTIMATE);

    /* <span class="roman">initialize data to some function</span> my_function(x,y) */
    for (i = 0; i &lt; local_n0; ++i)
        for (j = 0; j &lt; N1; ++j)
            data[i*N1 + j] = my_function(local_0_start + i, j);

    /* <span class="roman">compute transforms, in-place, as many times as desired</span> */
    fftw_execute(plan);

    fftw_destroy_plan(plan);

    MPI_Finalize();
}
</pre>
<p>As can be seen above, the MPI interface follows the same basic style
of allocate/plan/execute/destroy as the serial FFTW routines. All of
the MPI-specific routines are prefixed with &lsquo;<samp><span class="samp">fftw_mpi_</span></samp>&rsquo; instead
of &lsquo;<samp><span class="samp">fftw_</span></samp>&rsquo;. There are a few important differences, however:

<p>First, we must call <code>fftw_mpi_init()</code> after calling
<code>MPI_Init</code> (required in all MPI programs) and before calling any
other &lsquo;<samp><span class="samp">fftw_mpi_</span></samp>&rsquo; routine.
<a name="index-MPI_005fInit-357"></a><a name="index-fftw_005fmpi_005finit-358"></a>

<p>Second, when we create the plan with <code>fftw_mpi_plan_dft_2d</code>,
analogous to <code>fftw_plan_dft_2d</code>, we pass an additional argument:
the communicator, indicating which processes will participate in the
transform (here <code>MPI_COMM_WORLD</code>, indicating all processes).
Whenever you create, execute, or destroy a plan for an MPI transform,
you must call the corresponding FFTW routine on <em>all</em> processes
in the communicator for that transform. (That is, these are
<em>collective</em> calls.) Note that the plan for the MPI transform
uses the standard <code>fftw_execute</code> and <code>fftw_destroy_plan</code>
routines (on the other hand, there are MPI-specific new-array execute
functions documented below).
<a name="index-collective-function-359"></a><a name="index-fftw_005fmpi_005fplan_005fdft_005f2d-360"></a><a name="index-MPI_005fCOMM_005fWORLD-361"></a>

<p>Third, all of the FFTW MPI routines take <code>ptrdiff_t</code> arguments
instead of <code>int</code> as in the serial FFTW. <code>ptrdiff_t</code> is a
standard C integer type which is (at least) 32 bits wide on a 32-bit
machine and 64 bits wide on a 64-bit machine. This makes it easy
to specify very large parallel transforms on a 64-bit machine. (You
can specify 64-bit transform sizes in the serial FFTW, too, but only
by using the &lsquo;<samp><span class="samp">guru64</span></samp>&rsquo; planner interface. See <a href="64_002dbit-Guru-Interface.html#g_t64_002dbit-Guru-Interface">64-bit Guru Interface</a>.)
<a name="index-ptrdiff_005ft-362"></a><a name="index-g_t64_002dbit-architecture-363"></a>

<p>Fourth, and most importantly, you don't allocate the entire
two-dimensional array on each process. Instead, you call
<code>fftw_mpi_local_size_2d</code> to find out what <em>portion</em> of the
array resides on each processor, and how much space to allocate.
Here, the portion of the array on each process is a <code>local_n0</code> by
<code>N1</code> slice of the total array, starting at index
<code>local_0_start</code>. The total number of <code>fftw_complex</code> numbers
to allocate is given by the <code>alloc_local</code> return value, which
<em>may</em> be greater than <code>local_n0 * N1</code> (in case some
intermediate calculations require additional storage). The data
distribution in FFTW's MPI interface is described in more detail in
the next section.
<a name="index-fftw_005fmpi_005flocal_005fsize_005f2d-364"></a><a name="index-data-distribution-365"></a>

<p>Given the portion of the array that resides on the local process, it
is straightforward to initialize the data (here to a function
<code>my_function</code>) and otherwise manipulate it. Of course, at the end
of the program you may want to output the data somehow, but
synchronizing this output is up to you and is beyond the scope of this
manual. (One good way to output a large multi-dimensional distributed
array in MPI to a portable binary file is to use the free HDF5
library; see the <a href="http://www.hdfgroup.org/">HDF home page</a>.)
<a name="index-HDF5-366"></a><a name="index-MPI-I_002fO-367"></a>
<!-- -->

</body></html>