annotate src/fftw-3.3.3/doc/html/2d-MPI-example.html @ 83:ae30d91d2ffe

Replace these with versions built using an older toolset (so as to avoid ABI incompatibilities when linking on Ubuntu 14.04 for packaging purposes)
author Chris Cannam
date Fri, 07 Feb 2020 11:51:13 +0000
parents 37bf6b4a2645
<html lang="en">
<head>
<title>2d MPI example - FFTW 3.3.3</title>
<meta http-equiv="Content-Type" content="text/html">
<meta name="description" content="FFTW 3.3.3">
<meta name="generator" content="makeinfo 4.13">
<link title="Top" rel="start" href="index.html#Top">
<link rel="up" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" title="Distributed-memory FFTW with MPI">
<link rel="prev" href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW" title="Linking and Initializing MPI FFTW">
<link rel="next" href="MPI-Data-Distribution.html#MPI-Data-Distribution" title="MPI Data Distribution">
<link href="http://www.gnu.org/software/texinfo/" rel="generator-home" title="Texinfo Homepage">
<!--
This manual is for FFTW
(version 3.3.3, 25 November 2012).

Copyright (C) 2003 Matteo Frigo.

Copyright (C) 2003 Massachusetts Institute of Technology.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission
notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of
this manual under the conditions for verbatim copying, provided
that the entire resulting derived work is distributed under the
terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this
manual into another language, under the above conditions for
modified versions, except that this permission notice may be
stated in a translation approved by the Free Software Foundation.
-->
<meta http-equiv="Content-Style-Type" content="text/css">
<style type="text/css"><!--
pre.display { font-family:inherit }
pre.format { font-family:inherit }
pre.smalldisplay { font-family:inherit; font-size:smaller }
pre.smallformat { font-family:inherit; font-size:smaller }
pre.smallexample { font-size:smaller }
pre.smalllisp { font-size:smaller }
span.sc { font-variant:small-caps }
span.roman { font-family:serif; font-weight:normal; }
span.sansserif { font-family:sans-serif; font-weight:normal; }
--></style>
</head>
<body>
<div class="node">
<a name="g_t2d-MPI-example"></a>
<p>
Next:&nbsp;<a rel="next" accesskey="n" href="MPI-Data-Distribution.html#MPI-Data-Distribution">MPI Data Distribution</a>,
Previous:&nbsp;<a rel="previous" accesskey="p" href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW">Linking and Initializing MPI FFTW</a>,
Up:&nbsp;<a rel="up" accesskey="u" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI">Distributed-memory FFTW with MPI</a>
<hr>
</div>

<h3 class="section">6.3 2d MPI example</h3>

<p>Before we document the FFTW MPI interface in detail, we begin with a
simple example outlining how one would perform a two-dimensional
<code>N0</code> by <code>N1</code> complex DFT.

<pre class="example">#include &lt;fftw3-mpi.h&gt;

int main(int argc, char **argv)
{
    const ptrdiff_t N0 = ..., N1 = ...;
    fftw_plan plan;
    fftw_complex *data;
    ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

    MPI_Init(&amp;argc, &amp;argv);
    fftw_mpi_init();

    /* <span class="roman">get local data size and allocate</span> */
    alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                         &amp;local_n0, &amp;local_0_start);
    data = fftw_alloc_complex(alloc_local);

    /* <span class="roman">create plan for in-place forward DFT</span> */
    plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                FFTW_FORWARD, FFTW_ESTIMATE);

    /* <span class="roman">initialize data to some function</span> my_function(x,y) */
    for (i = 0; i &lt; local_n0; ++i)
        for (j = 0; j &lt; N1; ++j)
            data[i*N1 + j] = my_function(local_0_start + i, j);

    /* <span class="roman">compute transforms, in-place, as many times as desired</span> */
    fftw_execute(plan);

    fftw_destroy_plan(plan);

    MPI_Finalize();
}
</pre>
<p>As can be seen above, the MPI interface follows the same basic style
of allocate/plan/execute/destroy as the serial FFTW routines. All of
the MPI-specific routines are prefixed with &lsquo;<samp><span class="samp">fftw_mpi_</span></samp>&rsquo; instead
of &lsquo;<samp><span class="samp">fftw_</span></samp>&rsquo;. There are a few important differences, however:

<p>First, we must call <code>fftw_mpi_init()</code> after calling
<code>MPI_Init</code> (required in all MPI programs) and before calling any
other &lsquo;<samp><span class="samp">fftw_mpi_</span></samp>&rsquo; routine.
<a name="index-MPI_005fInit-357"></a><a name="index-fftw_005fmpi_005finit-358"></a>
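
<p>For example, the required ordering might be sketched as follows
(an illustrative fragment, not taken from the manual's own examples;
note also that <code>fftw_mpi_cleanup</code>, the MPI analogue of
<code>fftw_cleanup</code>, deallocates FFTW's persistent data and should
be called before <code>MPI_Finalize</code>):

<pre class="example">    MPI_Init(&amp;argc, &amp;argv);   /* initialize MPI first */
    fftw_mpi_init();           /* then FFTW's MPI layer */

    /* ... all other fftw_mpi_ calls go here ... */

    fftw_mpi_cleanup();        /* release FFTW's persistent data */
    MPI_Finalize();            /* last: shut down MPI */
</pre>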

<p>Second, when we create the plan with <code>fftw_mpi_plan_dft_2d</code>,
analogous to <code>fftw_plan_dft_2d</code>, we pass an additional argument:
the communicator, indicating which processes will participate in the
transform (here <code>MPI_COMM_WORLD</code>, indicating all processes).
Whenever you create, execute, or destroy a plan for an MPI transform,
you must call the corresponding FFTW routine on <em>all</em> processes
in the communicator for that transform. (That is, these are
<em>collective</em> calls.) Note that the plan for the MPI transform
uses the standard <code>fftw_execute</code> and <code>fftw_destroy_plan</code>
routines (on the other hand, there are MPI-specific new-array execute
functions documented below).
<a name="index-collective-function-359"></a><a name="index-fftw_005fmpi_005fplan_005fdft_005f2d-360"></a><a name="index-MPI_005fCOMM_005fWORLD-361"></a>
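
<p>For instance, to perform the transform on only a subset of the
processes, one could pass a smaller communicator. The following sketch
(illustrative only, not from the manual) uses the standard
<code>MPI_Comm_split</code> call to put the first half of the ranks in
their own communicator; only those processes then make the (collective)
FFTW calls on it:

<pre class="example">    MPI_Comm half;
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;rank);
    MPI_Comm_size(MPI_COMM_WORLD, &amp;size);

    /* color 0: first half of the ranks; color 1: the rest */
    MPI_Comm_split(MPI_COMM_WORLD, rank &lt; size/2 ? 0 : 1, rank, &amp;half);

    if (rank &lt; size/2) {  /* only these ranks participate */
         alloc_local = fftw_mpi_local_size_2d(N0, N1, half,
                                              &amp;local_n0, &amp;local_0_start);
         data = fftw_alloc_complex(alloc_local);
         plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, half,
                                     FFTW_FORWARD, FFTW_ESTIMATE);
    }
</pre>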

<p>Third, all of the FFTW MPI routines take <code>ptrdiff_t</code> arguments
instead of <code>int</code> as for the serial FFTW. <code>ptrdiff_t</code> is a
standard C integer type which is (at least) 32 bits wide on a 32-bit
machine and 64 bits wide on a 64-bit machine. This is to make it easy
to specify very large parallel transforms on a 64-bit machine. (You
can specify 64-bit transform sizes in the serial FFTW, too, but only
by using the &lsquo;<samp><span class="samp">guru64</span></samp>&rsquo; planner interface. See <a href="64_002dbit-Guru-Interface.html#g_t64_002dbit-Guru-Interface">64-bit Guru Interface</a>.)
<a name="index-ptrdiff_005ft-362"></a><a name="index-g_t64_002dbit-architecture-363"></a>
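
<p>As a concrete illustration (with hypothetical sizes, not taken from
the manual), a 65536&nbsp;&times;&nbsp;65536 transform has
2<sup>32</sup> elements; that count overflows a 32-bit <code>int</code>
but fits comfortably in a 64-bit <code>ptrdiff_t</code>:

<pre class="example">    /* requires &lt;stdio.h&gt; for printf */
    const ptrdiff_t N0 = 65536, N1 = 65536;
    ptrdiff_t total = N0 * N1;  /* 4294967296: too large for a 32-bit int */
    printf("total elements: %td\n", total);  /* %td prints a ptrdiff_t */
</pre>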

<p>Fourth, and most importantly, you don't allocate the entire
two-dimensional array on each process. Instead, you call
<code>fftw_mpi_local_size_2d</code> to find out what <em>portion</em> of the
array resides on each processor, and how much space to allocate.
Here, the portion of the array on each process is a <code>local_n0</code> by
<code>N1</code> slice of the total array, starting at index
<code>local_0_start</code>. The total number of <code>fftw_complex</code> numbers
to allocate is given by the <code>alloc_local</code> return value, which
<em>may</em> be greater than <code>local_n0 * N1</code> (in case some
intermediate calculations require additional storage). The data
distribution in FFTW's MPI interface is described in more detail in
the next section.
<a name="index-fftw_005fmpi_005flocal_005fsize_005f2d-364"></a><a name="index-data-distribution-365"></a>
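
<p>To see the distribution concretely, each process can report its own
slab. A minimal sketch (illustrative, not from the manual; it assumes
<code>rank</code> holds the process's rank in <code>MPI_COMM_WORLD</code>):

<pre class="example">    alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                         &amp;local_n0, &amp;local_0_start);
    /* this process owns rows [local_0_start, local_0_start + local_n0) */
    printf("rank %d: rows %td..%td, allocating %td fftw_complex\n",
           rank, local_0_start, local_0_start + local_n0 - 1, alloc_local);
</pre>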

<p>Given the portion of the array that resides on the local process, it
is straightforward to initialize the data (here to a function
<code>my_function</code>) and otherwise manipulate it. Of course, at the end
of the program you may want to output the data somehow, but
synchronizing this output is up to you and is beyond the scope of this
manual. (One good way to output a large multi-dimensional distributed
array in MPI to a portable binary file is to use the free HDF5
library; see the <a href="http://www.hdfgroup.org/">HDF home page</a>.)
<a name="index-HDF5-366"></a><a name="index-MPI-I_002fO-367"></a>
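
<p>One simple (if unscalable, and not strictly guaranteed by the MPI
standard) way to serialize such output is for the processes to take
turns in rank order. The following sketch (again illustrative, not from
the manual; it assumes <code>rank</code> and <code>size</code> are the rank
and size of <code>MPI_COMM_WORLD</code>) prints the first element of each
local row, accessing the real and imaginary parts of an
<code>fftw_complex</code> as <code>[0]</code> and <code>[1]</code>:

<pre class="example">    int r;
    for (r = 0; r &lt; size; ++r) {
         if (r == rank) {
              for (i = 0; i &lt; local_n0; ++i)
                   printf("row %td: %g%+gi ...\n", local_0_start + i,
                          data[i*N1][0], data[i*N1][1]);
              fflush(stdout);
         }
         MPI_Barrier(MPI_COMM_WORLD);  /* let rank r finish before rank r+1 */
    }
</pre>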

</body></html>