annotate src/fftw-3.3.8/doc/html/2d-MPI-example.html @ 83:ae30d91d2ffe

Replace these with versions built using an older toolset (so as to avoid ABI incompatibilities when linking on Ubuntu 14.04 for packaging purposes)
author Chris Cannam
date Fri, 07 Feb 2020 11:51:13 +0000
parents d0c2a83c1364
children
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<!-- This manual is for FFTW
(version 3.3.8, 24 May 2018).

Copyright (C) 2003 Matteo Frigo.

Copyright (C) 2003 Massachusetts Institute of Technology.

Permission is granted to make and distribute verbatim copies of this
manual provided the copyright notice and this permission notice are
preserved on all copies.

Permission is granted to copy and distribute modified versions of this
manual under the conditions for verbatim copying, provided that the
entire resulting derived work is distributed under the terms of a
permission notice identical to this one.

Permission is granted to copy and distribute translations of this manual
into another language, under the above conditions for modified versions,
except that this permission notice may be stated in a translation
approved by the Free Software Foundation. -->
<!-- Created by GNU Texinfo 6.3, http://www.gnu.org/software/texinfo/ -->
<head>
<title>FFTW 3.3.8: 2d MPI example</title>

<meta name="description" content="FFTW 3.3.8: 2d MPI example">
<meta name="keywords" content="FFTW 3.3.8: 2d MPI example">
<meta name="resource-type" content="document">
<meta name="distribution" content="global">
<meta name="Generator" content="makeinfo">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<link href="index.html#Top" rel="start" title="Top">
<link href="Concept-Index.html#Concept-Index" rel="index" title="Concept Index">
<link href="index.html#SEC_Contents" rel="contents" title="Table of Contents">
<link href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" rel="up" title="Distributed-memory FFTW with MPI">
<link href="MPI-Data-Distribution.html#MPI-Data-Distribution" rel="next" title="MPI Data Distribution">
<link href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW" rel="prev" title="Linking and Initializing MPI FFTW">
<style type="text/css">
<!--
a.summary-letter {text-decoration: none}
blockquote.indentedblock {margin-right: 0em}
blockquote.smallindentedblock {margin-right: 0em; font-size: smaller}
blockquote.smallquotation {font-size: smaller}
div.display {margin-left: 3.2em}
div.example {margin-left: 3.2em}
div.lisp {margin-left: 3.2em}
div.smalldisplay {margin-left: 3.2em}
div.smallexample {margin-left: 3.2em}
div.smalllisp {margin-left: 3.2em}
kbd {font-style: oblique}
pre.display {font-family: inherit}
pre.format {font-family: inherit}
pre.menu-comment {font-family: serif}
pre.menu-preformatted {font-family: serif}
pre.smalldisplay {font-family: inherit; font-size: smaller}
pre.smallexample {font-size: smaller}
pre.smallformat {font-family: inherit; font-size: smaller}
pre.smalllisp {font-size: smaller}
span.nolinebreak {white-space: nowrap}
span.roman {font-family: initial; font-weight: normal}
span.sansserif {font-family: sans-serif; font-weight: normal}
ul.no-bullet {list-style: none}
-->
</style>


</head>

<body lang="en">
<a name="g_t2d-MPI-example"></a>
<div class="header">
<p>
Next: <a href="MPI-Data-Distribution.html#MPI-Data-Distribution" accesskey="n" rel="next">MPI Data Distribution</a>, Previous: <a href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW" accesskey="p" rel="prev">Linking and Initializing MPI FFTW</a>, Up: <a href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" accesskey="u" rel="up">Distributed-memory FFTW with MPI</a> &nbsp; [<a href="index.html#SEC_Contents" title="Table of contents" rel="contents">Contents</a>][<a href="Concept-Index.html#Concept-Index" title="Index" rel="index">Index</a>]</p>
</div>
<hr>
<a name="g_t2d-MPI-example-1"></a>
<h3 class="section">6.3 2d MPI example</h3>

<p>Before we document the FFTW MPI interface in detail, we begin with a
simple example outlining how one would perform a two-dimensional
<code>N0</code> by <code>N1</code> complex DFT.
</p>
<div class="example">
<pre class="example">#include &lt;fftw3-mpi.h&gt;

int main(int argc, char **argv)
{
    const ptrdiff_t N0 = ..., N1 = ...;
    fftw_plan plan;
    fftw_complex *data;
    ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

    MPI_Init(&amp;argc, &amp;argv);
    fftw_mpi_init();

    /* <span class="roman">get local data size and allocate</span> */
    alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                         &amp;local_n0, &amp;local_0_start);
    data = fftw_alloc_complex(alloc_local);

    /* <span class="roman">create plan for in-place forward DFT</span> */
    plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                FFTW_FORWARD, FFTW_ESTIMATE);

    /* <span class="roman">initialize data to some function</span> my_function(x,y) */
    for (i = 0; i &lt; local_n0; ++i)
        for (j = 0; j &lt; N1; ++j)
            data[i*N1 + j] = my_function(local_0_start + i, j);

    /* <span class="roman">compute transforms, in-place, as many times as desired</span> */
    fftw_execute(plan);

    fftw_destroy_plan(plan);

    MPI_Finalize();
}
</pre></div>

<p>As can be seen above, the MPI interface follows the same basic style
of allocate/plan/execute/destroy as the serial FFTW routines. All of
the MPI-specific routines are prefixed with &lsquo;<samp>fftw_mpi_</samp>&rsquo; instead
of &lsquo;<samp>fftw_</samp>&rsquo;. There are a few important differences, however:
</p>
<p>First, we must call <code>fftw_mpi_init()</code> after calling
<code>MPI_Init</code> (required in all MPI programs) and before calling any
other &lsquo;<samp>fftw_mpi_</samp>&rsquo; routine.
<a name="index-MPI_005fInit"></a>
<a name="index-fftw_005fmpi_005finit-1"></a>
</p>

<p>Second, when we create the plan with <code>fftw_mpi_plan_dft_2d</code>,
analogous to <code>fftw_plan_dft_2d</code>, we pass an additional argument:
the communicator, indicating which processes will participate in the
transform (here <code>MPI_COMM_WORLD</code>, indicating all processes).
Whenever you create, execute, or destroy a plan for an MPI transform,
you must call the corresponding FFTW routine on <em>all</em> processes
in the communicator for that transform. (That is, these are
<em>collective</em> calls.) Note that the plan for an MPI transform
uses the standard <code>fftw_execute</code> and <code>fftw_destroy_plan</code>
routines (on the other hand, there are also MPI-specific new-array
execute functions, documented below).
<a name="index-collective-function"></a>
<a name="index-fftw_005fmpi_005fplan_005fdft_005f2d"></a>
<a name="index-MPI_005fCOMM_005fWORLD-1"></a>
</p>

<p>Third, all of the FFTW MPI routines take <code>ptrdiff_t</code> arguments
instead of <code>int</code> as for the serial FFTW. <code>ptrdiff_t</code> is a
standard C integer type which is (at least) 32 bits wide on a 32-bit
machine and 64 bits wide on a 64-bit machine. This is to make it easy
to specify very large parallel transforms on a 64-bit machine. (You
can specify 64-bit transform sizes in the serial FFTW, too, but only
by using the &lsquo;<samp>guru64</samp>&rsquo; planner interface. See <a href="64_002dbit-Guru-Interface.html#g_t64_002dbit-Guru-Interface">64-bit Guru Interface</a>.)
<a name="index-ptrdiff_005ft-1"></a>
<a name="index-64_002dbit-architecture-1"></a>
</p>

<p>Fourth, and most importantly, you don&rsquo;t allocate the entire
two-dimensional array on each process. Instead, you call
<code>fftw_mpi_local_size_2d</code> to find out what <em>portion</em> of the
array resides on each processor, and how much space to allocate.
Here, the portion of the array on each process is a <code>local_n0</code> by
<code>N1</code> slice of the total array, starting at index
<code>local_0_start</code>. The total number of <code>fftw_complex</code> numbers
to allocate is given by the <code>alloc_local</code> return value, which
<em>may</em> be greater than <code>local_n0 * N1</code> (in case some
intermediate calculations require additional storage). The data
distribution in FFTW&rsquo;s MPI interface is described in more detail by
the next section.
<a name="index-fftw_005fmpi_005flocal_005fsize_005f2d"></a>
<a name="index-data-distribution-1"></a>
</p>

<p>Given the portion of the array that resides on the local process, it
is straightforward to initialize the data (here to a function
<code>my_function</code>) and otherwise manipulate it. Of course, at the end
of the program you may want to output the data somehow, but
synchronizing this output is up to you and is beyond the scope of this
manual. (One good way to output a large multi-dimensional distributed
array in MPI to a portable binary file is to use the free HDF5
library; see the <a href="http://www.hdfgroup.org/">HDF home page</a>.)
<a name="index-HDF5"></a>
<a name="index-MPI-I_002fO"></a>
</p>
<hr>
<div class="header">
<p>
Next: <a href="MPI-Data-Distribution.html#MPI-Data-Distribution" accesskey="n" rel="next">MPI Data Distribution</a>, Previous: <a href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW" accesskey="p" rel="prev">Linking and Initializing MPI FFTW</a>, Up: <a href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" accesskey="u" rel="up">Distributed-memory FFTW with MPI</a> &nbsp; [<a href="index.html#SEC_Contents" title="Table of contents" rel="contents">Contents</a>][<a href="Concept-Index.html#Concept-Index" title="Index" rel="index">Index</a>]</p>
</div>



</body>
</html>