comparison src/fftw-3.3.8/doc/html/2d-MPI-example.html @ 82:d0c2a83c1364

Add FFTW 3.3.8 source, and a Linux build
author Chris Cannam
date Tue, 19 Nov 2019 14:52:55 +0000
1 <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
2 <html>
3 <!-- This manual is for FFTW
4 (version 3.3.8, 24 May 2018).
5
6 Copyright (C) 2003 Matteo Frigo.
7
8 Copyright (C) 2003 Massachusetts Institute of Technology.
9
10 Permission is granted to make and distribute verbatim copies of this
11 manual provided the copyright notice and this permission notice are
12 preserved on all copies.
13
14 Permission is granted to copy and distribute modified versions of this
15 manual under the conditions for verbatim copying, provided that the
16 entire resulting derived work is distributed under the terms of a
17 permission notice identical to this one.
18
19 Permission is granted to copy and distribute translations of this manual
20 into another language, under the above conditions for modified versions,
21 except that this permission notice may be stated in a translation
22 approved by the Free Software Foundation. -->
23 <!-- Created by GNU Texinfo 6.3, http://www.gnu.org/software/texinfo/ -->
24 <head>
25 <title>FFTW 3.3.8: 2d MPI example</title>
26
27 <meta name="description" content="FFTW 3.3.8: 2d MPI example">
28 <meta name="keywords" content="FFTW 3.3.8: 2d MPI example">
29 <meta name="resource-type" content="document">
30 <meta name="distribution" content="global">
31 <meta name="Generator" content="makeinfo">
32 <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
33 <link href="index.html#Top" rel="start" title="Top">
34 <link href="Concept-Index.html#Concept-Index" rel="index" title="Concept Index">
35 <link href="index.html#SEC_Contents" rel="contents" title="Table of Contents">
36 <link href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" rel="up" title="Distributed-memory FFTW with MPI">
37 <link href="MPI-Data-Distribution.html#MPI-Data-Distribution" rel="next" title="MPI Data Distribution">
38 <link href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW" rel="prev" title="Linking and Initializing MPI FFTW">
39 <style type="text/css">
40 <!--
41 a.summary-letter {text-decoration: none}
42 blockquote.indentedblock {margin-right: 0em}
43 blockquote.smallindentedblock {margin-right: 0em; font-size: smaller}
44 blockquote.smallquotation {font-size: smaller}
45 div.display {margin-left: 3.2em}
46 div.example {margin-left: 3.2em}
47 div.lisp {margin-left: 3.2em}
48 div.smalldisplay {margin-left: 3.2em}
49 div.smallexample {margin-left: 3.2em}
50 div.smalllisp {margin-left: 3.2em}
51 kbd {font-style: oblique}
52 pre.display {font-family: inherit}
53 pre.format {font-family: inherit}
54 pre.menu-comment {font-family: serif}
55 pre.menu-preformatted {font-family: serif}
56 pre.smalldisplay {font-family: inherit; font-size: smaller}
57 pre.smallexample {font-size: smaller}
58 pre.smallformat {font-family: inherit; font-size: smaller}
59 pre.smalllisp {font-size: smaller}
60 span.nolinebreak {white-space: nowrap}
61 span.roman {font-family: initial; font-weight: normal}
62 span.sansserif {font-family: sans-serif; font-weight: normal}
63 ul.no-bullet {list-style: none}
64 -->
65 </style>
66
67
68 </head>
69
70 <body lang="en">
71 <a name="g_t2d-MPI-example"></a>
72 <div class="header">
73 <p>
74 Next: <a href="MPI-Data-Distribution.html#MPI-Data-Distribution" accesskey="n" rel="next">MPI Data Distribution</a>, Previous: <a href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW" accesskey="p" rel="prev">Linking and Initializing MPI FFTW</a>, Up: <a href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" accesskey="u" rel="up">Distributed-memory FFTW with MPI</a> &nbsp; [<a href="index.html#SEC_Contents" title="Table of contents" rel="contents">Contents</a>][<a href="Concept-Index.html#Concept-Index" title="Index" rel="index">Index</a>]</p>
75 </div>
76 <hr>
77 <a name="g_t2d-MPI-example-1"></a>
78 <h3 class="section">6.3 2d MPI example</h3>
79
80 <p>Before we document the FFTW MPI interface in detail, we begin with a
81 simple example outlining how one would perform a two-dimensional
82 <code>N0</code> by <code>N1</code> complex DFT.
83 </p>
84 <div class="example">
85 <pre class="example">#include &lt;fftw3-mpi.h&gt;
86
87 int main(int argc, char **argv)
88 {
89 const ptrdiff_t N0 = ..., N1 = ...;
90 fftw_plan plan;
91 fftw_complex *data;
92 ptrdiff_t alloc_local, local_n0, local_0_start, i, j;
93
94 MPI_Init(&amp;argc, &amp;argv);
95 fftw_mpi_init();
96
97 /* <span class="roman">get local data size and allocate</span> */
98 alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
99 &amp;local_n0, &amp;local_0_start);
100 data = fftw_alloc_complex(alloc_local);
101
102 /* <span class="roman">create plan for in-place forward DFT</span> */
103 plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
104 FFTW_FORWARD, FFTW_ESTIMATE);
105
106 /* <span class="roman">initialize data to some function</span> my_function(x,y) */
107 for (i = 0; i &lt; local_n0; ++i) for (j = 0; j &lt; N1; ++j)
108 data[i*N1 + j] = my_function(local_0_start + i, j);
109
110 /* <span class="roman">compute transforms, in-place, as many times as desired</span> */
111 fftw_execute(plan);
112
113 fftw_destroy_plan(plan);
114
115 MPI_Finalize();
116 }
117 </pre></div>
118
119 <p>As can be seen above, the MPI interface follows the same basic style
120 of allocate/plan/execute/destroy as the serial FFTW routines. All of
121 the MPI-specific routines are prefixed with &lsquo;<samp>fftw_mpi_</samp>&rsquo; instead
122 of &lsquo;<samp>fftw_</samp>&rsquo;. There are a few important differences, however:
123 </p>
124 <p>First, we must call <code>fftw_mpi_init()</code> after calling
125 <code>MPI_Init</code> (required in all MPI programs) and before calling any
126 other &lsquo;<samp>fftw_mpi_</samp>&rsquo; routine.
127 <a name="index-MPI_005fInit"></a>
128 <a name="index-fftw_005fmpi_005finit-1"></a>
129 </p>
130
131 <p>Second, when we create the plan with <code>fftw_mpi_plan_dft_2d</code>,
132 analogous to <code>fftw_plan_dft_2d</code>, we pass an additional argument:
133 the communicator, indicating which processes will participate in the
134 transform (here <code>MPI_COMM_WORLD</code>, indicating all processes).
135 Whenever you create, execute, or destroy a plan for an MPI transform,
136 you must call the corresponding FFTW routine on <em>all</em> processes
137 in the communicator for that transform. (That is, these are
138 <em>collective</em> calls.) Note that the plan for the MPI transform
139 uses the standard <code>fftw_execute</code> and <code>fftw_destroy_plan</code> routines
140 (on the other hand, there are MPI-specific new-array execute functions
141 documented below).
142 <a name="index-collective-function"></a>
143 <a name="index-fftw_005fmpi_005fplan_005fdft_005f2d"></a>
144 <a name="index-MPI_005fCOMM_005fWORLD-1"></a>
145 </p>
146
147 <p>Third, all of the FFTW MPI routines take <code>ptrdiff_t</code> arguments
148 instead of <code>int</code> as for the serial FFTW. <code>ptrdiff_t</code> is a
149 standard C integer type which is (at least) 32 bits wide on a 32-bit
150 machine and 64 bits wide on a 64-bit machine. This is to make it easy
151 to specify very large parallel transforms on a 64-bit machine. (You
152 can specify 64-bit transform sizes in the serial FFTW, too, but only
153 by using the &lsquo;<samp>guru64</samp>&rsquo; planner interface. See <a href="64_002dbit-Guru-Interface.html#g_t64_002dbit-Guru-Interface">64-bit Guru Interface</a>.)
154 <a name="index-ptrdiff_005ft-1"></a>
155 <a name="index-64_002dbit-architecture-1"></a>
156 </p>
157
158 <p>Fourth, and most importantly, you don&rsquo;t allocate the entire
159 two-dimensional array on each process. Instead, you call
160 <code>fftw_mpi_local_size_2d</code> to find out what <em>portion</em> of the
161 array resides on each processor, and how much space to allocate.
162 Here, the portion of the array on each process is a <code>local_n0</code> by
163 <code>N1</code> slice of the total array, starting at index
164 <code>local_0_start</code>. The total number of <code>fftw_complex</code> numbers
165 to allocate is given by the <code>alloc_local</code> return value, which
166 <em>may</em> be greater than <code>local_n0 * N1</code> (in case some
167 intermediate calculations require additional storage). The data
168 distribution in FFTW&rsquo;s MPI interface is described in more detail by
169 the next section.
170 <a name="index-fftw_005fmpi_005flocal_005fsize_005f2d"></a>
171 <a name="index-data-distribution-1"></a>
172 </p>
173
174 <p>Given the portion of the array that resides on the local process, it
175 is straightforward to initialize the data (here to a function
176 <code>myfunction</code>) and otherwise manipulate it. Of course, at the end
177 of the program you may want to output the data somehow, but
178 synchronizing this output is up to you and is beyond the scope of this
179 manual. (One good way to output a large multi-dimensional distributed
180 array in MPI to a portable binary file is to use the free HDF5
181 library; see the <a href="http://www.hdfgroup.org/">HDF home page</a>.)
182 <a name="index-HDF5"></a>
183 <a name="index-MPI-I_002fO"></a>
184 </p>
185 <hr>
186 <div class="header">
187 <p>
188 Next: <a href="MPI-Data-Distribution.html#MPI-Data-Distribution" accesskey="n" rel="next">MPI Data Distribution</a>, Previous: <a href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW" accesskey="p" rel="prev">Linking and Initializing MPI FFTW</a>, Up: <a href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" accesskey="u" rel="up">Distributed-memory FFTW with MPI</a> &nbsp; [<a href="index.html#SEC_Contents" title="Table of contents" rel="contents">Contents</a>][<a href="Concept-Index.html#Concept-Index" title="Index" rel="index">Index</a>]</p>
189 </div>
190
191
192
193 </body>
194 </html>