<!-- Lib/fftw-3.2.1/doc/html/Simple-MPI-example.html, changeset 15:585caf503ef5 (tip), parent 14:636c989477e7: "Tidy up for ROLI", David Ronan <d.m.ronan@qmul.ac.uk>, Tue, 17 May 2016 18:50:19 +0100 -->
<html lang="en">
<head>
<title>Simple MPI example - FFTW 3.2alpha3</title>
<meta http-equiv="Content-Type" content="text/html">
<meta name="description" content="FFTW 3.2alpha3">
<meta name="generator" content="makeinfo 4.8">
<link title="Top" rel="start" href="index.html#Top">
<link rel="up" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" title="Distributed-memory FFTW with MPI">
<link rel="prev" href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW" title="Linking and Initializing MPI FFTW">
<link rel="next" href="MPI-data-distribution.html#MPI-data-distribution" title="MPI data distribution">
<link href="http://www.gnu.org/software/texinfo/" rel="generator-home" title="Texinfo Homepage">
<!--
This manual is for FFTW
(version 3.2alpha3, 14 August 2007).

Copyright (C) 2003 Matteo Frigo.

Copyright (C) 2003 Massachusetts Institute of Technology.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission
notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of
this manual under the conditions for verbatim copying, provided
that the entire resulting derived work is distributed under the
terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this
manual into another language, under the above conditions for
modified versions, except that this permission notice may be
stated in a translation approved by the Free Software Foundation.
-->
<meta http-equiv="Content-Style-Type" content="text/css">
<style type="text/css"><!--
pre.display { font-family:inherit }
pre.format { font-family:inherit }
pre.smalldisplay { font-family:inherit; font-size:smaller }
pre.smallformat { font-family:inherit; font-size:smaller }
pre.smallexample { font-size:smaller }
pre.smalllisp { font-size:smaller }
span.sc { font-variant:small-caps }
span.roman { font-family:serif; font-weight:normal; }
span.sansserif { font-family:sans-serif; font-weight:normal; }
--></style>
</head>
<body>
<div class="node">
<p>
<a name="Simple-MPI-example"></a>
Next:&nbsp;<a rel="next" accesskey="n" href="MPI-data-distribution.html#MPI-data-distribution">MPI data distribution</a>,
Previous:&nbsp;<a rel="previous" accesskey="p" href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW">Linking and Initializing MPI FFTW</a>,
Up:&nbsp;<a rel="up" accesskey="u" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI">Distributed-memory FFTW with MPI</a>
<hr>
</div>

<h3 class="section">6.3 Simple MPI example</h3>

<p>Before we document the FFTW MPI interface in detail, we begin with a
simple example outlining how one would perform a two-dimensional
<code>N0</code> by <code>N1</code> complex DFT.
<pre class="example">     #include &lt;fftw3-mpi.h&gt;

     int main(int argc, char **argv)
     {
         const ptrdiff_t N0 = ..., N1 = ...;
         fftw_plan plan;
         fftw_complex *data;
         ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

         MPI_Init(&amp;argc, &amp;argv);
         fftw_mpi_init();

         /* <span class="roman">get local data size and allocate</span> */
         alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                              &amp;local_n0, &amp;local_0_start);
         data = (fftw_complex *) fftw_malloc(sizeof(fftw_complex) * alloc_local);

         /* <span class="roman">create plan for forward DFT</span> */
         plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                     FFTW_FORWARD, FFTW_ESTIMATE);

         /* <span class="roman">initialize data to some function</span> my_function(x,y) */
         for (i = 0; i &lt; local_n0; ++i) for (j = 0; j &lt; N1; ++j)
             data[i*N1 + j] = my_function(local_0_start + i, j);

         /* <span class="roman">compute transforms, in-place, as many times as desired</span> */
         fftw_execute(plan);

         fftw_destroy_plan(plan);

         MPI_Finalize();
     }
</pre>
<p>As can be seen above, the MPI interface follows the same basic style
of allocate/plan/execute/destroy as the serial FFTW routines. All of
the MPI-specific routines are prefixed with `<samp><span class="samp">fftw_mpi_</span></samp>' instead
of `<samp><span class="samp">fftw_</span></samp>'. There are a few important differences, however:
<p>First, we must call <code>fftw_mpi_init()</code> after calling
<code>MPI_Init</code> (required in all MPI programs) and before calling any
other `<samp><span class="samp">fftw_mpi_</span></samp>' routine.
<a name="index-MPI_005fInit-340"></a><a name="index-fftw_005fmpi_005finit-341"></a>

<p>Second, when we create the plan with <code>fftw_mpi_plan_dft_2d</code>,
analogous to <code>fftw_plan_dft_2d</code>, we pass an additional argument:
the communicator, indicating which processes will participate in the
transform (here <code>MPI_COMM_WORLD</code>, indicating all processes).
Whenever you create, execute, or destroy a plan for an MPI transform,
you must call the corresponding FFTW routine on <em>all</em> processes
in the communicator for that transform. (That is, these are
<em>collective</em> calls.) Note that the plan for the MPI transform
uses the standard <code>fftw_execute</code> and <code>fftw_destroy_plan</code>
routines (the new-array execute routines also work).
<a name="index-collective-function-342"></a><a name="index-fftw_005fmpi_005fplan_005fdft_005f2d-343"></a><a name="index-MPI_005fCOMM_005fWORLD-344"></a>
<p>Third, all of the FFTW MPI routines take <code>ptrdiff_t</code> arguments
instead of <code>int</code> as for the serial FFTW. <code>ptrdiff_t</code> is a
standard C integer type which is (at least) 32 bits wide on a 32-bit
machine and 64 bits wide on a 64-bit machine. This is to make it easy
to specify very large parallel transforms on a 64-bit machine. (You
can specify 64-bit transform sizes in the serial FFTW, too, but only
by using the `<samp><span class="samp">guru64</span></samp>' planner interface. See <a href="64_002dbit-Guru-Interface.html#g_t64_002dbit-Guru-Interface">64-bit Guru Interface</a>.)
<a name="index-ptrdiff_005ft-345"></a><a name="index-g_t64_002dbit-architecture-346"></a>
<p>Fourth, and most importantly, you don't allocate the entire
two-dimensional array on each process. Instead, you call
<code>fftw_mpi_local_size_2d</code> to find out what <em>portion</em> of the
array resides on each processor, and how much space to allocate.
Here, the portion of the array on each process is a <code>local_n0</code> by
<code>N1</code> slice of the total array, starting at index
<code>local_0_start</code>. The total number of <code>fftw_complex</code> numbers
to allocate is given by the <code>alloc_local</code> return value, which
<em>may</em> be greater than <code>local_n0 * N1</code> (in case some
intermediate calculations require additional storage). The data
distribution in FFTW's MPI interface is described in more detail in
the next section.
<a name="index-fftw_005fmpi_005flocal_005fsize_005f2d-347"></a><a name="index-data-distribution-348"></a>
<p>Given the portion of the array that resides on the local process, it
is straightforward to initialize the data (here to a function
<code>my_function</code>) and otherwise manipulate it. Of course, at the end
of the program you may want to output the data somehow, but
synchronizing this output is up to you and is beyond the scope of this
manual. (One good way to output a large multi-dimensional distributed
array in MPI to a portable binary file is to use the free HDF5
library; see the <a href="http://www.hdfgroup.org/">HDF home page</a>.)
<a name="index-HDF5-349"></a>

</body></html>