<html lang="en">
<head>
<title>Simple MPI example - FFTW 3.2alpha3</title>
<meta http-equiv="Content-Type" content="text/html">
<meta name="description" content="FFTW 3.2alpha3">
<meta name="generator" content="makeinfo 4.8">
<link title="Top" rel="start" href="index.html#Top">
<link rel="up" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" title="Distributed-memory FFTW with MPI">
<link rel="prev" href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW" title="Linking and Initializing MPI FFTW">
<link rel="next" href="MPI-data-distribution.html#MPI-data-distribution" title="MPI data distribution">
<link href="http://www.gnu.org/software/texinfo/" rel="generator-home" title="Texinfo Homepage">
<!--
This manual is for FFTW
(version 3.2alpha3, 14 August 2007).

Copyright (C) 2003 Matteo Frigo.

Copyright (C) 2003 Massachusetts Institute of Technology.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission
notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of
this manual under the conditions for verbatim copying, provided
that the entire resulting derived work is distributed under the
terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this
manual into another language, under the above conditions for
modified versions, except that this permission notice may be
stated in a translation approved by the Free Software Foundation.
-->
<meta http-equiv="Content-Style-Type" content="text/css">
<style type="text/css"><!--
  pre.display { font-family:inherit }
  pre.format { font-family:inherit }
  pre.smalldisplay { font-family:inherit; font-size:smaller }
  pre.smallformat { font-family:inherit; font-size:smaller }
  pre.smallexample { font-size:smaller }
  pre.smalllisp { font-size:smaller }
  span.sc { font-variant:small-caps }
  span.roman { font-family:serif; font-weight:normal; }
  span.sansserif { font-family:sans-serif; font-weight:normal; }
--></style>
</head>
<body>
<div class="node">
<p>
<a name="Simple-MPI-example"></a>
Next: <a rel="next" accesskey="n" href="MPI-data-distribution.html#MPI-data-distribution">MPI data distribution</a>,
Previous: <a rel="previous" accesskey="p" href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW">Linking and Initializing MPI FFTW</a>,
Up: <a rel="up" accesskey="u" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI">Distributed-memory FFTW with MPI</a>
<hr>
</div>

<h3 class="section">6.3 Simple MPI example</h3>
<p>Before we document the FFTW MPI interface in detail, we begin with a
simple example outlining how one would perform a two-dimensional
<code>N0</code> by <code>N1</code> complex DFT.
<pre class="example">     #include &lt;fftw3-mpi.h&gt;

     int main(int argc, char **argv)
     {
         const ptrdiff_t N0 = ..., N1 = ...;
         fftw_plan plan;
         fftw_complex *data;
         ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

         MPI_Init(&amp;argc, &amp;argv);
         fftw_mpi_init();

         /* <span class="roman">get local data size and allocate</span> */
         alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                              &amp;local_n0, &amp;local_0_start);
         data = (fftw_complex *) fftw_malloc(sizeof(fftw_complex) * alloc_local);

         /* <span class="roman">create plan for forward DFT</span> */
         plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                     FFTW_FORWARD, FFTW_ESTIMATE);

         /* <span class="roman">initialize data to some function</span> my_function(x,y) */
         for (i = 0; i &lt; local_n0; ++i) for (j = 0; j &lt; N1; ++j)
             data[i*N1 + j] = my_function(local_0_start + i, j);

         /* <span class="roman">compute transforms, in-place, as many times as desired</span> */
         fftw_execute(plan);

         fftw_destroy_plan(plan);

         MPI_Finalize();
     }
</pre>
<p>As can be seen above, the MPI interface follows the same basic style
of allocate/plan/execute/destroy as the serial FFTW routines.  All of
the MPI-specific routines are prefixed with `<samp><span class="samp">fftw_mpi_</span></samp>' instead
of `<samp><span class="samp">fftw_</span></samp>'.  There are a few important differences, however:
<p>First, we must call <code>fftw_mpi_init()</code> after calling
<code>MPI_Init</code> (required in all MPI programs) and before calling any
other `<samp><span class="samp">fftw_mpi_</span></samp>' routine.
<a name="index-MPI_005fInit-340"></a><a name="index-fftw_005fmpi_005finit-341"></a>
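<p>A minimal sketch of this ordering follows; we assume here that the
interface also provides <code>fftw_mpi_cleanup</code>, the MPI analogue of
<code>fftw_cleanup</code>, to release its internal data at the end:

<pre class="example">     MPI_Init(&amp;argc, &amp;argv);  /* <span class="roman">first, as in all MPI programs</span> */
     fftw_mpi_init();          /* <span class="roman">before any other fftw_mpi_ routine</span> */

     /* <span class="roman">... create, execute, and destroy MPI plans here ...</span> */

     fftw_mpi_cleanup();       /* <span class="roman">after all MPI plans are destroyed</span> */
     MPI_Finalize();           /* <span class="roman">last</span> */
</pre>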
<p>Second, when we create the plan with <code>fftw_mpi_plan_dft_2d</code>,
analogous to <code>fftw_plan_dft_2d</code>, we pass an additional argument:
the communicator, indicating which processes will participate in the
transform (here <code>MPI_COMM_WORLD</code>, indicating all processes).
Whenever you create, execute, or destroy a plan for an MPI transform,
you must call the corresponding FFTW routine on <em>all</em> processes
in the communicator for that transform.  (That is, these are
<em>collective</em> calls.)  Note that the plan for the MPI transform
uses the standard <code>fftw_execute</code> and <code>fftw_destroy_plan</code>
routines (the new-array execute routines also work).
<a name="index-collective-function-342"></a><a name="index-fftw_005fmpi_005fplan_005fdft_005f2d-343"></a><a name="index-MPI_005fCOMM_005fWORLD-344"></a>
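<p>For reference, a sketch of the planner's signature as used above; aside
from the <code>ptrdiff_t</code> sizes and the communicator argument, it
mirrors the serial <code>fftw_plan_dft_2d</code>:

<pre class="example">     fftw_plan fftw_mpi_plan_dft_2d(ptrdiff_t n0, ptrdiff_t n1,
                                    fftw_complex *in, fftw_complex *out,
                                    MPI_Comm comm, int sign, unsigned flags);
</pre>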
<p>Third, all of the FFTW MPI routines take <code>ptrdiff_t</code> arguments
instead of <code>int</code> as for the serial FFTW.  <code>ptrdiff_t</code> is a
standard C integer type which is (at least) 32 bits wide on a 32-bit
machine and 64 bits wide on a 64-bit machine.  This is to make it easy
to specify very large parallel transforms on a 64-bit machine.  (You
can specify 64-bit transform sizes in the serial FFTW, too, but only
by using the `<samp><span class="samp">guru64</span></samp>' planner interface.  See <a href="64_002dbit-Guru-Interface.html#g_t64_002dbit-Guru-Interface">64-bit Guru Interface</a>.)
<a name="index-ptrdiff_005ft-345"></a><a name="index-g_t64_002dbit-architecture-346"></a>
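<p>For example, on a 64-bit machine one can request a transform whose
total size overflows a 32-bit <code>int</code> simply by writing the sizes
directly (the dimensions below are merely illustrative):

<pre class="example">     const ptrdiff_t N0 = 1048576, N1 = 8192;  /* <span class="roman">2<sup>33</sup> points in total</span> */
</pre>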
<p>Fourth, and most importantly, you don't allocate the entire
two-dimensional array on each process.  Instead, you call
<code>fftw_mpi_local_size_2d</code> to find out what portion of the
array resides on each processor, and how much space to allocate.
Here, the portion of the array on each process is a <code>local_n0</code> by
<code>N1</code> slice of the total array, starting at index
<code>local_0_start</code>.  The total number of <code>fftw_complex</code> numbers
to allocate is given by the <code>alloc_local</code> return value, which
<em>may</em> be greater than <code>local_n0 * N1</code> (in case some
intermediate calculations require additional storage).  The data
distribution in FFTW's MPI interface is described in more detail in
the next section.
<a name="index-fftw_005fmpi_005flocal_005fsize_005f2d-347"></a><a name="index-data-distribution-348"></a>
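<p>A sketch of this routine's signature, and of the local indexing it
implies, both as used in the example above:

<pre class="example">     ptrdiff_t fftw_mpi_local_size_2d(ptrdiff_t n0, ptrdiff_t n1,
                                      MPI_Comm comm,
                                      ptrdiff_t *local_n0,
                                      ptrdiff_t *local_0_start);

     /* <span class="roman">local element</span> (i,j) <span class="roman">is global element</span>
        (local_0_start + i, j), <span class="roman">stored at</span> data[i*N1 + j] */
</pre>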
<p>Given the portion of the array that resides on the local process, it
is straightforward to initialize the data (here to a function
<code>my_function</code>) and otherwise manipulate it.  Of course, at the end
of the program you may want to output the data somehow, but
synchronizing this output is up to you and is beyond the scope of this
manual.  (One good way to output a large multi-dimensional distributed
array in MPI to a portable binary file is to use the free HDF5
library; see the <a href="http://www.hdfgroup.org/">HDF home page</a>.)
<a name="index-HDF5-349"></a>
</body></html>