Mercurial > hg > sv-dependency-builds
comparison: src/fftw-3.3.3/doc/html/2d-MPI-example.html @ 95:89f5e221ed7b (parent 94:d278df1123f9)
Add FFTW3
author: Chris Cannam <cannam@all-day-breakfast.com>
date: Wed, 20 Mar 2013 15:35:50 +0000
<html lang="en">
<head>
<title>2d MPI example - FFTW 3.3.3</title>
<meta http-equiv="Content-Type" content="text/html">
<meta name="description" content="FFTW 3.3.3">
<meta name="generator" content="makeinfo 4.13">
<link title="Top" rel="start" href="index.html#Top">
<link rel="up" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" title="Distributed-memory FFTW with MPI">
<link rel="prev" href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW" title="Linking and Initializing MPI FFTW">
<link rel="next" href="MPI-Data-Distribution.html#MPI-Data-Distribution" title="MPI Data Distribution">
<link href="http://www.gnu.org/software/texinfo/" rel="generator-home" title="Texinfo Homepage">
<!--
This manual is for FFTW
(version 3.3.3, 25 November 2012).

Copyright (C) 2003 Matteo Frigo.

Copyright (C) 2003 Massachusetts Institute of Technology.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission
notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of
this manual under the conditions for verbatim copying, provided
that the entire resulting derived work is distributed under the
terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this
manual into another language, under the above conditions for
modified versions, except that this permission notice may be
stated in a translation approved by the Free Software Foundation.
-->
<meta http-equiv="Content-Style-Type" content="text/css">
<style type="text/css"><!--
pre.display { font-family:inherit }
pre.format { font-family:inherit }
pre.smalldisplay { font-family:inherit; font-size:smaller }
pre.smallformat { font-family:inherit; font-size:smaller }
pre.smallexample { font-size:smaller }
pre.smalllisp { font-size:smaller }
span.sc { font-variant:small-caps }
span.roman { font-family:serif; font-weight:normal; }
span.sansserif { font-family:sans-serif; font-weight:normal; }
--></style>
</head>
<body>
<div class="node">
<a name="g_t2d-MPI-example"></a>
<p>
Next: <a rel="next" accesskey="n" href="MPI-Data-Distribution.html#MPI-Data-Distribution">MPI Data Distribution</a>,
Previous: <a rel="previous" accesskey="p" href="Linking-and-Initializing-MPI-FFTW.html#Linking-and-Initializing-MPI-FFTW">Linking and Initializing MPI FFTW</a>,
Up: <a rel="up" accesskey="u" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI">Distributed-memory FFTW with MPI</a>
<hr>
</div>

<h3 class="section">6.3 2d MPI example</h3>

<p>Before we document the FFTW MPI interface in detail, we begin with a
simple example outlining how one would perform a two-dimensional
<code>N0</code> by <code>N1</code> complex DFT.

<pre class="example">     #include &lt;fftw3-mpi.h&gt;

     int main(int argc, char **argv)
     {
         const ptrdiff_t N0 = ..., N1 = ...;
         fftw_plan plan;
         fftw_complex *data;
         ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

         MPI_Init(&amp;argc, &amp;argv);
         fftw_mpi_init();

         /* <span class="roman">get local data size and allocate</span> */
         alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                              &amp;local_n0, &amp;local_0_start);
         data = fftw_alloc_complex(alloc_local);

         /* <span class="roman">create plan for in-place forward DFT</span> */
         plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                     FFTW_FORWARD, FFTW_ESTIMATE);

         /* <span class="roman">initialize data to some function</span> my_function(x,y) */
         for (i = 0; i &lt; local_n0; ++i) for (j = 0; j &lt; N1; ++j)
             data[i*N1 + j] = my_function(local_0_start + i, j);

         /* <span class="roman">compute transforms, in-place, as many times as desired</span> */
         fftw_execute(plan);

         fftw_destroy_plan(plan);

         MPI_Finalize();
     }
</pre>
<p>As can be seen above, the MPI interface follows the same basic style
of allocate/plan/execute/destroy as the serial FFTW routines. All of
the MPI-specific routines are prefixed with ‘<samp><span class="samp">fftw_mpi_</span></samp>’ instead
of ‘<samp><span class="samp">fftw_</span></samp>’. There are a few important differences, however:

<p>First, we must call <code>fftw_mpi_init()</code> after calling
<code>MPI_Init</code> (required in all MPI programs) and before calling any
other ‘<samp><span class="samp">fftw_mpi_</span></samp>’ routine.
<a name="index-MPI_005fInit-357"></a><a name="index-fftw_005fmpi_005finit-358"></a>

<p>Second, when we create the plan with <code>fftw_mpi_plan_dft_2d</code>,
analogous to <code>fftw_plan_dft_2d</code>, we pass an additional argument:
the communicator, indicating which processes will participate in the
transform (here <code>MPI_COMM_WORLD</code>, indicating all processes).
Whenever you create, execute, or destroy a plan for an MPI transform,
you must call the corresponding FFTW routine on <em>all</em> processes
in the communicator for that transform. (That is, these are
<em>collective</em> calls.) Note that the plan for the MPI transform
uses the standard <code>fftw_execute</code> and <code>fftw_destroy_plan</code>
routines (on the other hand, there are MPI-specific new-array execute
functions documented below).
<a name="index-collective-function-359"></a><a name="index-fftw_005fmpi_005fplan_005fdft_005f2d-360"></a><a name="index-MPI_005fCOMM_005fWORLD-361"></a>

<p>Third, all of the FFTW MPI routines take <code>ptrdiff_t</code> arguments
instead of <code>int</code> as for the serial FFTW. <code>ptrdiff_t</code> is a
standard C integer type which is (at least) 32 bits wide on a 32-bit
machine and 64 bits wide on a 64-bit machine. This is to make it easy
to specify very large parallel transforms on a 64-bit machine. (You
can specify 64-bit transform sizes in the serial FFTW, too, but only
by using the ‘<samp><span class="samp">guru64</span></samp>’ planner interface. See <a href="64_002dbit-Guru-Interface.html#g_t64_002dbit-Guru-Interface">64-bit Guru Interface</a>.)
<a name="index-ptrdiff_005ft-362"></a><a name="index-g_t64_002dbit-architecture-363"></a>

<p>Fourth, and most importantly, you don't allocate the entire
two-dimensional array on each process. Instead, you call
<code>fftw_mpi_local_size_2d</code> to find out what <em>portion</em> of the
array resides on each processor, and how much space to allocate.
Here, the portion of the array on each process is a <code>local_n0</code> by
<code>N1</code> slice of the total array, starting at index
<code>local_0_start</code>. The total number of <code>fftw_complex</code> numbers
to allocate is given by the <code>alloc_local</code> return value, which
<em>may</em> be greater than <code>local_n0 * N1</code> (in case some
intermediate calculations require additional storage). The data
distribution in FFTW's MPI interface is described in more detail by
the next section.
<a name="index-fftw_005fmpi_005flocal_005fsize_005f2d-364"></a><a name="index-data-distribution-365"></a>

<p>Given the portion of the array that resides on the local process, it
is straightforward to initialize the data (here to a function
<code>my_function</code>) and otherwise manipulate it. Of course, at the end
of the program you may want to output the data somehow, but
synchronizing this output is up to you and is beyond the scope of this
manual. (One good way to output a large multi-dimensional distributed
array in MPI to a portable binary file is to use the free HDF5
library; see the <a href="http://www.hdfgroup.org/">HDF home page</a>.)
<a name="index-HDF5-366"></a><a name="index-MPI-I_002fO-367"></a>

</body></html>