comparison src/fftw-3.3.3/doc/html/Combining-MPI-and-Threads.html @ 95:89f5e221ed7b

Add FFTW3
author Chris Cannam <cannam@all-day-breakfast.com>
date Wed, 20 Mar 2013 15:35:50 +0000
<html lang="en">
<head>
<title>Combining MPI and Threads - FFTW 3.3.3</title>
<meta http-equiv="Content-Type" content="text/html">
<meta name="description" content="FFTW 3.3.3">
<meta name="generator" content="makeinfo 4.13">
<link title="Top" rel="start" href="index.html#Top">
<link rel="up" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" title="Distributed-memory FFTW with MPI">
<link rel="prev" href="FFTW-MPI-Performance-Tips.html#FFTW-MPI-Performance-Tips" title="FFTW MPI Performance Tips">
<link rel="next" href="FFTW-MPI-Reference.html#FFTW-MPI-Reference" title="FFTW MPI Reference">
<link href="http://www.gnu.org/software/texinfo/" rel="generator-home" title="Texinfo Homepage">
<!--
This manual is for FFTW
(version 3.3.3, 25 November 2012).

Copyright (C) 2003 Matteo Frigo.

Copyright (C) 2003 Massachusetts Institute of Technology.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission
notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of
this manual under the conditions for verbatim copying, provided
that the entire resulting derived work is distributed under the
terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this
manual into another language, under the above conditions for
modified versions, except that this permission notice may be
stated in a translation approved by the Free Software Foundation.
-->
<meta http-equiv="Content-Style-Type" content="text/css">
<style type="text/css"><!--
pre.display { font-family:inherit }
pre.format { font-family:inherit }
pre.smalldisplay { font-family:inherit; font-size:smaller }
pre.smallformat { font-family:inherit; font-size:smaller }
pre.smallexample { font-size:smaller }
pre.smalllisp { font-size:smaller }
span.sc { font-variant:small-caps }
span.roman { font-family:serif; font-weight:normal; }
span.sansserif { font-family:sans-serif; font-weight:normal; }
--></style>
</head>
<body>
<div class="node">
<a name="Combining-MPI-and-Threads"></a>
<p>
Next:&nbsp;<a rel="next" accesskey="n" href="FFTW-MPI-Reference.html#FFTW-MPI-Reference">FFTW MPI Reference</a>,
Previous:&nbsp;<a rel="previous" accesskey="p" href="FFTW-MPI-Performance-Tips.html#FFTW-MPI-Performance-Tips">FFTW MPI Performance Tips</a>,
Up:&nbsp;<a rel="up" accesskey="u" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI">Distributed-memory FFTW with MPI</a>
<hr>
</div>

<h3 class="section">6.11 Combining MPI and Threads</h3>

<p><a name="index-threads-427"></a>
In certain cases, it may be advantageous to combine MPI
(distributed-memory) and threads (shared-memory) parallelization.
FFTW supports this, with certain caveats. For example, if you have a
cluster of 4-processor shared-memory nodes, you may want to use
threads within the nodes and MPI between the nodes, instead of MPI for
all parallelization.

<p>In particular, it is possible to seamlessly combine the MPI FFTW
routines with the multi-threaded FFTW routines (see <a href="Multi_002dthreaded-FFTW.html#Multi_002dthreaded-FFTW">Multi-threaded FFTW</a>). However, some care must be taken in the initialization code,
which should look something like this:

<pre class="example">#include &lt;mpi.h&gt;
#include &lt;fftw3-mpi.h&gt;

int threads_ok;  /* true if it is safe to use the FFTW threads routines */

int main(int argc, char **argv)
{
    int provided;

    /* request FUNNELED support: only the main thread will make MPI calls */
    MPI_Init_thread(&amp;argc, &amp;argv, MPI_THREAD_FUNNELED, &amp;provided);
    threads_ok = provided &gt;= MPI_THREAD_FUNNELED;

    /* fftw_init_threads must be called before fftw_mpi_init */
    if (threads_ok) threads_ok = fftw_init_threads();
    fftw_mpi_init();

    ...
    if (threads_ok) fftw_plan_with_nthreads(...);
    ...

    MPI_Finalize();
}
</pre>
<p><a name="index-fftw_005fmpi_005finit-428"></a><a name="index-fftw_005finit_005fthreads-429"></a><a name="index-fftw_005fplan_005fwith_005fnthreads-430"></a>
First, note that instead of calling <code>MPI_Init</code>, you should call
<code>MPI_Init_thread</code>, which is the initialization routine defined
by the MPI-2 standard to indicate to MPI that your program will be
multithreaded. We pass <code>MPI_THREAD_FUNNELED</code>, which indicates
that we will only call MPI routines from the main thread. (FFTW will
launch additional threads internally, but the extra threads will not
call MPI code.) (You may also pass <code>MPI_THREAD_SERIALIZED</code> or
<code>MPI_THREAD_MULTIPLE</code>, which requests additional multithreading
support from the MPI implementation, but this is not required by
FFTW.) The <code>provided</code> parameter returns the level of thread
support that your MPI implementation actually grants; this
<em>must</em> be at least <code>MPI_THREAD_FUNNELED</code> if you want to call
the FFTW threads routines, so we define a global variable
<code>threads_ok</code> to record this. You should only call
<code>fftw_init_threads</code> or <code>fftw_plan_with_nthreads</code> if
<code>threads_ok</code> is true. For more information on thread safety in
MPI, see the
<a href="http://www.mpi-forum.org/docs/mpi-20-html/node162.htm">MPI and Threads</a> section of the MPI-2 standard.
<a name="index-thread-safety-431"></a>

<p>Second, we must call <code>fftw_init_threads</code> <em>before</em>
<code>fftw_mpi_init</code>. This is critical for technical reasons having
to do with how FFTW initializes its list of algorithms.

<p>Then, if you call <code>fftw_plan_with_nthreads(N)</code>, <em>every</em> MPI
process will launch (up to) <code>N</code> threads to parallelize its transforms.
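
<p>Concretely, the planning step might look something like the
following sketch, which could replace the first <code>...</code> in the
initialization code above; the 1024&times;1024 in-place complex DFT
and the 4 threads per process are arbitrary illustrative choices, not
part of the original example, and <code>threads_ok</code> is the flag
set during initialization:

<pre class="example">const ptrdiff_t N0 = 1024, N1 = 1024;   /* hypothetical problem size */
ptrdiff_t alloc_local, local_n0, local_0_start;
fftw_complex *data;
fftw_plan plan;

/* determine this process's portion of the block-distributed array */
alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                     &amp;local_n0, &amp;local_0_start);
data = fftw_alloc_complex(alloc_local);

/* use up to 4 threads within each MPI process, if available */
if (threads_ok) fftw_plan_with_nthreads(4);

plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                            FFTW_FORWARD, FFTW_MEASURE);
fftw_execute(plan);

fftw_destroy_plan(plan);
fftw_free(data);
</pre>
<p>As with the pure-MPI interface, the plan is created collectively:
every process in the communicator must call the planner with the same
arguments.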

<p>For example, in the hypothetical cluster of 4-processor nodes, you
might wish to launch only a single MPI process per node, and then call
<code>fftw_plan_with_nthreads(4)</code> on each process to use all
processors in the nodes.
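
<p>How you launch one process per node is specific to your MPI
implementation. As a rough sketch only, with Open MPI's
<code>mpirun</code> and a hypothetical program <code>myprog.c</code>,
building and running might look like the following (the link line
assumes the POSIX-threads variant of the FFTW libraries):

<pre class="example">mpicc myprog.c -o myprog -lfftw3_mpi -lfftw3_threads -lfftw3 -lm
mpirun --map-by ppr:1:node ./myprog
</pre>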

<p>Using threads in this way may or may not be faster than simply using
as many MPI processes as you have processors, however. On the one
hand, using threads within a node eliminates the need for explicit
message passing within the node. On the other hand, FFTW's transpose
routines are not multi-threaded, which means that the communications
that do take place will not benefit from parallelization within the
node. Moreover, many MPI implementations already have optimizations
to exploit shared memory when it is available, so adding the
multithreaded FFTW on top of this may be superfluous.
<a name="index-transpose-432"></a>
<!-- -->

</body></html>