<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<!-- This manual is for FFTW
(version 3.3.5, 30 July 2016).

Copyright (C) 2003 Matteo Frigo.

Copyright (C) 2003 Massachusetts Institute of Technology.

Permission is granted to make and distribute verbatim copies of this
manual provided the copyright notice and this permission notice are
preserved on all copies.

Permission is granted to copy and distribute modified versions of this
manual under the conditions for verbatim copying, provided that the
entire resulting derived work is distributed under the terms of a
permission notice identical to this one.

Permission is granted to copy and distribute translations of this manual
into another language, under the above conditions for modified versions,
except that this permission notice may be stated in a translation
approved by the Free Software Foundation. -->
<!-- Created by GNU Texinfo 5.2, http://www.gnu.org/software/texinfo/ -->
<head>
<title>FFTW 3.3.5: Combining MPI and Threads</title>

<meta name="description" content="FFTW 3.3.5: Combining MPI and Threads">
<meta name="keywords" content="FFTW 3.3.5: Combining MPI and Threads">
<meta name="resource-type" content="document">
<meta name="distribution" content="global">
<meta name="Generator" content="makeinfo">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<link href="index.html#Top" rel="start" title="Top">
<link href="Concept-Index.html#Concept-Index" rel="index" title="Concept Index">
<link href="index.html#SEC_Contents" rel="contents" title="Table of Contents">
<link href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" rel="up" title="Distributed-memory FFTW with MPI">
<link href="FFTW-MPI-Reference.html#FFTW-MPI-Reference" rel="next" title="FFTW MPI Reference">
<link href="FFTW-MPI-Performance-Tips.html#FFTW-MPI-Performance-Tips" rel="prev" title="FFTW MPI Performance Tips">
<style type="text/css">
<!--
a.summary-letter {text-decoration: none}
blockquote.smallquotation {font-size: smaller}
div.display {margin-left: 3.2em}
div.example {margin-left: 3.2em}
div.indentedblock {margin-left: 3.2em}
div.lisp {margin-left: 3.2em}
div.smalldisplay {margin-left: 3.2em}
div.smallexample {margin-left: 3.2em}
div.smallindentedblock {margin-left: 3.2em; font-size: smaller}
div.smalllisp {margin-left: 3.2em}
kbd {font-style:oblique}
pre.display {font-family: inherit}
pre.format {font-family: inherit}
pre.menu-comment {font-family: serif}
pre.menu-preformatted {font-family: serif}
pre.smalldisplay {font-family: inherit; font-size: smaller}
pre.smallexample {font-size: smaller}
pre.smallformat {font-family: inherit; font-size: smaller}
pre.smalllisp {font-size: smaller}
span.nocodebreak {white-space:nowrap}
span.nolinebreak {white-space:nowrap}
span.roman {font-family:serif; font-weight:normal}
span.sansserif {font-family:sans-serif; font-weight:normal}
ul.no-bullet {list-style: none}
-->
</style>


</head>

<body lang="en" bgcolor="#FFFFFF" text="#000000" link="#0000FF" vlink="#800080" alink="#FF0000">
<a name="Combining-MPI-and-Threads"></a>
<div class="header">
<p>
Next: <a href="FFTW-MPI-Reference.html#FFTW-MPI-Reference" accesskey="n" rel="next">FFTW MPI Reference</a>, Previous: <a href="FFTW-MPI-Performance-Tips.html#FFTW-MPI-Performance-Tips" accesskey="p" rel="prev">FFTW MPI Performance Tips</a>, Up: <a href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" accesskey="u" rel="up">Distributed-memory FFTW with MPI</a> &nbsp; [<a href="index.html#SEC_Contents" title="Table of contents" rel="contents">Contents</a>][<a href="Concept-Index.html#Concept-Index" title="Index" rel="index">Index</a>]</p>
</div>
<hr>
<a name="Combining-MPI-and-Threads-1"></a>
<h3 class="section">6.11 Combining MPI and Threads</h3>
<a name="index-threads-2"></a>

<p>In certain cases, it may be advantageous to combine MPI
(distributed-memory) and threads (shared-memory) parallelization.
FFTW supports this, with certain caveats. For example, if you have a
cluster of 4-processor shared-memory nodes, you may want to use
threads within the nodes and MPI between the nodes, instead of MPI for
all parallelization.
</p>
<p>In particular, it is possible to seamlessly combine the MPI FFTW
routines with the multi-threaded FFTW routines (see <a href="Multi_002dthreaded-FFTW.html#Multi_002dthreaded-FFTW">Multi-threaded FFTW</a>). However, some care must be taken in the initialization code,
which should look something like this:
</p>
<div class="example">
<pre class="example">#include &lt;mpi.h&gt;
#include &lt;fftw3-mpi.h&gt;

int threads_ok; /* global: true if MPI provides enough thread support */

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&amp;argc, &amp;argv, MPI_THREAD_FUNNELED, &amp;provided);
    threads_ok = provided &gt;= MPI_THREAD_FUNNELED;

    if (threads_ok) threads_ok = fftw_init_threads();
    fftw_mpi_init();

    ...
    if (threads_ok) fftw_plan_with_nthreads(...);
    ...

    MPI_Finalize();
}
</pre></div>
<a name="index-fftw_005fmpi_005finit-3"></a>
<a name="index-fftw_005finit_005fthreads-2"></a>
<a name="index-fftw_005fplan_005fwith_005fnthreads-1"></a>

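<p>A complete program along these lines is typically compiled with an
MPI compiler wrapper and linked against the threads and MPI FFTW
libraries in addition to the core library, e.g. (<code>myprog.c</code> is a
placeholder, and the exact wrapper and flags vary by installation):
</p>
<div class="example">
<pre class="example">mpicc myprog.c -lfftw3_mpi -lfftw3_threads -lfftw3 -lm
</pre></div>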
<p>First, note that instead of calling <code>MPI_Init</code>, you should call
<code>MPI_Init_thread</code>, which is the initialization routine defined
by the MPI-2 standard to indicate to MPI that your program will be
multithreaded. We pass <code>MPI_THREAD_FUNNELED</code>, which indicates
that we will only call MPI routines from the main thread. (FFTW will
launch additional threads internally, but the extra threads will not
call MPI code.) (You may also pass <code>MPI_THREAD_SERIALIZED</code> or
<code>MPI_THREAD_MULTIPLE</code>, which requests additional multithreading
support from the MPI implementation, but this is not required by
FFTW.) The <code>provided</code> parameter returns the level of thread
support that your MPI implementation actually provides; this
<em>must</em> be at least <code>MPI_THREAD_FUNNELED</code> if you want to call
the FFTW threads routines, so we define a global variable
<code>threads_ok</code> to record this. You should only call
<code>fftw_init_threads</code> or <code>fftw_plan_with_nthreads</code> if
<code>threads_ok</code> is true. For more information on thread safety in
MPI, see the
<a href="http://www.mpi-forum.org/docs/mpi-20-html/node162.htm">MPI and
Threads</a> section of the MPI-2 standard.
<a name="index-thread-safety-2"></a>
</p>
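<p>If your application initializes MPI elsewhere, the provided level
can also be recovered after the fact with the standard
<code>MPI_Query_thread</code> routine; a minimal sketch, setting the same
global <code>threads_ok</code> as above:
</p>
<div class="example">
<pre class="example">/* Sketch: query the thread-support level of an already-initialized MPI */
int provided;
MPI_Query_thread(&amp;provided);
threads_ok = provided &gt;= MPI_THREAD_FUNNELED;
</pre></div>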

<p>Second, we must call <code>fftw_init_threads</code> <em>before</em>
<code>fftw_mpi_init</code>. This is critical for technical reasons having
to do with how FFTW initializes its list of algorithms.
</p>
<p>Then, if you call <code>fftw_plan_with_nthreads(N)</code>, <em>every</em> MPI
process will launch (up to) <code>N</code> threads to parallelize its transforms.
</p>
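<p>For instance, a minimal planning sequence for a two-dimensional
complex DFT might look like the following sketch (the transform size
and the choice of 4 threads are illustrative, and error checking is
omitted):
</p>
<div class="example">
<pre class="example">const ptrdiff_t n0 = 256, n1 = 256;   /* illustrative transform size */
ptrdiff_t alloc_local, local_n0, local_0_start;
fftw_complex *data;
fftw_plan plan;

/* find this process's share of the data, and allocate it */
alloc_local = fftw_mpi_local_size_2d(n0, n1, MPI_COMM_WORLD,
                                     &amp;local_n0, &amp;local_0_start);
data = fftw_alloc_complex(alloc_local);

/* set the thread count before creating the plan */
if (threads_ok) fftw_plan_with_nthreads(4);
plan = fftw_mpi_plan_dft_2d(n0, n1, data, data, MPI_COMM_WORLD,
                            FFTW_FORWARD, FFTW_MEASURE);
fftw_execute(plan);

fftw_destroy_plan(plan);
fftw_free(data);
</pre></div>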
<p>For example, in the hypothetical cluster of 4-processor nodes, you
might wish to launch only a single MPI process per node, and then call
<code>fftw_plan_with_nthreads(4)</code> on each process to use all
processors in the nodes.
</p>
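<p>One way to choose the thread count at run time is sketched below.
It assumes an MPI-3 implementation (<code>MPI_Comm_split_type</code> is not
part of the MPI-2 interface referenced above), and the 4 processors
per node are the same hypothetical value as in the example:
</p>
<div class="example">
<pre class="example">/* Sketch, assuming MPI-3: count the ranks sharing this node, then
   choose threads per process so that the node's processors are filled. */
MPI_Comm node_comm;
int ranks_per_node, nthreads;
const int cores_per_node = 4;  /* hypothetical node size, as above */

MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                    MPI_INFO_NULL, &amp;node_comm);
MPI_Comm_size(node_comm, &amp;ranks_per_node);
nthreads = cores_per_node / ranks_per_node;
if (threads_ok) fftw_plan_with_nthreads(nthreads &gt; 0 ? nthreads : 1);
MPI_Comm_free(&amp;node_comm);
</pre></div>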
<p>This may or may not be faster than simply using as many MPI processes
as you have processors, however. On the one hand, using threads
within a node eliminates the need for explicit message passing within
the node. On the other hand, FFTW&rsquo;s transpose routines are not
multi-threaded, and this means that the communications that do take
place will not benefit from parallelization within the node.
Moreover, many MPI implementations already have optimizations to
exploit shared memory when it is available, so adding the
multithreaded FFTW on top of this may be superfluous.
<a name="index-transpose-4"></a>
</p>
<hr>
<div class="header">
<p>
Next: <a href="FFTW-MPI-Reference.html#FFTW-MPI-Reference" accesskey="n" rel="next">FFTW MPI Reference</a>, Previous: <a href="FFTW-MPI-Performance-Tips.html#FFTW-MPI-Performance-Tips" accesskey="p" rel="prev">FFTW MPI Performance Tips</a>, Up: <a href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" accesskey="u" rel="up">Distributed-memory FFTW with MPI</a> &nbsp; [<a href="index.html#SEC_Contents" title="Table of contents" rel="contents">Contents</a>][<a href="Concept-Index.html#Concept-Index" title="Index" rel="index">Index</a>]</p>
</div>



</body>
</html>