<html lang="en">
<head>
<title>Combining MPI and Threads - FFTW 3.2alpha3</title>
<meta http-equiv="Content-Type" content="text/html">
<meta name="description" content="FFTW 3.2alpha3">
<meta name="generator" content="makeinfo 4.8">
<link title="Top" rel="start" href="index.html#Top">
<link rel="up" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" title="Distributed-memory FFTW with MPI">
<link rel="prev" href="FFTW-MPI-Performance-Tips.html#FFTW-MPI-Performance-Tips" title="FFTW MPI Performance Tips">
<link rel="next" href="FFTW-MPI-Reference.html#FFTW-MPI-Reference" title="FFTW MPI Reference">
<link href="http://www.gnu.org/software/texinfo/" rel="generator-home" title="Texinfo Homepage">
<!--
This manual is for FFTW
(version 3.2alpha3, 14 August 2007).

Copyright (C) 2003 Matteo Frigo.

Copyright (C) 2003 Massachusetts Institute of Technology.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission
notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of
this manual under the conditions for verbatim copying, provided
that the entire resulting derived work is distributed under the
terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this
manual into another language, under the above conditions for
modified versions, except that this permission notice may be
stated in a translation approved by the Free Software Foundation.
-->
<meta http-equiv="Content-Style-Type" content="text/css">
<style type="text/css"><!--
pre.display { font-family:inherit }
pre.format { font-family:inherit }
pre.smalldisplay { font-family:inherit; font-size:smaller }
pre.smallformat { font-family:inherit; font-size:smaller }
pre.smallexample { font-size:smaller }
pre.smalllisp { font-size:smaller }
span.sc { font-variant:small-caps }
span.roman { font-family:serif; font-weight:normal; }
span.sansserif { font-family:sans-serif; font-weight:normal; }
--></style>
</head>
<body>
<div class="node">
<p>
<a name="Combining-MPI-and-Threads"></a>
Next: <a rel="next" accesskey="n" href="FFTW-MPI-Reference.html#FFTW-MPI-Reference">FFTW MPI Reference</a>,
Previous: <a rel="previous" accesskey="p" href="FFTW-MPI-Performance-Tips.html#FFTW-MPI-Performance-Tips">FFTW MPI Performance Tips</a>,
Up: <a rel="up" accesskey="u" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI">Distributed-memory FFTW with MPI</a>
<hr>
</div>

<h3 class="section">6.11 Combining MPI and Threads</h3>

<p><a name="index-threads-398"></a>
In certain cases, it may be advantageous to combine MPI
(distributed-memory) and threads (shared-memory) parallelization.
FFTW supports this, with certain caveats.  For example, if you have a
cluster of 4-processor shared-memory nodes, you may want to use
threads within the nodes and MPI between the nodes, instead of MPI for
all parallelization.  FFTW's MPI code can also transparently use
FFTW's Cell processor support (e.g. for clusters of Cell processors).
<a name="index-Cell-processor-399"></a>
In particular, it is possible to seamlessly combine the MPI FFTW
routines with the multi-threaded FFTW routines (see <a href="Multi_002dthreaded-FFTW.html#Multi_002dthreaded-FFTW">Multi-threaded FFTW</a>).  In this case, you will begin your program by calling both
<code>fftw_mpi_init()</code> and <code>fftw_init_threads()</code>.  Then, if you
call <code>fftw_plan_with_nthreads(N)</code>, <em>every</em> MPI process
will launch <code>N</code> threads to parallelize its transforms.
<a name="index-fftw_005fmpi_005finit-400"></a><a name="index-fftw_005finit_005fthreads-401"></a><a name="index-fftw_005fplan_005fwith_005fnthreads-402"></a>
For example, in the hypothetical cluster of 4-processor nodes, you
might wish to launch only a single MPI process per node, and then call
<code>fftw_plan_with_nthreads(4)</code> on each process to use all four
processors in its node.

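<p>As a concrete illustration, the skeleton of such a hybrid program might
look like the sketch below.  This is only a sketch: the header name and the
elided plan creation follow the MPI FFTW interface described in the
preceding sections, and the only calls this section actually requires are
the three initialization routines named above.

<pre class="example">     #include &lt;mpi.h&gt;
     #include &lt;fftw3-mpi.h&gt;

     int main(int argc, char **argv)
     {
          MPI_Init(&amp;argc, &amp;argv);
          fftw_init_threads();     /* enable the multi-threaded FFTW routines */
          fftw_mpi_init();         /* enable the MPI FFTW routines */

          /* every MPI process will use 4 threads for its transforms,
             matching the hypothetical 4-processor nodes above */
          fftw_plan_with_nthreads(4);

          /* ... create, execute, and destroy MPI FFTW plans as usual ... */

          MPI_Finalize();
          return 0;
     }
</pre>

<p>With this skeleton you would then launch one MPI process per node
(e.g. with your MPI implementation's <code>mpirun</code> or equivalent), so
that the 4 threads of each process have the node's processors to
themselves.
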
<p>However, this may or may not be faster than simply using as many MPI
processes as you have processors.  On the one hand, using threads within a
node eliminates the need for explicit message passing within the node.
On the other hand, FFTW's transpose routines are not multi-threaded,
so the communication that does take place will not
benefit from parallelization within the node.  Moreover, many MPI
implementations already have optimizations to exploit shared memory
when it is available.
<a name="index-transpose-403"></a>
(Note that this is quite independent of whether MPI itself is
thread-safe or multi-threaded: regardless of how many threads you
specify with <code>fftw_plan_with_nthreads</code>, FFTW will perform all of
its MPI communication only from the parent thread.)
<a name="index-thread-safety-404"></a>
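<p>If you do adopt the one-process-per-node approach, you need not
hard-code the thread count; any way of counting the processors on the
local node will do.  The sketch below uses
<code>sysconf(_SC_NPROCESSORS_ONLN)</code>, a widespread Unix extension
that is not part of FFTW (nor of strict POSIX), so treat it as one
possible approach rather than a portable recipe.

<pre class="example">     #include &lt;unistd.h&gt;
     #include &lt;fftw3-mpi.h&gt;

     /* have this MPI process use every processor on its node */
     static void plan_with_all_local_processors(void)
     {
          long n = sysconf(_SC_NPROCESSORS_ONLN);
          if (n &lt; 1) n = 1;        /* fall back to a single thread */
          fftw_plan_with_nthreads((int) n);
     }
</pre>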
<!-- -->

</body></html>
|