<html lang="en">
<head>
<title>Combining MPI and Threads - FFTW 3.2alpha3</title>
<meta http-equiv="Content-Type" content="text/html">
<meta name="description" content="FFTW 3.2alpha3">
<meta name="generator" content="makeinfo 4.8">
<link title="Top" rel="start" href="index.html#Top">
<link rel="up" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" title="Distributed-memory FFTW with MPI">
<link rel="prev" href="FFTW-MPI-Performance-Tips.html#FFTW-MPI-Performance-Tips" title="FFTW MPI Performance Tips">
<link rel="next" href="FFTW-MPI-Reference.html#FFTW-MPI-Reference" title="FFTW MPI Reference">
<link href="http://www.gnu.org/software/texinfo/" rel="generator-home" title="Texinfo Homepage">
<!--
This manual is for FFTW
(version 3.2alpha3, 14 August 2007).

Copyright (C) 2003 Matteo Frigo.

Copyright (C) 2003 Massachusetts Institute of Technology.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission
notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of
this manual under the conditions for verbatim copying, provided
that the entire resulting derived work is distributed under the
terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this
manual into another language, under the above conditions for
modified versions, except that this permission notice may be
stated in a translation approved by the Free Software Foundation.
-->
<meta http-equiv="Content-Style-Type" content="text/css">
<style type="text/css"><!--
pre.display { font-family:inherit }
pre.format { font-family:inherit }
pre.smalldisplay { font-family:inherit; font-size:smaller }
pre.smallformat { font-family:inherit; font-size:smaller }
pre.smallexample { font-size:smaller }
pre.smalllisp { font-size:smaller }
span.sc { font-variant:small-caps }
span.roman { font-family:serif; font-weight:normal; }
span.sansserif { font-family:sans-serif; font-weight:normal; }
--></style>
</head>
<body>
<div class="node">
<p>
<a name="Combining-MPI-and-Threads"></a>
Next:&nbsp;<a rel="next" accesskey="n" href="FFTW-MPI-Reference.html#FFTW-MPI-Reference">FFTW MPI Reference</a>,
Previous:&nbsp;<a rel="previous" accesskey="p" href="FFTW-MPI-Performance-Tips.html#FFTW-MPI-Performance-Tips">FFTW MPI Performance Tips</a>,
Up:&nbsp;<a rel="up" accesskey="u" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI">Distributed-memory FFTW with MPI</a>
<hr>
</div>

<h3 class="section">6.11 Combining MPI and Threads</h3>

<p><a name="index-threads-398"></a>
In certain cases, it may be advantageous to combine MPI
(distributed-memory) and threads (shared-memory) parallelization.
FFTW supports this, with certain caveats.  For example, if you have a
cluster of 4-processor shared-memory nodes, you may want to use
threads within the nodes and MPI between the nodes, instead of MPI for
all parallelization.  FFTW's MPI code can also transparently use
FFTW's Cell processor support (e.g. for clusters of Cell processors).
<a name="index-Cell-processor-399"></a>
In particular, it is possible to seamlessly combine the MPI FFTW
routines with the multi-threaded FFTW routines (see <a href="Multi_002dthreaded-FFTW.html#Multi_002dthreaded-FFTW">Multi-threaded FFTW</a>).  In this case, you will begin your program by calling both
<code>fftw_mpi_init()</code> and <code>fftw_init_threads()</code>.  If you then
call <code>fftw_plan_with_nthreads(N)</code>, <em>every</em> MPI process
will launch <code>N</code> threads to parallelize its transforms.
<a name="index-fftw_005fmpi_005finit-400"></a><a name="index-fftw_005finit_005fthreads-401"></a><a name="index-fftw_005fplan_005fwith_005fnthreads-402"></a>
For example, in the hypothetical cluster of 4-processor nodes, you
might wish to launch only a single MPI process per node, and then call
<code>fftw_plan_with_nthreads(4)</code> on each process to use all
processors in each node.

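<p>As a concrete illustration, a hybrid program might be structured as
follows.  (This is only a rough sketch, not code from this manual: the
256&times;256 grid, the thread count of 4, and the use of
<code>FFTW_ESTIMATE</code> are arbitrary illustrative choices, and it
assumes FFTW was configured with both MPI and threads support.)

<pre class="example">     #include &lt;fftw3-mpi.h&gt;
     
     int main(int argc, char **argv)
     {
         const ptrdiff_t N0 = 256, N1 = 256;   /* global transform size */
         ptrdiff_t alloc_local, local_n0, local_0_start;
         fftw_complex *data;
         fftw_plan plan;
     
         MPI_Init(&amp;argc, &amp;argv);
         fftw_init_threads();           /* enable the threads code */
         fftw_mpi_init();               /* enable the MPI code */
         fftw_plan_with_nthreads(4);    /* 4 threads per MPI process */
     
         /* determine this process's portion of the distributed array */
         alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                              &amp;local_n0, &amp;local_0_start);
         data = fftw_malloc(sizeof(fftw_complex) * alloc_local);
     
         plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                     FFTW_FORWARD, FFTW_ESTIMATE);
     
         /* ... initialize the local_n0 rows of data, then transform ... */
         fftw_execute(plan);
     
         fftw_destroy_plan(plan);
         fftw_free(data);
         MPI_Finalize();
         return 0;
     }
</pre>

<p>When linking such a program, you would typically need both the
threads and MPI libraries, e.g. something like <code>-lfftw3_mpi
-lfftw3_threads -lfftw3 -lm</code> (the exact library names and order
may depend on your installation).
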
<p>However, this may or may not be faster than simply using as many MPI
processes as you have processors.  On the one hand, using threads
within a node eliminates the need for explicit message passing within
the node.  On the other hand, FFTW's transpose routines are not
multi-threaded, which means that the communications that do take place
will not benefit from parallelization within the node.  Moreover, many
MPI implementations already have optimizations to exploit shared memory
when it is available.
<a name="index-transpose-403"></a>
(Note that this is quite independent of whether MPI itself is
thread-safe or multi-threaded: regardless of how many threads you
specify with <code>fftw_plan_with_nthreads</code>, FFTW will perform all of
its MPI communication only from the parent process.)
<a name="index-thread-safety-404"></a>
<!-- -->

</body></html>