<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<!-- This manual is for FFTW
(version 3.3.5, 30 July 2016).

Copyright (C) 2003 Matteo Frigo.

Copyright (C) 2003 Massachusetts Institute of Technology.

Permission is granted to make and distribute verbatim copies of this
manual provided the copyright notice and this permission notice are
preserved on all copies.

Permission is granted to copy and distribute modified versions of this
manual under the conditions for verbatim copying, provided that the
entire resulting derived work is distributed under the terms of a
permission notice identical to this one.

Permission is granted to copy and distribute translations of this manual
into another language, under the above conditions for modified versions,
except that this permission notice may be stated in a translation
approved by the Free Software Foundation. -->
<!-- Created by GNU Texinfo 5.2, http://www.gnu.org/software/texinfo/ -->
<head>
<title>FFTW 3.3.5: Combining MPI and Threads</title>

<meta name="description" content="FFTW 3.3.5: Combining MPI and Threads">
<meta name="keywords" content="FFTW 3.3.5: Combining MPI and Threads">
<meta name="resource-type" content="document">
<meta name="distribution" content="global">
<meta name="Generator" content="makeinfo">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<link href="index.html#Top" rel="start" title="Top">
<link href="Concept-Index.html#Concept-Index" rel="index" title="Concept Index">
<link href="index.html#SEC_Contents" rel="contents" title="Table of Contents">
<link href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" rel="up" title="Distributed-memory FFTW with MPI">
<link href="FFTW-MPI-Reference.html#FFTW-MPI-Reference" rel="next" title="FFTW MPI Reference">
<link href="FFTW-MPI-Performance-Tips.html#FFTW-MPI-Performance-Tips" rel="prev" title="FFTW MPI Performance Tips">
<style type="text/css">
<!--
a.summary-letter {text-decoration: none}
blockquote.smallquotation {font-size: smaller}
div.display {margin-left: 3.2em}
div.example {margin-left: 3.2em}
div.indentedblock {margin-left: 3.2em}
div.lisp {margin-left: 3.2em}
div.smalldisplay {margin-left: 3.2em}
div.smallexample {margin-left: 3.2em}
div.smallindentedblock {margin-left: 3.2em; font-size: smaller}
div.smalllisp {margin-left: 3.2em}
kbd {font-style:oblique}
pre.display {font-family: inherit}
pre.format {font-family: inherit}
pre.menu-comment {font-family: serif}
pre.menu-preformatted {font-family: serif}
pre.smalldisplay {font-family: inherit; font-size: smaller}
pre.smallexample {font-size: smaller}
pre.smallformat {font-family: inherit; font-size: smaller}
pre.smalllisp {font-size: smaller}
span.nocodebreak {white-space:nowrap}
span.nolinebreak {white-space:nowrap}
span.roman {font-family:serif; font-weight:normal}
span.sansserif {font-family:sans-serif; font-weight:normal}
ul.no-bullet {list-style: none}
-->
</style>


</head>

<body lang="en" bgcolor="#FFFFFF" text="#000000" link="#0000FF" vlink="#800080" alink="#FF0000">
<a name="Combining-MPI-and-Threads"></a>
<div class="header">
<p>
Next: <a href="FFTW-MPI-Reference.html#FFTW-MPI-Reference" accesskey="n" rel="next">FFTW MPI Reference</a>, Previous: <a href="FFTW-MPI-Performance-Tips.html#FFTW-MPI-Performance-Tips" accesskey="p" rel="prev">FFTW MPI Performance Tips</a>, Up: <a href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" accesskey="u" rel="up">Distributed-memory FFTW with MPI</a>   [<a href="index.html#SEC_Contents" title="Table of contents" rel="contents">Contents</a>][<a href="Concept-Index.html#Concept-Index" title="Index" rel="index">Index</a>]</p>
</div>
<hr>
<a name="Combining-MPI-and-Threads-1"></a>
<h3 class="section">6.11 Combining MPI and Threads</h3>
<a name="index-threads-2"></a>

<p>In certain cases, it may be advantageous to combine MPI
(distributed-memory) and threads (shared-memory) parallelization.
FFTW supports this, with certain caveats.  For example, if you have a
cluster of 4-processor shared-memory nodes, you may want to use
threads within the nodes and MPI between the nodes, instead of MPI for
all parallelization.
</p>
<p>In particular, it is possible to seamlessly combine the MPI FFTW
routines with the multi-threaded FFTW routines (see <a href="Multi_002dthreaded-FFTW.html#Multi_002dthreaded-FFTW">Multi-threaded FFTW</a>). However, some care must be taken in the initialization code,
which should look something like this:
</p>
<div class="example">
<pre class="example">#include &lt;mpi.h&gt;
#include &lt;fftw3-mpi.h&gt;

int threads_ok;

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    threads_ok = provided >= MPI_THREAD_FUNNELED;

    /* threads must be initialized before fftw_mpi_init */
    if (threads_ok) threads_ok = fftw_init_threads();
    fftw_mpi_init();

    ...
    if (threads_ok) fftw_plan_with_nthreads(...);
    ...

    MPI_Finalize();
}
</pre></div>
<a name="index-fftw_005fmpi_005finit-3"></a>
<a name="index-fftw_005finit_005fthreads-2"></a>
<a name="index-fftw_005fplan_005fwith_005fnthreads-1"></a>

<p>First, note that instead of calling <code>MPI_Init</code>, you should call
<code>MPI_Init_thread</code>, the initialization routine defined by the
MPI-2 standard for telling MPI that your program will be
multithreaded.  We pass <code>MPI_THREAD_FUNNELED</code>, which indicates
that we will only call MPI routines from the main thread.  (FFTW will
launch additional threads internally, but the extra threads will not
call MPI code.)  (You may also pass <code>MPI_THREAD_SERIALIZED</code> or
<code>MPI_THREAD_MULTIPLE</code>, which request additional multithreading
support from the MPI implementation, but this is not required by
FFTW.)  The <code>provided</code> parameter returns the level of thread
support actually provided by your MPI implementation; this
<em>must</em> be at least <code>MPI_THREAD_FUNNELED</code> if you want to call
the FFTW threads routines, so we define a global variable
<code>threads_ok</code> to record this.  You should only call
<code>fftw_init_threads</code> or <code>fftw_plan_with_nthreads</code> if
<code>threads_ok</code> is true.  For more information on thread safety in
MPI, see the
<a href="http://www.mpi-forum.org/docs/mpi-20-html/node162.htm">MPI and
Threads</a> section of the MPI-2 standard.
<a name="index-thread-safety-2"></a>
</p>

<p>Second, we must call <code>fftw_init_threads</code> <em>before</em>
<code>fftw_mpi_init</code>.  This is critical for technical reasons having
to do with how FFTW initializes its list of algorithms.
</p>
<p>Then, if you call <code>fftw_plan_with_nthreads(N)</code>, <em>every</em> MPI
process will launch (up to) <code>N</code> threads to parallelize its transforms.
</p>
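<p>As an illustrative sketch (the 256x256 problem size, the variable
names, and the <code>FFTW_ESTIMATE</code> flag here are arbitrary choices,
and error checking is omitted), each process might plan and execute a
threaded two-dimensional MPI transform like this; the
<code>fftw_plan_with_nthreads</code> call applies to all plans created
afterwards by that process:
</p>
<div class="example">
<pre class="example">/* hypothetical sketch: plan a 2d complex DFT distributed over
   MPI_COMM_WORLD, using up to 4 threads within each process */
const ptrdiff_t N0 = 256, N1 = 256;   /* example problem size */
ptrdiff_t alloc_local, local_n0, local_0_start;
fftw_complex *data;
fftw_plan plan;

if (threads_ok) fftw_plan_with_nthreads(4);

alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                     &local_n0, &local_0_start);
data = fftw_alloc_complex(alloc_local);

plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                            FFTW_FORWARD, FFTW_ESTIMATE);
/* ... initialize the local_n0 x N1 slab owned by this process ... */
fftw_execute(plan);

fftw_destroy_plan(plan);
fftw_free(data);
</pre></div>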
| cannam@127 | 145 <p>For example, in the hypothetical cluster of 4-processor nodes, you | 
| cannam@127 | 146 might wish to launch only a single MPI process per node, and then call | 
| cannam@127 | 147 <code>fftw_plan_with_nthreads(4)</code> on each process to use all | 
| cannam@127 | 148 processors in the nodes. | 
| cannam@127 | 149 </p> | 
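<p>The thread count need not be hard-coded.  As a hedged sketch
(assuming an OpenMP runtime is available to your application so that
<code>omp_get_max_threads</code> can be called), you could instead ask the
runtime how many threads each process may use, for example as set by
<code>OMP_NUM_THREADS</code>:
</p>
<div class="example">
<pre class="example">#include &lt;omp.h&gt;   /* assumption: an OpenMP environment is present */

/* use however many threads the runtime grants this process,
   rather than hard-coding 4 */
if (threads_ok) fftw_plan_with_nthreads(omp_get_max_threads());
</pre></div>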
<p>This may or may not be faster than simply using as many MPI processes
as you have processors, however.  On the one hand, using threads
within a node eliminates the need for explicit message passing within
the node.  On the other hand, FFTW’s transpose routines are not
multi-threaded, and this means that the communications that do take
place will not benefit from parallelization within the node.
Moreover, many MPI implementations already have optimizations to
exploit shared memory when it is available, so adding the
multithreaded FFTW on top of this may be superfluous.
<a name="index-transpose-4"></a>
</p>
<hr>
<div class="header">
<p>
Next: <a href="FFTW-MPI-Reference.html#FFTW-MPI-Reference" accesskey="n" rel="next">FFTW MPI Reference</a>, Previous: <a href="FFTW-MPI-Performance-Tips.html#FFTW-MPI-Performance-Tips" accesskey="p" rel="prev">FFTW MPI Performance Tips</a>, Up: <a href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" accesskey="u" rel="up">Distributed-memory FFTW with MPI</a>   [<a href="index.html#SEC_Contents" title="Table of contents" rel="contents">Contents</a>][<a href="Concept-Index.html#Concept-Index" title="Index" rel="index">Index</a>]</p>
</div>



</body>
</html>