annotate fft/fftw/fftw-3.3.4/doc/html/Combining-MPI-and-Threads.html @ 40:223f770b5341 kissfft-double tip

Try a double-precision kissfft
author Chris Cannam
date Wed, 07 Sep 2016 10:40:32 +0100
parents 26056e866c29
children
<html lang="en">
<head>
<title>Combining MPI and Threads - FFTW 3.3.4</title>
<meta http-equiv="Content-Type" content="text/html">
<meta name="description" content="FFTW 3.3.4">
<meta name="generator" content="makeinfo 4.13">
<link title="Top" rel="start" href="index.html#Top">
<link rel="up" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" title="Distributed-memory FFTW with MPI">
<link rel="prev" href="FFTW-MPI-Performance-Tips.html#FFTW-MPI-Performance-Tips" title="FFTW MPI Performance Tips">
<link rel="next" href="FFTW-MPI-Reference.html#FFTW-MPI-Reference" title="FFTW MPI Reference">
<link href="http://www.gnu.org/software/texinfo/" rel="generator-home" title="Texinfo Homepage">
<!--
This manual is for FFTW
(version 3.3.4, 20 September 2013).

Copyright (C) 2003 Matteo Frigo.

Copyright (C) 2003 Massachusetts Institute of Technology.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission
notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of
this manual under the conditions for verbatim copying, provided
that the entire resulting derived work is distributed under the
terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this
manual into another language, under the above conditions for
modified versions, except that this permission notice may be
stated in a translation approved by the Free Software Foundation.
-->
<meta http-equiv="Content-Style-Type" content="text/css">
<style type="text/css"><!--
pre.display { font-family:inherit }
pre.format { font-family:inherit }
pre.smalldisplay { font-family:inherit; font-size:smaller }
pre.smallformat { font-family:inherit; font-size:smaller }
pre.smallexample { font-size:smaller }
pre.smalllisp { font-size:smaller }
span.sc { font-variant:small-caps }
span.roman { font-family:serif; font-weight:normal; }
span.sansserif { font-family:sans-serif; font-weight:normal; }
--></style>
</head>
<body>
<div class="node">
<a name="Combining-MPI-and-Threads"></a>
<p>
Next:&nbsp;<a rel="next" accesskey="n" href="FFTW-MPI-Reference.html#FFTW-MPI-Reference">FFTW MPI Reference</a>,
Previous:&nbsp;<a rel="previous" accesskey="p" href="FFTW-MPI-Performance-Tips.html#FFTW-MPI-Performance-Tips">FFTW MPI Performance Tips</a>,
Up:&nbsp;<a rel="up" accesskey="u" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI">Distributed-memory FFTW with MPI</a>
<hr>
</div>

<h3 class="section">6.11 Combining MPI and Threads</h3>

<p><a name="index-threads-430"></a>
In certain cases, it may be advantageous to combine MPI
(distributed-memory) and threads (shared-memory) parallelization.
FFTW supports this, with certain caveats.  For example, if you have a
cluster of 4-processor shared-memory nodes, you may want to use
threads within the nodes and MPI between the nodes, instead of MPI for
all parallelization.

<p>In particular, it is possible to seamlessly combine the MPI FFTW
routines with the multi-threaded FFTW routines (see <a href="Multi_002dthreaded-FFTW.html#Multi_002dthreaded-FFTW">Multi-threaded FFTW</a>).  However, some care must be taken in the initialization code,
which should look something like this:
<pre class="example">     #include &lt;mpi.h&gt;
     #include &lt;fftw3-mpi.h&gt;

     int threads_ok;

     int main(int argc, char **argv)
     {
         int provided;
         MPI_Init_thread(&amp;argc, &amp;argv, MPI_THREAD_FUNNELED, &amp;provided);
         threads_ok = provided &gt;= MPI_THREAD_FUNNELED;

         if (threads_ok) threads_ok = fftw_init_threads();
         fftw_mpi_init();

         ...
         if (threads_ok) fftw_plan_with_nthreads(...);
         ...

         MPI_Finalize();
         return 0;
     }
</pre>
<p><a name="index-fftw_005fmpi_005finit-431"></a><a name="index-fftw_005finit_005fthreads-432"></a><a name="index-fftw_005fplan_005fwith_005fnthreads-433"></a>
First, note that instead of calling <code>MPI_Init</code>, you should call
<code>MPI_Init_thread</code>, which is the initialization routine defined
by the MPI-2 standard to indicate to MPI that your program will be
multithreaded.  We pass <code>MPI_THREAD_FUNNELED</code>, which indicates
that we will only call MPI routines from the main thread.  (FFTW will
launch additional threads internally, but the extra threads will not
call MPI code.)  (You may also pass <code>MPI_THREAD_SERIALIZED</code> or
<code>MPI_THREAD_MULTIPLE</code>, which request additional multithreading
support from the MPI implementation, but this is not required by
FFTW.)  The <code>provided</code> parameter returns the level of thread
support actually supported by your MPI implementation; this
<em>must</em> be at least <code>MPI_THREAD_FUNNELED</code> if you want to call
the FFTW threads routines, so we define a global variable
<code>threads_ok</code> to record this.  You should only call
<code>fftw_init_threads</code> or <code>fftw_plan_with_nthreads</code> if
<code>threads_ok</code> is true.  For more information on thread safety in
MPI, see the
<a href="http://www.mpi-forum.org/docs/mpi-20-html/node162.htm">MPI and Threads</a> section of the MPI-2 standard.
<a name="index-thread-safety-434"></a>

<p>Second, we must call <code>fftw_init_threads</code> <em>before</em>
<code>fftw_mpi_init</code>.  This is critical for technical reasons having
to do with how FFTW initializes its list of algorithms.

<p>Then, if you call <code>fftw_plan_with_nthreads(N)</code>, <em>every</em> MPI
process will launch (up to) <code>N</code> threads to parallelize its transforms.

<p>For example, in the hypothetical cluster of 4-processor nodes, you
might wish to launch only a single MPI process per node, and then call
<code>fftw_plan_with_nthreads(4)</code> on each process to use all
processors in the nodes.

<p>This may or may not be faster than simply using as many MPI processes
as you have processors, however.  On the one hand, using threads
within a node eliminates the need for explicit message passing within
the node.  On the other hand, FFTW's transpose routines are not
multi-threaded, and this means that the communications that do take
place will not benefit from parallelization within the node.
Moreover, many MPI implementations already have optimizations to
exploit shared memory when it is available, so adding the
multithreaded FFTW on top of this may be superfluous.
<a name="index-transpose-435"></a>
<!-- -->

</body></html>