<html lang="en">
<head>
<title>FFTW MPI Wisdom - FFTW 3.3.3</title>
<meta http-equiv="Content-Type" content="text/html">
<meta name="description" content="FFTW 3.3.3">
<meta name="generator" content="makeinfo 4.13">
<link title="Top" rel="start" href="index.html#Top">
<link rel="up" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI" title="Distributed-memory FFTW with MPI">
<link rel="prev" href="FFTW-MPI-Transposes.html#FFTW-MPI-Transposes" title="FFTW MPI Transposes">
<link rel="next" href="Avoiding-MPI-Deadlocks.html#Avoiding-MPI-Deadlocks" title="Avoiding MPI Deadlocks">
<link href="http://www.gnu.org/software/texinfo/" rel="generator-home" title="Texinfo Homepage">
<!--
This manual is for FFTW
(version 3.3.3, 25 November 2012).

Copyright (C) 2003 Matteo Frigo.

Copyright (C) 2003 Massachusetts Institute of Technology.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission
notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of
this manual under the conditions for verbatim copying, provided
that the entire resulting derived work is distributed under the
terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this
manual into another language, under the above conditions for
modified versions, except that this permission notice may be
stated in a translation approved by the Free Software Foundation.
-->
<meta http-equiv="Content-Style-Type" content="text/css">
<style type="text/css"><!--
  pre.display { font-family:inherit }
  pre.format { font-family:inherit }
  pre.smalldisplay { font-family:inherit; font-size:smaller }
  pre.smallformat { font-family:inherit; font-size:smaller }
  pre.smallexample { font-size:smaller }
  pre.smalllisp { font-size:smaller }
  span.sc { font-variant:small-caps }
  span.roman { font-family:serif; font-weight:normal; }
  span.sansserif { font-family:sans-serif; font-weight:normal; }
--></style>
</head>
<body>
<div class="node">
<a name="FFTW-MPI-Wisdom"></a>
<p>
Next: <a rel="next" accesskey="n" href="Avoiding-MPI-Deadlocks.html#Avoiding-MPI-Deadlocks">Avoiding MPI Deadlocks</a>,
Previous: <a rel="previous" accesskey="p" href="FFTW-MPI-Transposes.html#FFTW-MPI-Transposes">FFTW MPI Transposes</a>,
Up: <a rel="up" accesskey="u" href="Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI">Distributed-memory FFTW with MPI</a>
<hr>
</div>

<h3 class="section">6.8 FFTW MPI Wisdom</h3>

<p><a name="index-wisdom-410"></a><a name="index-saving-plans-to-disk-411"></a>
FFTW's &ldquo;wisdom&rdquo; facility (see <a href="Words-of-Wisdom_002dSaving-Plans.html#Words-of-Wisdom_002dSaving-Plans">Words of Wisdom-Saving Plans</a>) can
be used to save MPI plans as well as uniprocessor plans.
However, for MPI there are several unavoidable complications.

<p><a name="index-MPI-I_002fO-412"></a>First, the MPI standard does not guarantee that every process can
perform file I/O (at least, not using C stdio routines)&mdash;in general,
we may only assume that process 0 is capable of I/O.<a rel="footnote" href="#fn-1" name="fnd-1"><sup>1</sup></a> So, if we
want to export the wisdom from a single process to a file, we must
first export the wisdom to a string, then send it to process 0, then
write it to a file.

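<p>(To make this complication concrete, here is a hedged sketch, not FFTW's own code, of what such a manual export might look like. The helper name and message tags are our invention, error handling is omitted, and the usual <code>&lt;fftw3-mpi.h&gt;</code>, <code>&lt;stdio.h&gt;</code>, <code>&lt;stdlib.h&gt;</code>, and <code>&lt;string.h&gt;</code> headers are assumed. The wisdom of process <code>src</code> is serialized with <code>fftw_export_wisdom_to_string</code>, shipped to process 0, and written there.)

<pre class="example">     /* sketch: write the wisdom of process `src' to a file via process 0 */
     void export_wisdom_from(int src, const char *fname, MPI_Comm comm)
     {
          int rank;
          MPI_Comm_rank(comm, &amp;rank);
          if (rank == src &amp;&amp; src != 0) {
               char *s = fftw_export_wisdom_to_string(); /* malloc'ed */
               int len = (int) strlen(s) + 1;
               MPI_Send(&amp;len, 1, MPI_INT, 0, 0, comm);
               MPI_Send(s, len, MPI_CHAR, 0, 1, comm);
               free(s);
          }
          else if (rank == 0) {
               char *s;
               if (src != 0) { /* receive the serialized wisdom */
                    int len;
                    MPI_Recv(&amp;len, 1, MPI_INT, src, 0, comm, MPI_STATUS_IGNORE);
                    s = malloc(len);
                    MPI_Recv(s, len, MPI_CHAR, src, 1, comm, MPI_STATUS_IGNORE);
               }
               else
                    s = fftw_export_wisdom_to_string();
               FILE *f = fopen(fname, "w");
               fputs(s, f);
               fclose(f);
               free(s);
          }
     }
</pre>

<p>(In practice, the <code>fftw_mpi_gather_wisdom</code> and <code>fftw_mpi_broadcast_wisdom</code> functions described in this section make such hand-rolled messaging unnecessary.)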
<p>Second, in principle we may want to have separate wisdom for every
process, since in general the processes may run on different hardware
even for a single MPI program. However, in practice FFTW's MPI code
is designed for the case of homogeneous hardware (see <a href="Load-balancing.html#Load-balancing">Load balancing</a>), and in this case it is convenient to use the same wisdom
for every process. Thus, we need a mechanism to synchronize the wisdom.

<p>To address both of these problems, FFTW provides the following two
functions:

<pre class="example">     void fftw_mpi_broadcast_wisdom(MPI_Comm comm);
     void fftw_mpi_gather_wisdom(MPI_Comm comm);
</pre>
<p><a name="index-fftw_005fmpi_005fgather_005fwisdom-413"></a><a name="index-fftw_005fmpi_005fbroadcast_005fwisdom-414"></a>
Given a communicator <code>comm</code>, <code>fftw_mpi_broadcast_wisdom</code>
will broadcast the wisdom from process 0 to all other processes.
Conversely, <code>fftw_mpi_gather_wisdom</code> will collect wisdom from all
processes onto process 0. (If the plans created for the same problem
by different processes are not the same, <code>fftw_mpi_gather_wisdom</code>
will arbitrarily choose one of the plans.) Both of these functions
may result in suboptimal plans for different processes if the
processes are running on non-identical hardware. Both of these
functions are <em>collective</em> calls, which means that they must be
executed by all processes in the communicator.
<a name="index-collective-function-415"></a>

<p>So, for example, a typical code snippet to import wisdom from a file
and use it on all processes would be:

<pre class="example">     {
         int rank;

         fftw_mpi_init();
         MPI_Comm_rank(MPI_COMM_WORLD, &amp;rank);
         if (rank == 0) fftw_import_wisdom_from_filename("mywisdom");
         fftw_mpi_broadcast_wisdom(MPI_COMM_WORLD);
     }
</pre>
<p>(Note that we must call <code>fftw_mpi_init</code> before importing any
wisdom that might contain MPI plans.) Similarly, a typical code
snippet to export wisdom from all processes to a file is:
<a name="index-fftw_005fmpi_005finit-416"></a>

<pre class="example">     {
         int rank;

         fftw_mpi_gather_wisdom(MPI_COMM_WORLD);
         MPI_Comm_rank(MPI_COMM_WORLD, &amp;rank);
         if (rank == 0) fftw_export_wisdom_to_filename("mywisdom");
     }
</pre>
<!-- -->
<div class="footnote">
<hr>
<h4>Footnotes</h4><p class="footnote"><small>[<a name="fn-1" href="#fnd-1">1</a>]</small> In fact,
even this assumption is not technically guaranteed by the standard,
although it seems to be universal in actual MPI implementations and is
widely assumed by MPI-using software. Technically, you need to query
the <code>MPI_IO</code> attribute of <code>MPI_COMM_WORLD</code> with
<code>MPI_Attr_get</code>. If this attribute is <code>MPI_PROC_NULL</code>, no
I/O is possible. If it is <code>MPI_ANY_SOURCE</code>, any process can
perform I/O. Otherwise, it is the rank of a process that can perform
I/O ... but since it is not guaranteed to yield the <em>same</em> rank
on all processes, you have to do an <code>MPI_Allreduce</code> of some kind
if you want all processes to agree about which is going to do I/O.
And even then, the standard only guarantees that this process can
perform output, but not input. See e.g. <cite>Parallel Programming
with MPI</cite> by P. S. Pacheco, section 8.1.3. Needless to say, in our
experience virtually no MPI programmers worry about this.</p>
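<p>(For completeness, the query just described might be coded as follows. This is our own hedged sketch, not part of FFTW: the helper name and the min-reduction convention for agreeing on a rank are arbitrary illustrative choices, and <code>&lt;mpi.h&gt;</code> is assumed.)

<pre class="example">     /* sketch: return a rank, agreed upon by all processes, that can
        perform output, or -1 if no I/O is possible */
     int agreed_io_rank(void)
     {
          int *attr, flag, size, cand, agreed;
          MPI_Comm_size(MPI_COMM_WORLD, &amp;size);
          MPI_Attr_get(MPI_COMM_WORLD, MPI_IO, &amp;attr, &amp;flag);
          if (!flag || *attr == MPI_PROC_NULL)
               cand = size;   /* this process knows no I/O-capable rank */
          else if (*attr == MPI_ANY_SOURCE)
               cand = 0;      /* any process can do I/O; propose rank 0 */
          else
               cand = *attr;  /* one specific rank can do I/O */
          /* the attribute need not be the same on every process, so agree: */
          MPI_Allreduce(&amp;cand, &amp;agreed, 1, MPI_INT, MPI_MIN, MPI_COMM_WORLD);
          return agreed == size ? -1 : agreed;
     }
</pre>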

<hr></div>

</body></html>