This is fftw3.info, produced by makeinfo version 4.13 from fftw3.texi.

This manual is for FFTW (version 3.3.3, 25 November 2012).

Copyright (C) 2003 Matteo Frigo.

Copyright (C) 2003 Massachusetts Institute of Technology.

     Permission is granted to make and distribute verbatim copies of
     this manual provided the copyright notice and this permission
     notice are preserved on all copies.

     Permission is granted to copy and distribute modified versions of
     this manual under the conditions for verbatim copying, provided
     that the entire resulting derived work is distributed under the
     terms of a permission notice identical to this one.

     Permission is granted to copy and distribute translations of this
     manual into another language, under the above conditions for
     modified versions, except that this permission notice may be
     stated in a translation approved by the Free Software Foundation.

INFO-DIR-SECTION Texinfo documentation system
START-INFO-DIR-ENTRY
* fftw3: (fftw3).    FFTW User's Manual.
END-INFO-DIR-ENTRY


File: fftw3.info,  Node: Top,  Next: Introduction,  Prev: (dir),  Up: (dir)

FFTW User Manual
****************

Welcome to FFTW, the Fastest Fourier Transform in the West.  FFTW is a
collection of fast C routines to compute the discrete Fourier
transform.  This manual documents FFTW version 3.3.3.

* Menu:

* Introduction::
* Tutorial::
* Other Important Topics::
* FFTW Reference::
* Multi-threaded FFTW::
* Distributed-memory FFTW with MPI::
* Calling FFTW from Modern Fortran::
* Calling FFTW from Legacy Fortran::
* Upgrading from FFTW version 2::
* Installation and Customization::
* Acknowledgments::
* License and Copyright::
* Concept Index::
* Library Index::


File: fftw3.info,  Node: Introduction,  Next: Tutorial,  Prev: Top,  Up: Top

1 Introduction
**************

This manual documents version 3.3.3 of FFTW, the _Fastest Fourier
Transform in the West_.  FFTW is a comprehensive collection of fast C
routines for computing the discrete Fourier transform (DFT) and
various special cases thereof.

   * FFTW computes the DFT of complex data, real data, even- or
     odd-symmetric real data (these symmetric transforms are usually
     known as the discrete cosine or sine transform, respectively),
     and the discrete Hartley transform (DHT) of real data.

   * The input data can have arbitrary length.  FFTW employs
     O(n log n) algorithms for all lengths, including prime numbers.

   * FFTW supports arbitrary multi-dimensional data.

   * FFTW supports the SSE, SSE2, AVX, Altivec, and MIPS PS
     instruction sets.

   * FFTW includes parallel (multi-threaded) transforms for
     shared-memory systems.

   * Starting with version 3.3, FFTW includes distributed-memory
     parallel transforms using MPI.

   We assume herein that you are familiar with the properties and uses
of the DFT that are relevant to your application.  Otherwise, see e.g.
`The Fast Fourier Transform and Its Applications' by E. O. Brigham
(Prentice-Hall, Englewood Cliffs, NJ, 1988).  Our web page
(http://www.fftw.org) also has links to FFT-related information online.

   In order to use FFTW effectively, you need to learn one basic concept
of FFTW's internal structure: FFTW does not use a fixed algorithm for
computing the transform, but instead it adapts the DFT algorithm to
details of the underlying hardware in order to maximize performance.
Hence, the computation of the transform is split into two phases.
First, FFTW's "planner" "learns" the fastest way to compute the
transform on your machine.  The planner produces a data structure
called a "plan" that contains this information.  Subsequently, the plan
is "executed" to transform the array of input data as dictated by the
plan.  The plan can be reused as many times as needed.  In typical
high-performance applications, many transforms of the same size are
computed and, consequently, a relatively expensive initialization of
this sort is acceptable.  On the other hand, if you need a single
transform of a given size, the one-time cost of the planner becomes
significant.  For this case, FFTW provides fast planners based on
heuristics or on previously computed plans.

   FFTW supports transforms of data with arbitrary length, rank,
multiplicity, and a general memory layout.  In simple cases, however,
this generality may be unnecessary and confusing.  Consequently, we
organized the interface to FFTW into three levels of increasing
generality.

   * The "basic interface" computes a single transform of contiguous
     data.

   * The "advanced interface" computes transforms of multiple or
     strided arrays.

   * The "guru interface" supports the most general data layouts,
     multiplicities, and strides.

We expect that most users will be best served by the basic interface,
whereas the guru interface requires careful attention to the
documentation to avoid problems.

   Besides the automatic performance adaptation performed by the
planner, it is also possible for advanced users to customize FFTW
manually.  For example, if code space is a concern, we provide a tool
that links only the subset of FFTW needed by your application.
Conversely, you may need to extend FFTW because the standard
distribution is not sufficient for your needs.  For example, the
standard FFTW distribution works most efficiently for arrays whose size
can be factored into small primes (2, 3, 5, and 7), and otherwise it
uses a slower general-purpose routine.  If you need efficient
transforms of other sizes, you can use FFTW's code generator, which
produces fast C programs ("codelets") for any particular array size you
may care about.  For example, if you need transforms of size
513 = 19 x 3^3, you can customize FFTW to support the factor 19
efficiently.

   For more information regarding FFTW, see the paper, "The Design and
Implementation of FFTW3," by M. Frigo and S. G. Johnson, which was an
invited paper in `Proc. IEEE' 93 (2), p. 216 (2005).  The code
generator is described in the paper "A fast Fourier transform
compiler," by M. Frigo, in the `Proceedings of the 1999 ACM SIGPLAN
Conference on Programming Language Design and Implementation (PLDI),
Atlanta, Georgia, May 1999'.  These papers, along with the latest
version of FFTW, the FAQ, benchmarks, and other links, are available at
the FFTW home page (http://www.fftw.org).

   The current version of FFTW incorporates many good ideas from the
past thirty years of FFT literature.  In one way or another, FFTW uses
the Cooley-Tukey algorithm, the prime factor algorithm, Rader's
algorithm for prime sizes, and a split-radix algorithm (with a
"conjugate-pair" variation pointed out to us by Dan Bernstein).  FFTW's
code generator also produces new algorithms that we do not completely
understand.  The reader is referred to the cited papers for the
appropriate references.

   The rest of this manual is organized as follows.  We first discuss
the sequential (single-processor) implementation.  We start by
describing the basic interface/features of FFTW in *note Tutorial::.
Next, *note Other Important Topics:: discusses data alignment (*note
SIMD alignment and fftw_malloc::), the storage scheme of
multi-dimensional arrays (*note Multi-dimensional Array Format::), and
FFTW's mechanism for storing plans on disk (*note Words of
Wisdom-Saving Plans::).  Next, *note FFTW Reference:: provides
comprehensive documentation of all FFTW's features.  Parallel
transforms are discussed in their own chapters: *note Multi-threaded
FFTW:: and *note Distributed-memory FFTW with MPI::.  Fortran
programmers can also use FFTW, as described in *note Calling FFTW from
Legacy Fortran:: and *note Calling FFTW from Modern Fortran::.  *note
Installation and Customization:: explains how to install FFTW in your
computer system and how to adapt FFTW to your needs.  License and
copyright information is given in *note License and Copyright::.
Finally, we thank all the people who helped us in *note
Acknowledgments::.


File: fftw3.info,  Node: Tutorial,  Next: Other Important Topics,  Prev: Introduction,  Up: Top

2 Tutorial
**********

* Menu:

* Complex One-Dimensional DFTs::
* Complex Multi-Dimensional DFTs::
* One-Dimensional DFTs of Real Data::
* Multi-Dimensional DFTs of Real Data::
* More DFTs of Real Data::

This chapter describes the basic usage of FFTW, i.e., how to compute the
Fourier transform of a single array.  This chapter tells the truth, but
not the _whole_ truth.  Specifically, FFTW implements additional
routines and flags that are not documented here, although in many cases
we try to indicate where added capabilities exist.  For more complete
information, see *note FFTW Reference::.
(Note that you need to compile and install FFTW before you can use it
in a program.  For the details of the installation, see *note
Installation and Customization::.)

   We recommend that you read this tutorial in order.(1)  At the least,
read the first section (*note Complex One-Dimensional DFTs::) before
reading any of the others, even if your main interest lies in one of
the other transform types.

   Users of FFTW version 2 and earlier may also want to read *note
Upgrading from FFTW version 2::.

   ---------- Footnotes ----------

   (1) You can read the tutorial in bit-reversed order after computing
your first transform.


File: fftw3.info,  Node: Complex One-Dimensional DFTs,  Next: Complex Multi-Dimensional DFTs,  Prev: Tutorial,  Up: Tutorial

2.1 Complex One-Dimensional DFTs
================================

     Plan: To bother about the best method of accomplishing an
     accidental result.  [Ambrose Bierce, `The Enlarged Devil's
     Dictionary'.]

   The basic usage of FFTW to compute a one-dimensional DFT of size `N'
is simple, and it typically looks something like this code:

     #include <fftw3.h>
     ...
     {
         fftw_complex *in, *out;
         fftw_plan p;
         ...
         in = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * N);
         out = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * N);
         p = fftw_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
         ...
         fftw_execute(p); /* repeat as needed */
         ...
         fftw_destroy_plan(p);
         fftw_free(in); fftw_free(out);
     }

   You must link this code with the `fftw3' library.  On Unix systems,
link with `-lfftw3 -lm'.

   The example code first allocates the input and output arrays.  You
can allocate them in any way that you like, but we recommend using
`fftw_malloc', which behaves like `malloc' except that it properly
aligns the array when SIMD instructions (such as SSE and Altivec) are
available (*note SIMD alignment and fftw_malloc::).  [Alternatively, we
provide a convenient wrapper function `fftw_alloc_complex(N)' which has
the same effect.]

   The data is an array of type `fftw_complex', which is by default a
`double[2]' composed of the real (`in[i][0]') and imaginary
(`in[i][1]') parts of a complex number.

   The next step is to create a "plan", which is an object that
contains all the data that FFTW needs to compute the FFT.  This
function creates the plan:

     fftw_plan fftw_plan_dft_1d(int n, fftw_complex *in, fftw_complex *out,
                                int sign, unsigned flags);

   The first argument, `n', is the size of the transform you are trying
to compute.  The size `n' can be any positive integer, but sizes that
are products of small factors are transformed most efficiently
(although prime sizes still use an O(n log n) algorithm).

   The next two arguments are pointers to the input and output arrays of
the transform.  These pointers can be equal, indicating an "in-place"
transform.

   The fourth argument, `sign', can be either `FFTW_FORWARD' (`-1') or
`FFTW_BACKWARD' (`+1'), and indicates the direction of the transform
you are interested in; technically, it is the sign of the exponent in
the transform.

   The `flags' argument is usually either `FFTW_MEASURE' or
`FFTW_ESTIMATE'.  `FFTW_MEASURE' instructs FFTW to run and measure the
execution time of several FFTs in order to find the best way to compute
the transform of size `n'.  This process takes some time (usually a few
seconds), depending on your machine and on the size of the transform.
`FFTW_ESTIMATE', on the contrary, does not run any computation and just
builds a reasonable plan that is probably sub-optimal.  In short, if
your program performs many transforms of the same size and
initialization time is not important, use `FFTW_MEASURE'; otherwise use
the estimate.

   _You must create the plan before initializing the input_, because
`FFTW_MEASURE' overwrites the `in'/`out' arrays.  (Technically,
`FFTW_ESTIMATE' does not touch your arrays, but you should always
create plans first just to be sure.)

   Once the plan has been created, you can use it as many times as you
like for transforms on the specified `in'/`out' arrays, computing the
actual transforms via `fftw_execute(plan)':

     void fftw_execute(const fftw_plan plan);

   The DFT results are stored in-order in the array `out', with the
zero-frequency (DC) component in `out[0]'.  If `in != out', the
transform is "out-of-place" and the input array `in' is not modified.
Otherwise, the input array is overwritten with the transform.

   If you want to transform a _different_ array of the same size, you
can create a new plan with `fftw_plan_dft_1d' and FFTW automatically
reuses the information from the previous plan, if possible.
Alternatively, with the "guru" interface you can apply a given plan to
a different array, if you are careful.  *Note FFTW Reference::.

   When you are done with the plan, you deallocate it by calling
`fftw_destroy_plan(plan)':

     void fftw_destroy_plan(fftw_plan plan);

   If you allocate an array with `fftw_malloc()' you must deallocate it
with `fftw_free()'.  Do not use `free()' or, heaven forbid, `delete'.

   FFTW computes an _unnormalized_ DFT.  Thus, computing a forward
followed by a backward transform (or vice versa) results in the original
array scaled by `n'.  For the definition of the DFT, see *note What
FFTW Really Computes::.

   If you have a C compiler, such as `gcc', that supports the C99
standard, and you `#include <complex.h>' _before_ `<fftw3.h>', then
`fftw_complex' is the native double-precision complex type and you can
manipulate it with ordinary arithmetic.  Otherwise, FFTW defines its
own complex type, which is bit-compatible with the C99 complex type.
*Note Complex numbers::.  (The C++ `<complex>' template class may also
be usable via a typecast.)
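
   To make the preceding points concrete, here is a minimal,
self-contained sketch (illustrative only, not part of the FFTW
distribution; the size `N' and the input values are arbitrary).  It
includes `<complex.h>' before `<fftw3.h>' to get the native C99 complex
type, creates the plans before initializing the input, and undoes the
factor of `n' left by an unnormalized forward/backward round trip:

     #include <complex.h>  /* before <fftw3.h>: fftw_complex is C99 double complex */
     #include <fftw3.h>

     #define N 64

     int main(void)
     {
         fftw_complex *in  = fftw_alloc_complex(N);
         fftw_complex *out = fftw_alloc_complex(N);

         /* Create the plans first; FFTW_MEASURE would clobber the arrays. */
         fftw_plan fwd = fftw_plan_dft_1d(N, in, out, FFTW_FORWARD,  FFTW_ESTIMATE);
         fftw_plan bwd = fftw_plan_dft_1d(N, out, in, FFTW_BACKWARD, FFTW_ESTIMATE);

         for (int i = 0; i < N; ++i)
             in[i] = (double) i + 0.5 * I;   /* ordinary C99 complex arithmetic */

         fftw_execute(fwd);                  /* out = DFT(in) */
         fftw_execute(bwd);                  /* in  = N * original in */

         for (int i = 0; i < N; ++i)
             in[i] /= N;                     /* undo the unnormalized scaling */

         fftw_destroy_plan(fwd);
         fftw_destroy_plan(bwd);
         fftw_free(in);
         fftw_free(out);
         return 0;
     }

   As before, link with `-lfftw3 -lm'.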

   To use single or long-double precision versions of FFTW, replace the
`fftw_' prefix by `fftwf_' or `fftwl_' and link with `-lfftw3f' or
`-lfftw3l', but use the _same_ `<fftw3.h>' header file.

   Many more flags exist besides `FFTW_MEASURE' and `FFTW_ESTIMATE'.
For example, use `FFTW_PATIENT' if you're willing to wait even longer
for a possibly even faster plan (*note FFTW Reference::).  You can also
save plans for future use, as described by *note Words of Wisdom-Saving
Plans::.


File: fftw3.info,  Node: Complex Multi-Dimensional DFTs,  Next: One-Dimensional DFTs of Real Data,  Prev: Complex One-Dimensional DFTs,  Up: Tutorial

2.2 Complex Multi-Dimensional DFTs
==================================

Multi-dimensional transforms work much the same way as one-dimensional
transforms: you allocate arrays of `fftw_complex' (preferably using
`fftw_malloc'), create an `fftw_plan', execute it as many times as you
want with `fftw_execute(plan)', and clean up with
`fftw_destroy_plan(plan)' (and `fftw_free').

   FFTW provides two routines for creating plans for 2d and 3d
transforms, and one routine for creating plans of arbitrary
dimensionality.  The 2d and 3d routines have the following signature:

     fftw_plan fftw_plan_dft_2d(int n0, int n1,
                                fftw_complex *in, fftw_complex *out,
                                int sign, unsigned flags);
     fftw_plan fftw_plan_dft_3d(int n0, int n1, int n2,
                                fftw_complex *in, fftw_complex *out,
                                int sign, unsigned flags);

   These routines create plans for `n0' by `n1' two-dimensional (2d)
transforms and `n0' by `n1' by `n2' 3d transforms, respectively.  All
of these transforms operate on contiguous arrays in the C-standard
"row-major" order, so that the last dimension has the fastest-varying
index in the array.  This layout is described further in *note
Multi-dimensional Array Format::.

   FFTW can also compute transforms of higher dimensionality.  In order
to avoid confusion between the various meanings of the word
"dimension", we use the term _rank_ to denote the number of independent
indices in an array.(1)  For example, we say that a 2d transform has
rank 2, a 3d transform has rank 3, and so on.  You can plan transforms
of arbitrary rank by means of the following function:

     fftw_plan fftw_plan_dft(int rank, const int *n,
                             fftw_complex *in, fftw_complex *out,
                             int sign, unsigned flags);

   Here, `n' is a pointer to an array `n[rank]' denoting an `n[0]' by
`n[1]' by ... by `n[rank-1]' transform.  Thus, for example, the call

     fftw_plan_dft_2d(n0, n1, in, out, sign, flags);

is equivalent to the following code fragment:

     int n[2];
     n[0] = n0;
     n[1] = n1;
     fftw_plan_dft(2, n, in, out, sign, flags);

`fftw_plan_dft' is not restricted to 2d and 3d transforms, however, but
it can plan transforms of arbitrary rank.
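
   As an illustrative sketch (again, not taken from the FFTW sources;
the sizes and initialization are arbitrary), the following plans and
executes an in-place `n0' by `n1' transform on a contiguous row-major
array, where element (i, j) lives at offset `i*n1 + j':

     #include <fftw3.h>

     int main(void)
     {
         const int n0 = 8, n1 = 10;
         fftw_complex *data = fftw_alloc_complex(n0 * n1);

         /* Plan first; an in-place transform passes the same pointer twice. */
         fftw_plan p = fftw_plan_dft_2d(n0, n1, data, data,
                                        FFTW_FORWARD, FFTW_ESTIMATE);

         /* Row-major layout: element (i, j) is data[i * n1 + j]. */
         for (int i = 0; i < n0; ++i)
             for (int j = 0; j < n1; ++j) {
                 data[i * n1 + j][0] = i + j;   /* real part */
                 data[i * n1 + j][1] = 0.0;     /* imaginary part */
             }

         fftw_execute(p);

         fftw_destroy_plan(p);
         fftw_free(data);
         return 0;
     }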

   You may have noticed that all the planner routines described so far
have overlapping functionality.  For example, you can plan a 1d or 2d
transform by using `fftw_plan_dft' with a `rank' of `1' or `2', or even
by calling `fftw_plan_dft_3d' with `n0' and/or `n1' equal to `1' (with
no loss in efficiency).  This pattern continues, and FFTW's planning
routines in general form a "partial order," sequences of interfaces
with strictly increasing generality but correspondingly greater
complexity.

   `fftw_plan_dft' is the most general complex-DFT routine that we
describe in this tutorial, but there are also the advanced and guru
interfaces, which allow one to efficiently combine multiple/strided
transforms into a single FFTW plan, transform a subset of a larger
multi-dimensional array, and/or to handle more general complex-number
formats.  For more information, see *note FFTW Reference::.

   ---------- Footnotes ----------

   (1) The term "rank" is commonly used in the APL, FORTRAN, and Common
Lisp traditions, although it is not so common in the C world.


File: fftw3.info,  Node: One-Dimensional DFTs of Real Data,  Next: Multi-Dimensional DFTs of Real Data,  Prev: Complex Multi-Dimensional DFTs,  Up: Tutorial

2.3 One-Dimensional DFTs of Real Data
=====================================

In many practical applications, the input data `in[i]' are purely real
numbers, in which case the DFT output satisfies the "Hermitian"
redundancy: `out[i]' is the conjugate of `out[n-i]'.  It is possible to
take advantage of these circumstances in order to achieve roughly a
factor of two improvement in both speed and memory usage.

   In exchange for these speed and space advantages, the user sacrifices
some of the simplicity of FFTW's complex transforms.  First of all, the
input and output arrays are of _different sizes and types_: the input
is `n' real numbers, while the output is `n/2+1' complex numbers (the
non-redundant outputs); this also requires slight "padding" of the
input array for in-place transforms.  Second, the inverse transform
(complex to real) has the side-effect of _overwriting its input array_,
by default.  Neither of these inconveniences should pose a serious
problem for users, but it is important to be aware of them.

   The routines to perform real-data transforms are almost the same as
those for complex transforms: you allocate arrays of `double' and/or
`fftw_complex' (preferably using `fftw_malloc' or
`fftw_alloc_complex'), create an `fftw_plan', execute it as many times
as you want with `fftw_execute(plan)', and clean up with
`fftw_destroy_plan(plan)' (and `fftw_free').  The only differences are
that the input (or output) is of type `double' and there are new
routines to create the plan.  In one dimension:

     fftw_plan fftw_plan_dft_r2c_1d(int n, double *in, fftw_complex *out,
                                    unsigned flags);
     fftw_plan fftw_plan_dft_c2r_1d(int n, fftw_complex *in, double *out,
                                    unsigned flags);

for the real input to complex-Hermitian output ("r2c") and
complex-Hermitian input to real output ("c2r") transforms.  Unlike the
complex DFT planner, there is no `sign' argument.
Instead, r2c DFTs are always `FFTW_FORWARD' and c2r DFTs are always
`FFTW_BACKWARD'.  (For single/long-double precision `fftwf' and
`fftwl', `double' should be replaced by `float' and `long double',
respectively.)

   Here, `n' is the "logical" size of the DFT, not necessarily the
physical size of the array.  In particular, the real (`double') array
has `n' elements, while the complex (`fftw_complex') array has `n/2+1'
elements (where the division is rounded down).  For an in-place
transform, `in' and `out' are aliased to the same array, which must be
big enough to hold both; so, the real array would actually have
`2*(n/2+1)' elements, where the elements beyond the first `n' are
unused padding.  (Note that this is very different from the concept of
"zero-padding" a transform to a larger length, which changes the
logical size of the DFT by actually adding new input data.)  The kth
element of the complex array is exactly the same as the kth element of
the corresponding complex DFT.  All positive `n' are supported;
products of small factors are most efficient, but an O(n log n)
algorithm is used even for prime sizes.

   As noted above, the c2r transform destroys its input array even for
out-of-place transforms.  This can be prevented, if necessary, by
including `FFTW_PRESERVE_INPUT' in the `flags', with unfortunately some
sacrifice in performance.  This flag is also not currently supported
for multi-dimensional real DFTs (next section).

   Readers familiar with DFTs of real data will recall that the 0th (the
"DC") and `n/2'-th (the "Nyquist" frequency, when `n' is even) elements
of the complex output are purely real.  Some implementations therefore
store the Nyquist element where the DC imaginary part would go, in
order to make the input and output arrays the same size.  Such packing,
however, does not generalize well to multi-dimensional transforms, and
the space savings are minuscule in any case; FFTW does not support it.

   An alternative interface for one-dimensional r2c and c2r DFTs can be
found in the `r2r' interface (*note The Halfcomplex-format DFT::), with
"halfcomplex"-format output that _is_ the same size (and type) as the
input array.  That interface, although it is not very useful for
multi-dimensional transforms, may sometimes yield better performance.
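
   A minimal sketch of an out-of-place 1d r2c transform follows
(illustrative only; `n = 16' and the input values are arbitrary).  Note
the output array of `n/2+1' complex values:

     #include <fftw3.h>
     #include <stdio.h>

     int main(void)
     {
         const int n = 16;
         double *in = (double *) fftw_malloc(sizeof(double) * n);
         fftw_complex *out = fftw_alloc_complex(n / 2 + 1);

         fftw_plan p = fftw_plan_dft_r2c_1d(n, in, out, FFTW_ESTIMATE);

         for (int i = 0; i < n; ++i)
             in[i] = i % 4;                  /* arbitrary real input */

         fftw_execute(p);

         /* out[0] is the DC term; out[n/2] is the Nyquist term (n even). */
         printf("DC = %g\n", out[0][0]);

         fftw_destroy_plan(p);
         fftw_free(in);
         fftw_free(out);
         return 0;
     }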


File: fftw3.info,  Node: Multi-Dimensional DFTs of Real Data,  Next: More DFTs of Real Data,  Prev: One-Dimensional DFTs of Real Data,  Up: Tutorial

2.4 Multi-Dimensional DFTs of Real Data
=======================================

Multi-dimensional DFTs of real data use the following planner routines:

     fftw_plan fftw_plan_dft_r2c_2d(int n0, int n1,
                                    double *in, fftw_complex *out,
                                    unsigned flags);
     fftw_plan fftw_plan_dft_r2c_3d(int n0, int n1, int n2,
                                    double *in, fftw_complex *out,
                                    unsigned flags);
     fftw_plan fftw_plan_dft_r2c(int rank, const int *n,
                                 double *in, fftw_complex *out,
                                 unsigned flags);

as well as the corresponding `c2r' routines with the input/output types
swapped.  These routines work similarly to their complex analogues,
except for the fact that here the complex output array is cut roughly
in half and the real array requires padding for in-place transforms (as
in 1d, above).

   As before, `n' is the logical size of the array, and the
consequences of this on the format of the complex arrays deserve
careful attention.  Suppose that the real data has dimensions n[0] x
n[1] x n[2] x ... x n[d-1] (in row-major order).  Then, after an r2c
transform, the output is an n[0] x n[1] x n[2] x ... x (n[d-1]/2 + 1)
array of `fftw_complex' values in row-major order, corresponding to
slightly over half of the output of the corresponding complex DFT.
(The division is rounded down.)  The ordering of the data is otherwise
exactly the same as in the complex-DFT case.

   For out-of-place transforms, this is the end of the story: the real
data is stored as a row-major array of size n[0] x n[1] x n[2] x ... x
n[d-1] and the complex data is stored as a row-major array of size
n[0] x n[1] x n[2] x ... x (n[d-1]/2 + 1).

   For in-place transforms, however, extra padding of the real-data
array is necessary because the complex array is larger than the real
array, and the two arrays share the same memory locations.  Thus, for
in-place transforms, the final dimension of the real-data array must be
padded with extra values to accommodate the size of the complex
data--two values if the last dimension is even and one if it is odd.
That is, the last dimension of the real data must physically contain
2 * (n[d-1]/2+1) `double' values (exactly enough to hold the complex
data).  This physical array size does not, however, change the _logical_
array size--only n[d-1] values are actually stored in the last
dimension, and n[d-1] is the last dimension passed to the plan-creation
routine.

   For example, consider the transform of a two-dimensional real array
of size `n0' by `n1'.  The output of the r2c transform is a
two-dimensional complex array of size `n0' by `n1/2+1', where the `y'
dimension has been cut nearly in half because of redundancies in the
output.  Because `fftw_complex' is twice the size of `double', the
output array is slightly bigger than the input array.  Thus, if we want
to compute the transform in place, we must _pad_ the input array so
that it is of size `n0' by `2*(n1/2+1)'.  If `n1' is even, then there
are two padding elements at the end of each row (which need not be
initialized, as they are only used for output).

   These transforms are unnormalized, so an r2c followed by a c2r
transform (or vice versa) will result in the original data scaled by
the number of real data elements--that is, the product of the (logical)
dimensions of the real data.

   (Because the last dimension is treated specially, if it is equal to
`1' the transform is _not_ equivalent to a lower-dimensional r2c/c2r
transform.  In that case, the last complex dimension also has size `1'
(`=1/2+1'), and no advantage is gained over the complex transforms.)
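
   The padding rule above can be seen in the following sketch of an
in-place 2d r2c transform (illustrative only; the sizes are arbitrary).
Each row of the real array physically holds `2*(n1/2+1)' doubles, of
which only the first `n1' are input data:

     #include <fftw3.h>

     int main(void)
     {
         const int n0 = 6, n1 = 8;
         const int n1p = 2 * (n1 / 2 + 1);           /* padded physical row length */
         double *real = fftw_alloc_real(n0 * n1p);
         fftw_complex *cplx = (fftw_complex *) real; /* same storage: in-place */

         fftw_plan p = fftw_plan_dft_r2c_2d(n0, n1, real, cplx, FFTW_ESTIMATE);

         for (int i = 0; i < n0; ++i)
             for (int j = 0; j < n1; ++j)            /* padding entries are ignored */
                 real[i * n1p + j] = i * n1 + j;

         fftw_execute(p);
         /* Complex output element (i, k) is cplx[i * (n1/2 + 1) + k]. */

         fftw_destroy_plan(p);
         fftw_free(real);
         return 0;
     }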


File: fftw3.info,  Node: More DFTs of Real Data,  Prev: Multi-Dimensional DFTs of Real Data,  Up: Tutorial

2.5 More DFTs of Real Data
==========================

* Menu:

* The Halfcomplex-format DFT::
* Real even/odd DFTs (cosine/sine transforms)::
* The Discrete Hartley Transform::

FFTW supports several other transform types via a unified "r2r"
(real-to-real) interface, so called because it takes a real (`double')
array and outputs a real array of the same size.  These r2r transforms
currently fall into three categories: DFTs of real input and
complex-Hermitian output in halfcomplex format, DFTs of real input with
even/odd symmetry (a.k.a. discrete cosine/sine transforms, DCTs/DSTs),
and discrete Hartley transforms (DHTs), all described in more detail by
the following sections.

   The r2r transforms follow the by now familiar interface of creating
an `fftw_plan', executing it with `fftw_execute(plan)', and destroying
it with `fftw_destroy_plan(plan)'.  Furthermore, all r2r transforms
share the same planner interface:

     fftw_plan fftw_plan_r2r_1d(int n, double *in, double *out,
                                fftw_r2r_kind kind, unsigned flags);
     fftw_plan fftw_plan_r2r_2d(int n0, int n1, double *in, double *out,
                                fftw_r2r_kind kind0, fftw_r2r_kind kind1,
                                unsigned flags);
     fftw_plan fftw_plan_r2r_3d(int n0, int n1, int n2,
                                double *in, double *out,
                                fftw_r2r_kind kind0,
                                fftw_r2r_kind kind1,
                                fftw_r2r_kind kind2,
                                unsigned flags);
     fftw_plan fftw_plan_r2r(int rank, const int *n, double *in, double *out,
                             const fftw_r2r_kind *kind, unsigned flags);

   Just as for the complex DFT, these plan 1d/2d/3d/multi-dimensional
transforms for contiguous arrays in row-major order, transforming (real)
input to output of the same size, where `n' specifies the _physical_
dimensions of the arrays.  All positive `n' are supported (with the
exception of `n=1' for the `FFTW_REDFT00' kind, noted in the real-even
subsection below); products of small factors are most efficient
(factorizing `n-1' and `n+1' for `FFTW_REDFT00' and `FFTW_RODFT00'
kinds, described below), but an O(n log n) algorithm is used even for
prime sizes.

   Each dimension has a "kind" parameter, of type `fftw_r2r_kind',
specifying the kind of r2r transform to be used for that dimension.  (In
the case of `fftw_plan_r2r', this is an array `kind[rank]' where
`kind[i]' is the transform kind for the dimension `n[i]'.)  The kind
can be one of a set of predefined constants, defined in the following
subsections.

   In other words, FFTW computes the separable product of the specified
r2r transforms over each dimension, which can be used e.g. for partial
differential equations with mixed boundary conditions.  (For some r2r
kinds, notably the halfcomplex DFT and the DHT, such a separable
product is somewhat problematic in more than one dimension, however, as
is described below.)
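
   As a sketch of this separable-product interface (illustrative only;
the sizes and input are arbitrary), the following plans a 2d transform
that applies a DCT-II--the `FFTW_REDFT10' kind defined in the real-even
subsection below--along each dimension:

     #include <fftw3.h>

     int main(void)
     {
         const int n0 = 8, n1 = 8;
         double *in  = fftw_alloc_real(n0 * n1);
         double *out = fftw_alloc_real(n0 * n1);

         /* One r2r kind per dimension: here a DCT-II along both. */
         fftw_plan p = fftw_plan_r2r_2d(n0, n1, in, out,
                                        FFTW_REDFT10, FFTW_REDFT10,
                                        FFTW_ESTIMATE);

         for (int i = 0; i < n0 * n1; ++i)
             in[i] = i;                      /* arbitrary real input, row-major */

         fftw_execute(p);

         fftw_destroy_plan(p);
         fftw_free(in);
         fftw_free(out);
         return 0;
     }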

   In the current version of FFTW, all r2r transforms except for the
halfcomplex type are computed via pre- or post-processing of
halfcomplex transforms, and they are therefore not as fast as they
could be.  Since most other general DCT/DST codes employ a similar
algorithm, however, FFTW's implementation should provide at least
competitive performance.


File: fftw3.info,  Node: The Halfcomplex-format DFT,  Next: Real even/odd DFTs (cosine/sine transforms),  Prev: More DFTs of Real Data,  Up: More DFTs of Real Data

2.5.1 The Halfcomplex-format DFT
--------------------------------

An r2r kind of `FFTW_R2HC' ("r2hc") corresponds to an r2c DFT (*note
One-Dimensional DFTs of Real Data::) but with "halfcomplex" format
output, and may sometimes be faster and/or more convenient than the
latter.  The inverse "hc2r" transform is of kind `FFTW_HC2R'.  This
consists of the non-redundant half of the complex output for a 1d
real-input DFT of size `n', stored as a sequence of `n' real numbers
(`double') in the format:

     r0, r1, r2, ..., r(n/2), i((n+1)/2-1), ..., i2, i1

   Here, rk is the real part of the kth output, and ik is the imaginary
part.  (Division by 2 is rounded down.)  For a halfcomplex array
`hc[n]', the kth component thus has its real part in `hc[k]' and its
imaginary part in `hc[n-k]', with the exception of `k' `==' `0' or
`n/2' (the latter only if `n' is even)--in these two cases, the
imaginary part is zero due to symmetries of the real-input DFT, and is
not stored.  Thus, the r2hc transform of `n' real values is a
halfcomplex array of length `n', and vice versa for hc2r.

   Aside from the differing format, the output of
`FFTW_R2HC'/`FFTW_HC2R' is otherwise exactly the same as for the
corresponding 1d r2c/c2r transform (i.e. `FFTW_FORWARD'/`FFTW_BACKWARD'
transforms, respectively).  Recall that these transforms are
unnormalized, so r2hc followed by hc2r will result in the original data
multiplied by `n'.  Furthermore, like the c2r transform, an
out-of-place hc2r transform will _destroy its input_ array.

   Although these halfcomplex transforms can be used with the
multi-dimensional r2r interface, the interpretation of such a separable
product of transforms along each dimension is problematic.  For example,
consider a two-dimensional `n0' by `n1', r2hc by r2hc transform planned
by `fftw_plan_r2r_2d(n0, n1, in, out, FFTW_R2HC, FFTW_R2HC,
FFTW_MEASURE)'.  Conceptually, FFTW first transforms the rows (of size
`n1') to produce halfcomplex rows, and then transforms the columns (of
size `n0').  Half of these column transforms, however, are of imaginary
parts, and should therefore be multiplied by i and combined with the
r2hc transforms of the real columns to produce the 2d DFT amplitudes;
FFTW's r2r transform does _not_ perform this combination for you.
Thus, if a multi-dimensional real-input/output DFT is required, we
recommend using the ordinary r2c/c2r interface (*note Multi-Dimensional
DFTs of Real Data::).
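
   The halfcomplex layout can be unpacked as in the following sketch
(illustrative only; `n = 8' and the input are arbitrary):

     #include <fftw3.h>
     #include <stdio.h>

     int main(void)
     {
         const int n = 8;
         double *in = fftw_alloc_real(n), *hc = fftw_alloc_real(n);

         fftw_plan p = fftw_plan_r2r_1d(n, in, hc, FFTW_R2HC, FFTW_ESTIMATE);

         for (int i = 0; i < n; ++i)
             in[i] = i;

         fftw_execute(p);

         /* k = 0 (and k = n/2 for even n) are purely real; otherwise the kth
            output has real part hc[k] and imaginary part hc[n-k]. */
         printf("DC = %g, Nyquist = %g\n", hc[0], hc[n / 2]);
         for (int k = 1; k < n / 2; ++k)
             printf("k=%d: %g %+g*i\n", k, hc[k], hc[n - k]);

         fftw_destroy_plan(p);
         fftw_free(in);
         fftw_free(hc);
         return 0;
     }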


File: fftw3.info,  Node: Real even/odd DFTs (cosine/sine transforms),  Next: The Discrete Hartley Transform,  Prev: The Halfcomplex-format DFT,  Up: More DFTs of Real Data

2.5.2 Real even/odd DFTs (cosine/sine transforms)
-------------------------------------------------

The Fourier transform of a real-even function f(-x) = f(x) is
real-even, and i times the Fourier transform of a real-odd function
f(-x) = -f(x) is real-odd.  Similar results hold for a discrete Fourier
transform, and thus for these symmetries the need for complex
inputs/outputs is entirely eliminated.  Moreover, one gains a factor of
two in speed/space from the fact that the data are real, and an
additional factor of two from the even/odd symmetry: only the
non-redundant (first) half of the array need be stored.  The result is
the real-even DFT ("REDFT") and the real-odd DFT ("RODFT"), also known
as the discrete cosine and sine transforms ("DCT" and "DST"),
respectively.

   (In this section, we describe the 1d transforms; multi-dimensional
transforms are just a separable product of these transforms operating
along each dimension.)

   Because of the discrete sampling, one has an additional choice: is
the data even/odd around a sampling point, or around the point halfway
between two samples?  The latter corresponds to _shifting_ the samples
by _half_ an interval, and gives rise to several transform variants
denoted by REDFTab and RODFTab: a and b are 0 or 1, and indicate
whether the input (a) and/or output (b) are shifted by half a sample (1
means it is shifted).  These are also known as types I-IV of the DCT
and DST, and all four types are supported by FFTW's r2r interface.(1)

   The r2r kinds for the various REDFT and RODFT types supported by
FFTW, along with the boundary conditions at both ends of the _input_
array (`n' real numbers `in[j=0..n-1]'), are:

   * `FFTW_REDFT00' (DCT-I): even around j=0 and even around j=n-1.

   * `FFTW_REDFT10' (DCT-II, "the" DCT): even around j=-0.5 and even
     around j=n-0.5.

   * `FFTW_REDFT01' (DCT-III, "the" IDCT): even around j=0 and odd
     around j=n.

   * `FFTW_REDFT11' (DCT-IV): even around j=-0.5 and odd around j=n-0.5.

   * `FFTW_RODFT00' (DST-I): odd around j=-1 and odd around j=n.

   * `FFTW_RODFT10' (DST-II): odd around j=-0.5 and odd around j=n-0.5.

   * `FFTW_RODFT01' (DST-III): odd around j=-1 and even around j=n-1.

   * `FFTW_RODFT11' (DST-IV): odd around j=-0.5 and even around j=n-0.5.

   Note that these symmetries apply to the "logical" array being
transformed; *there are no constraints on your physical input data*.
So, for example, if you specify a size-5 REDFT00 (DCT-I) of the data
abcde, it corresponds to the DFT of the logical even array abcdedcb of
size 8.  A size-4 REDFT10 (DCT-II) of the data abcd corresponds to the
size-8 logical DFT of the even array abcddcba, shifted by half a sample.
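
   As a sketch (illustrative only; the data are arbitrary), the
following computes a DCT-II of four values and then inverts it with a
DCT-III, dividing by the logical DFT size N = 2n discussed below:

     #include <fftw3.h>

     int main(void)
     {
         const int n = 4;
         double *x = fftw_alloc_real(n), *y = fftw_alloc_real(n);

         fftw_plan fwd = fftw_plan_r2r_1d(n, x, y, FFTW_REDFT10, FFTW_ESTIMATE);
         fftw_plan inv = fftw_plan_r2r_1d(n, y, x, FFTW_REDFT01, FFTW_ESTIMATE);

         x[0] = 1; x[1] = 2; x[2] = 3; x[3] = 4;   /* "abcd" */

         fftw_execute(fwd);   /* y = DCT-II of x */
         fftw_execute(inv);   /* x = 2n times the original data */

         for (int i = 0; i < n; ++i)
             x[i] /= 2.0 * n;            /* undo the N = 2n scaling */

         fftw_destroy_plan(fwd);
         fftw_destroy_plan(inv);
         fftw_free(x);
         fftw_free(y);
         return 0;
     }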

   All of these transforms are invertible.  The inverse of R*DFT00 is
R*DFT00; of R*DFT10 is R*DFT01 and vice versa (these are often called
simply "the" DCT and IDCT, respectively); and of R*DFT11 is R*DFT11.
However, the transforms computed by FFTW are unnormalized, exactly like
the corresponding real and complex DFTs, so computing a transform
followed by its inverse yields the original array scaled by N, where N
is the _logical_ DFT size.  For REDFT00, N=2(n-1); for RODFT00,
N=2(n+1); otherwise, N=2n.

   Note that the boundary conditions of the transform output array are
given by the input boundary conditions of the inverse transform.  Thus,
the above transforms are all inequivalent in terms of input/output
boundary conditions, even neglecting the 0.5 shift difference.

   FFTW is most efficient when N is a product of small factors; note
that this _differs_ from the factorization of the physical size `n' for
REDFT00 and RODFT00!  There is another oddity: `n=1' REDFT00 transforms
correspond to N=0, and so are _not defined_ (the planner will return
`NULL').  Otherwise, any positive `n' is supported.

   For the precise mathematical definitions of these transforms as used
by FFTW, see *note What FFTW Really Computes::.  (For people accustomed
to the DCT/DST, FFTW's definitions have a coefficient of 2 in front of
the cos/sin functions so that they correspond precisely to an even/odd
DFT of size N.  Some authors also include additional multiplicative
factors of sqrt(2) for selected inputs and outputs; this makes the
transform orthogonal, but sacrifices the direct equivalence to a
symmetric DFT.)

Which type do you need?
.......................

Since the required flavor of even/odd DFT depends upon your problem,
you are the best judge of this choice, but we can make a few comments
on relative efficiency to help you in your selection.  In particular,
R*DFT01 and R*DFT10 tend to be slightly faster than R*DFT11 (especially
for odd sizes), while the R*DFT00 transforms are sometimes
significantly slower (especially for even sizes).(2)

   Thus, if only the boundary conditions on the transform inputs are
specified, we generally recommend R*DFT10 over R*DFT00 and R*DFT01 over
R*DFT11 (unless the half-sample shift or the self-inverse property is
significant for your problem).

   If performance is important to you and you are using only small sizes
(say n<200), e.g. for multi-dimensional transforms, then you might
consider generating hard-coded transforms of those sizes and types that
you are interested in (*note Generating your own code::).

   We are interested in hearing what types of symmetric transforms you
find most useful.

   ---------- Footnotes ----------

   (1) There are also type V-VIII transforms, which correspond to a
logical DFT of _odd_ size N, independent of whether the physical size
`n' is odd, but we do not support these variants.

   (2) R*DFT00 is sometimes slower in FFTW because we discovered that
the standard algorithm for computing this by a pre/post-processed real
DFT--the algorithm used in FFTPACK, Numerical Recipes, and other
sources for decades now--has serious numerical problems: it already
loses several decimal places of accuracy for 16k sizes.  There seem to
be only two alternatives in the literature that do not suffer
similarly: a recursive decomposition into smaller DCTs, which would
require a large set of codelets for efficiency and generality, or
sacrificing a factor of 2 in speed to use a real DFT of twice the size.
We currently employ the latter technique for general n, as well as a
limited form of the former method: a split-radix decomposition when n
is odd (N a multiple of 4).  For N containing many factors of 2, the
split-radix method seems to recover most of the speed of the standard
algorithm without the accuracy tradeoff.


File: fftw3.info,  Node: The Discrete Hartley Transform,  Prev: Real even/odd DFTs (cosine/sine transforms),  Up: More DFTs of Real Data

2.5.3 The Discrete Hartley Transform
------------------------------------

If you are planning to use the DHT because you've heard that it is
"faster" than the DFT (FFT), *stop here*.  The DHT is not faster than
the DFT.  That story is an old but enduring misconception that was
debunked in 1987.

   The discrete Hartley transform (DHT) is an invertible linear
transform closely related to the DFT.  In the DFT, one multiplies each
input by cos - i * sin (a complex exponential), whereas in the DHT each
input is multiplied by simply cos + sin.  Thus, the DHT transforms `n'
real numbers to `n' real numbers, and has the convenient property of
being its own inverse.  In FFTW, a DHT (of any positive `n') can be
specified by an r2r kind of `FFTW_DHT'.

   Like the DFT, in FFTW the DHT is unnormalized, so computing a DHT of
size `n' followed by another DHT of the same size will result in the
original array multiplied by `n'.

   The DHT was originally proposed as a more efficient alternative to
the DFT for real data, but it was subsequently shown that a specialized
DFT (such as FFTW's r2hc or r2c transforms) could be just as fast.  In
FFTW, the DHT is actually computed by post-processing an r2hc
transform, so there is ordinarily no reason to prefer it from a
performance perspective.(1)  However, we have heard rumors that the DHT
might be the most appropriate transform in its own right for certain
applications, and we would be very interested to hear from anyone who
finds it useful.

   If `FFTW_DHT' is specified for multiple dimensions of a
multi-dimensional transform, FFTW computes the separable product of 1d
DHTs along each dimension.  Unfortunately, this is not quite the same
thing as a true multi-dimensional DHT; you can compute the latter, if
necessary, with at most `rank-1' post-processing passes [see e.g. H.
Hao and R. N. Bracewell, Proc. IEEE 75, 264-266 (1987)].
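
   A minimal sketch of a 1d DHT and its self-inverse property follows
(illustrative only; `n = 8' and the input are arbitrary):

     #include <fftw3.h>

     int main(void)
     {
         const int n = 8;
         double *x = fftw_alloc_real(n), *y = fftw_alloc_real(n);

         fftw_plan fwd = fftw_plan_r2r_1d(n, x, y, FFTW_DHT, FFTW_ESTIMATE);
         fftw_plan inv = fftw_plan_r2r_1d(n, y, x, FFTW_DHT, FFTW_ESTIMATE);

         for (int i = 0; i < n; ++i)
             x[i] = i;

         fftw_execute(fwd);   /* y = DHT(x) */
         fftw_execute(inv);   /* x = DHT(y) = n * original x */

         for (int i = 0; i < n; ++i)
             x[i] /= n;       /* undo the unnormalized round trip */

         fftw_destroy_plan(fwd);
         fftw_destroy_plan(inv);
         fftw_free(x);
         fftw_free(y);
         return 0;
     }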

   For the precise mathematical definition of the DHT as used by FFTW,
see *note What FFTW Really Computes::.

   ---------- Footnotes ----------

   (1) We provide the DHT mainly as a byproduct of some internal
algorithms.  FFTW computes a real input/output DFT of _prime_ size by
re-expressing it as a DHT plus post/pre-processing and then using
Rader's prime-DFT algorithm adapted to the DHT.


File: fftw3.info,  Node: Other Important Topics,  Next: FFTW Reference,  Prev: Tutorial,  Up: Top

3 Other Important Topics
************************

* Menu:

* SIMD alignment and fftw_malloc::
* Multi-dimensional Array Format::
* Words of Wisdom-Saving Plans::
* Caveats in Using Wisdom::


File: fftw3.info,  Node: SIMD alignment and fftw_malloc,  Next: Multi-dimensional Array Format,  Prev: Other Important Topics,  Up: Other Important Topics

3.1 SIMD alignment and fftw_malloc
==================================

SIMD, which stands for "Single Instruction Multiple Data," is a set of
special operations supported by some processors to perform a single
operation on several numbers (usually 2 or 4) simultaneously.  SIMD
floating-point instructions are available on several popular CPUs:
SSE/SSE2/AVX on recent x86/x86-64 processors, AltiVec (single precision)
on some PowerPCs (Apple G4 and higher), NEON on some ARM models, and
MIPS Paired Single (currently only in FFTW 3.2.x).  FFTW can be
compiled to support the SIMD instructions on any of these systems.

   A program linking to an FFTW library compiled with SIMD support can
obtain a nonnegligible speedup for most complex and r2c/c2r transforms.
In order to obtain this speedup, however, the arrays of complex (or
real) data passed to FFTW must be specially aligned in memory
(typically 16-byte aligned), and often this alignment is more stringent
than that provided by the usual `malloc' (etc.) allocation routines.

   In order to guarantee proper alignment for SIMD, therefore, in case
your program is ever linked against a SIMD-using FFTW, we recommend
allocating your transform data with `fftw_malloc' and de-allocating it
with `fftw_free'.  These have exactly the same interface and behavior as
`malloc'/`free', except that for a SIMD FFTW they ensure that the
returned pointer has the necessary alignment (by calling `memalign' or
its equivalent on your OS).

   You are not _required_ to use `fftw_malloc'.  You can allocate your
data in any way that you like, from `malloc' to `new' (in C++) to a
fixed-size array declaration.  If the array happens not to be properly
aligned, FFTW will not use the SIMD extensions.

   Since `fftw_malloc' only ever needs to be used for real and complex
arrays, we provide two convenient wrapper routines `fftw_alloc_real(N)'
and `fftw_alloc_complex(N)' that are equivalent to
`(double*)fftw_malloc(sizeof(double) * N)' and
`(fftw_complex*)fftw_malloc(sizeof(fftw_complex) * N)', respectively
(or their equivalents in other precisions).
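
   As a small sketch (the size `n' is arbitrary), each pair of
allocations below is interchangeable; both forms return storage
aligned for SIMD use:

     #include <fftw3.h>

     int main(void)
     {
         const int n = 1024;

         /* Wrapper vs. explicit fftw_malloc cast: equivalent allocations. */
         double *a = fftw_alloc_real(n);
         double *b = (double *) fftw_malloc(sizeof(double) * n);

         fftw_complex *c = fftw_alloc_complex(n);
         fftw_complex *d = (fftw_complex *) fftw_malloc(sizeof(fftw_complex) * n);

         /* ... create plans and transform as usual ... */

         fftw_free(a); fftw_free(b);
         fftw_free(c); fftw_free(d);
         return 0;
     }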


File: fftw3.info,  Node: Multi-dimensional Array Format,  Next: Words of Wisdom-Saving Plans,  Prev: SIMD alignment and fftw_malloc,  Up: Other Important Topics

3.2 Multi-dimensional Array Format
==================================

This section describes the format in which multi-dimensional arrays are
stored in FFTW.  We felt that a detailed discussion of this topic was
necessary, since several different formats are common and the subject
is often a source of confusion.

* Menu:

* Row-major Format::
* Column-major Format::
* Fixed-size Arrays in C::
* Dynamic Arrays in C::
* Dynamic Arrays in C-The Wrong Way::


File: fftw3.info,  Node: Row-major Format,  Next: Column-major Format,  Prev: Multi-dimensional Array Format,  Up: Multi-dimensional Array Format

3.2.1 Row-major Format
----------------------

The multi-dimensional arrays passed to `fftw_plan_dft' etcetera are
expected to be stored as a single contiguous block in "row-major" order
(sometimes called "C order").  Basically, this means that as you step
through adjacent memory locations, the first dimension's index varies
most slowly and the last dimension's index varies most quickly.

   To be more explicit, let us consider an array of rank d whose
dimensions are n[0] x n[1] x n[2] x ... x n[d-1].  Now, we specify a
location in the array by a sequence of d (zero-based) indices, one for
each dimension: (i[0], i[1], ..., i[d-1]).  If the array is stored in
row-major order, then this element is located at the position i[d-1] +
n[d-1] * (i[d-2] + n[d-2] * (... + n[1] * i[0])).

   Note that, for the ordinary complex DFT, each element of the array
must be of type `fftw_complex'; i.e. a (real, imaginary) pair of
(double-precision) numbers.

   In the advanced FFTW interface, the physical dimensions n from which
the indices are computed can be different from (larger than) the
logical dimensions of the transform to be computed, in order to
transform a subset of a larger array.  Note also that, in the advanced
interface, the expression above is multiplied by a "stride" to get the
actual array index--this is useful in situations where each element of
the multi-dimensional array is actually a data structure (or another
array), and you just want to transform a single field.  In the basic
interface, however, the stride is 1.
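
   The formula above can be coded directly; the helper below is an
illustrative sketch (not an FFTW routine) that returns the row-major
offset of element (i[0], ..., i[d-1]) in an array of dimensions
n[0] x ... x n[d-1]:

     #include <stddef.h>

     /* Row-major offset: i[d-1] + n[d-1]*(i[d-2] + n[d-2]*(... + n[1]*i[0])). */
     static size_t row_major_index(int d, const int *n, const int *i)
     {
         size_t idx = 0;
         for (int k = 0; k < d; ++k)
             idx = idx * n[k] + i[k];   /* Horner-style accumulation */
         return idx;
     }

   For a rank-3 array of dimensions n0 x n1 x n2, this reduces to the
familiar expression `k + n2 * (j + n1 * i)' for element (i, j, k).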


File: fftw3.info,  Node: Column-major Format,  Next: Fixed-size Arrays in C,  Prev: Row-major Format,  Up: Multi-dimensional Array Format

3.2.2 Column-major Format
-------------------------

Readers from the Fortran world are used to arrays stored in
"column-major" order (sometimes called "Fortran order").  This is
essentially the exact opposite of row-major order in that, here, the
_first_ dimension's index varies most quickly.

   If you have an array stored in column-major order and wish to
transform it using FFTW, it is quite easy to do.  When creating the
plan, simply pass the dimensions of the array to the planner in
_reverse order_.  For example, if your array is a rank three `N x M x
L' matrix in column-major order, you should pass the dimensions of the
array as if it were an `L x M x N' matrix (which it is, from the
perspective of FFTW).  This is done for you _automatically_ by the FFTW
legacy-Fortran interface (*note Calling FFTW from Legacy Fortran::),
but you must do it manually with the modern Fortran interface (*note
Reversing array dimensions::).


File: fftw3.info,  Node: Fixed-size Arrays in C,  Next: Dynamic Arrays in C,  Prev: Column-major Format,  Up: Multi-dimensional Array Format

3.2.3 Fixed-size Arrays in C
----------------------------

A multi-dimensional array whose size is declared at compile time in C
is _already_ in row-major order.  You don't have to do anything special
to transform it.  For example:

     {
         fftw_complex data[N0][N1][N2];
         fftw_plan plan;
         ...
         plan = fftw_plan_dft_3d(N0, N1, N2, &data[0][0][0], &data[0][0][0],
                                 FFTW_FORWARD, FFTW_ESTIMATE);
         ...
     }

   This will plan a 3d in-place transform of size `N0 x N1 x N2'.
Notice how we took the address of the zero-th element to pass to the
planner (we could also have used a typecast).

   However, we tend to _discourage_ users from declaring their arrays
in this way, for two reasons.  First, this allocates the array on the
stack ("automatic" storage), which has a very limited size on most
operating systems (declaring an array with more than a few thousand
elements will often cause a crash).  (You can get around this
limitation on many systems by declaring the array as `static' and/or
global, but that has its own drawbacks.)  Second, it may not optimally
align the array for use with a SIMD FFTW (*note SIMD alignment and
fftw_malloc::).  Instead, we recommend using `fftw_malloc', as
described below.


File: fftw3.info,  Node: Dynamic Arrays in C,  Next: Dynamic Arrays in C-The Wrong Way,  Prev: Fixed-size Arrays in C,  Up: Multi-dimensional Array Format

3.2.4 Dynamic Arrays in C
-------------------------

We recommend allocating most arrays dynamically, with `fftw_malloc'.
This isn't too hard to do, although it is not as straightforward for
multi-dimensional arrays as it is for one-dimensional arrays.

   Creating the array is simple: using a dynamic-allocation routine like
`fftw_malloc', allocate an array big enough to store N `fftw_complex'
values (for a complex DFT), where N is the product of the sizes of the
array dimensions (i.e. the total number of complex values in the
array).  For example, here is code to allocate a 5 x 12 x 27 rank-3
array:

     fftw_complex *an_array;
     an_array = (fftw_complex*) fftw_malloc(5*12*27 * sizeof(fftw_complex));

   Accessing the array elements, however, is more tricky--you can't
simply use multiple applications of the `[]' operator like you could
for fixed-size arrays.
Instead, you have to explicitly compute the Chris@10: offset into the array using the formula given earlier for row-major Chris@10: arrays. For example, to reference the (i,j,k)-th element of the array Chris@10: allocated above, you would use the expression `an_array[k + 27 * (j + Chris@10: 12 * i)]'. Chris@10: Chris@10: This pain can be alleviated somewhat by defining appropriate macros, Chris@10: or, in C++, creating a class and overloading the `()' operator. The Chris@10: recent C99 standard provides a way to reinterpret the dynamic array as Chris@10: a "variable-length" multi-dimensional array amenable to `[]', but this Chris@10: feature is not yet widely supported by compilers. Chris@10: Chris@10:  Chris@10: File: fftw3.info, Node: Dynamic Arrays in C-The Wrong Way, Prev: Dynamic Arrays in C, Up: Multi-dimensional Array Format Chris@10: Chris@10: 3.2.5 Dynamic Arrays in C--The Wrong Way Chris@10: ---------------------------------------- Chris@10: Chris@10: A different method for allocating multi-dimensional arrays in C is Chris@10: often suggested that is incompatible with FFTW: _using it will cause Chris@10: FFTW to die a painful death_. We discuss the technique here, however, Chris@10: because it is so commonly known and used. This method is to create Chris@10: arrays of pointers of arrays of pointers of ...etcetera. For example, Chris@10: the analogue in this method to the example above is: Chris@10: Chris@10: int i,j; Chris@10: fftw_complex ***a_bad_array; /* another way to make a 5x12x27 array */ Chris@10: Chris@10: a_bad_array = (fftw_complex ***) malloc(5 * sizeof(fftw_complex **)); Chris@10: for (i = 0; i < 5; ++i) { Chris@10: a_bad_array[i] = Chris@10: (fftw_complex **) malloc(12 * sizeof(fftw_complex *)); Chris@10: for (j = 0; j < 12; ++j) Chris@10: a_bad_array[i][j] = Chris@10: (fftw_complex *) malloc(27 * sizeof(fftw_complex)); Chris@10: } Chris@10: Chris@10: As you can see, this sort of array is inconvenient to allocate (and Chris@10: deallocate). On the other hand, it has the advantage that the Chris@10: (i,j,k)-th element can be referenced simply by `a_bad_array[i][j][k]'. Chris@10: Chris@10: If you like this technique and want to maximize convenience in Chris@10: accessing the array, but still want to pass the array to FFTW, you can Chris@10: use a hybrid method. Allocate the array as one contiguous block, but Chris@10: also declare an array of arrays of pointers that point to appropriate Chris@10: places in the block. That sort of trick is beyond the scope of this Chris@10: documentation; for more information on multi-dimensional arrays in C, Chris@10: see the `comp.lang.c' FAQ (http://c-faq.com/aryptr/dynmuldimary.html). Chris@10: Chris@10:  Chris@10: File: fftw3.info, Node: Words of Wisdom-Saving Plans, Next: Caveats in Using Wisdom, Prev: Multi-dimensional Array Format, Up: Other Important Topics Chris@10: Chris@10: 3.3 Words of Wisdom--Saving Plans Chris@10: ================================= Chris@10: Chris@10: FFTW implements a method for saving plans to disk and restoring them. Chris@10: In fact, what FFTW does is more general than just saving and loading Chris@10: plans. The mechanism is called "wisdom". Here, we describe this Chris@10: feature at a high level. *Note FFTW Reference::, for a less casual but Chris@10: more complete discussion of how to use wisdom in FFTW. 
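(As a brief aside to the dynamic-array discussion above (*note Dynamic
Arrays in C::): the macro approach mentioned there might look like the
following sketch.  It is not part of FFTW; the macro name and the
5 x 12 x 27 dimensions are our own, chosen to match the earlier example.)

     #include <fftw3.h>

     /* Index a 5 x 12 x 27 row-major array of fftw_complex allocated as
        one contiguous block, using the offset k + 27*(j + 12*i) derived
        earlier for row-major storage. */
     #define AN_ARRAY(a, i, j, k)  ((a)[(k) + 27 * ((j) + 12 * (i))])

     int main(void)
     {
          fftw_complex *an_array =
               (fftw_complex *) fftw_malloc(5*12*27 * sizeof(fftw_complex));

          AN_ARRAY(an_array, 4, 11, 26)[0] = 1.0;  /* real part of (4,11,26) */
          AN_ARRAY(an_array, 4, 11, 26)[1] = 0.0;  /* imaginary part */

          fftw_free(an_array);
          return 0;
     }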
Chris@10: Chris@10: Plans created with the `FFTW_MEASURE', `FFTW_PATIENT', or Chris@10: `FFTW_EXHAUSTIVE' options produce near-optimal FFT performance, but may Chris@10: require a long time to compute because FFTW must measure the runtime of Chris@10: many possible plans and select the best one. This setup is designed Chris@10: for the situations where so many transforms of the same size must be Chris@10: computed that the start-up time is irrelevant. For short Chris@10: initialization times, but slower transforms, we have provided Chris@10: `FFTW_ESTIMATE'. The `wisdom' mechanism is a way to get the best of Chris@10: both worlds: you compute a good plan once, save it to disk, and later Chris@10: reload it as many times as necessary. The wisdom mechanism can Chris@10: actually save and reload many plans at once, not just one. Chris@10: Chris@10: Whenever you create a plan, the FFTW planner accumulates wisdom, Chris@10: which is information sufficient to reconstruct the plan. After Chris@10: planning, you can save this information to disk by means of the Chris@10: function: Chris@10: int fftw_export_wisdom_to_filename(const char *filename); Chris@10: (This function returns non-zero on success.) Chris@10: Chris@10: The next time you run the program, you can restore the wisdom with Chris@10: `fftw_import_wisdom_from_filename' (which also returns non-zero on Chris@10: success), and then recreate the plan using the same flags as before. Chris@10: int fftw_import_wisdom_from_filename(const char *filename); Chris@10: Chris@10: Wisdom is automatically used for any size to which it is applicable, Chris@10: as long as the planner flags are not more "patient" than those with Chris@10: which the wisdom was created. For example, wisdom created with Chris@10: `FFTW_MEASURE' can be used if you later plan with `FFTW_ESTIMATE' or Chris@10: `FFTW_MEASURE', but not with `FFTW_PATIENT'. Chris@10: Chris@10: The `wisdom' is cumulative, and is stored in a global, private data Chris@10: structure managed internally by FFTW. The storage space required is Chris@10: minimal, proportional to the logarithm of the sizes the wisdom was Chris@10: generated from. If memory usage is a concern, however, the wisdom can Chris@10: be forgotten and its associated memory freed by calling: Chris@10: void fftw_forget_wisdom(void); Chris@10: Chris@10: Wisdom can be exported to a file, a string, or any other medium. Chris@10: For details, see *note Wisdom::. Chris@10: Chris@10:  Chris@10: File: fftw3.info, Node: Caveats in Using Wisdom, Prev: Words of Wisdom-Saving Plans, Up: Other Important Topics Chris@10: Chris@10: 3.4 Caveats in Using Wisdom Chris@10: =========================== Chris@10: Chris@10: For in much wisdom is much grief, and he that increaseth knowledge Chris@10: increaseth sorrow. [Ecclesiastes 1:18] Chris@10: Chris@10: There are pitfalls to using wisdom, in that it can negate FFTW's Chris@10: ability to adapt to changing hardware and other conditions. For Chris@10: example, it would be perfectly possible to export wisdom from a program Chris@10: running on one processor and import it into a program running on Chris@10: another processor. Doing so, however, would mean that the second Chris@10: program would use plans optimized for the first processor, instead of Chris@10: the one it is running on. Chris@10: Chris@10: It should be safe to reuse wisdom as long as the hardware and program Chris@10: binaries remain unchanged. 
(Actually, the optimal plan may change even between runs of the same
binary on identical hardware, due to differences in the virtual memory
environment, etcetera.  Users seriously interested in performance
should worry about this problem, too.)  It is likely that, if the same
wisdom is used for two different program binaries, even running on the
same machine, the plans may be sub-optimal because of differing code
alignments.  It is therefore wise to recreate wisdom every time an
application is recompiled.  The more the underlying hardware and
software changes between the creation of wisdom and its use, the
greater the risk of sub-optimal plans.

Nevertheless, if the choice is between using `FFTW_ESTIMATE' or using
possibly-suboptimal wisdom (created on the same machine, but for a
different binary), the wisdom is likely to be better.  For this reason,
we provide a function to import wisdom from a standard system-wide
location (`/etc/fftw/wisdom' on Unix):

     int fftw_import_system_wisdom(void);

FFTW also provides a standalone program, `fftw-wisdom' (described by
its own `man' page on Unix) with which users can create wisdom, e.g.
for a canonical set of sizes to store in the system wisdom file.
*Note Wisdom Utilities::.

File: fftw3.info, Node: FFTW Reference, Next: Multi-threaded FFTW, Prev: Other Important Topics, Up: Top

4 FFTW Reference
****************

This chapter provides a complete reference for all sequential (i.e.,
one-processor) FFTW functions.  Parallel transforms are described in
later chapters.

* Menu:

* Data Types and Files::
* Using Plans::
* Basic Interface::
* Advanced Interface::
* Guru Interface::
* New-array Execute Functions::
* Wisdom::
* What FFTW Really Computes::

File: fftw3.info, Node: Data Types and Files, Next: Using Plans, Prev: FFTW Reference, Up: FFTW Reference

4.1 Data Types and Files
========================

All programs using FFTW should include its header file:

     #include <fftw3.h>

You must also link to the FFTW library.  On Unix, this means adding
`-lfftw3 -lm' at the _end_ of the link command.

* Menu:

* Complex numbers::
* Precision::
* Memory Allocation::

File: fftw3.info, Node: Complex numbers, Next: Precision, Prev: Data Types and Files, Up: Data Types and Files

4.1.1 Complex numbers
---------------------

The default FFTW interface uses `double' precision for all
floating-point numbers, and defines a `fftw_complex' type to hold
complex numbers as:

     typedef double fftw_complex[2];

Here, the `[0]' element holds the real part and the `[1]' element
holds the imaginary part.

Alternatively, if you have a C compiler (such as `gcc') that supports
the C99 revision of the ANSI C standard, you can use C's new native
complex type (which is binary-compatible with the typedef above).
In particular, if you `#include <complex.h>' _before_ `<fftw3.h>', then
`fftw_complex' is defined to be the native complex type and you can
manipulate it with ordinary arithmetic (e.g. `x = y * (3+4*I)', where
`x' and `y' are `fftw_complex' and `I' is the standard symbol for the
imaginary unit).

C++ has its own `complex' template class, defined in the standard
`<complex>' header file.  Reportedly, the C++ standards committee has
recently agreed to mandate that the storage format used for this type
be binary-compatible with the C99 type, i.e. an array `T[2]' with
consecutive real `[0]' and imaginary `[1]' parts.  (See report
WG21/N1388, `http://www.open-std.org/jtc1/sc22/WG21/docs/papers/2002/n1388.pdf'.)
Although not part of the official standard as of this writing, the
proposal stated that: "This solution has been tested with all current
major implementations of the standard library and shown to be
working."  To the extent that this is true, if you have a variable
`complex<double> *x', you can pass it directly to FFTW via
`reinterpret_cast<fftw_complex*>(x)'.

File: fftw3.info, Node: Precision, Next: Memory Allocation, Prev: Complex numbers, Up: Data Types and Files

4.1.2 Precision
---------------

You can install single and long-double precision versions of FFTW,
which replace `double' with `float' and `long double', respectively
(*note Installation and Customization::).  To use these interfaces, you:

* Link to the single/long-double libraries; on Unix, `-lfftw3f' or
  `-lfftw3l' instead of (or in addition to) `-lfftw3'.  (You can
  link to the different-precision libraries simultaneously.)

* Include the _same_ `<fftw3.h>' header file.

* Replace all lowercase instances of `fftw_' with `fftwf_' or
  `fftwl_' for single or long-double precision, respectively.
  (`fftw_complex' becomes `fftwf_complex', `fftw_execute' becomes
  `fftwf_execute', etcetera.)

* Uppercase names, i.e. names beginning with `FFTW_', remain the
  same.

* Replace `double' with `float' or `long double' for subroutine
  parameters.

Depending upon your compiler and/or hardware, `long double' may not
be any more precise than `double' (or may not be supported at all,
although it is standard in C99).

We also support using the nonstandard `__float128'
quadruple-precision type provided by recent versions of `gcc' on 32-
and 64-bit x86 hardware (*note Installation and Customization::).  To
use this type, link with `-lfftw3q -lquadmath -lm' (the `libquadmath'
library provided by `gcc' is needed for quadruple-precision
trigonometric functions) and use `fftwq_' identifiers.
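As a concrete illustration of these rules, here is a minimal sketch
(ours, not part of the FFTW distribution) of a single-precision
transform; compile and link with `-lfftw3f -lm':

     #include <fftw3.h>

     int main(void)
     {
          int n = 1024;
          /* Same calls as in double precision, with every lowercase
             `fftw_' prefix replaced by `fftwf_'. */
          fftwf_complex *in =
               (fftwf_complex *) fftwf_malloc(sizeof(fftwf_complex) * n);
          fftwf_complex *out =
               (fftwf_complex *) fftwf_malloc(sizeof(fftwf_complex) * n);
          fftwf_plan p = fftwf_plan_dft_1d(n, in, out,
                                           FFTW_FORWARD, FFTW_ESTIMATE);

          for (int i = 0; i < n; ++i) {
               in[i][0] = (float) i;  /* real part */
               in[i][1] = 0.0f;       /* imaginary part */
          }
          fftwf_execute(p);   /* uppercase FFTW_ names are unchanged */

          fftwf_destroy_plan(p);
          fftwf_free(in);
          fftwf_free(out);
          return 0;
     }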
Chris@10: Chris@10:  Chris@10: File: fftw3.info, Node: Memory Allocation, Prev: Precision, Up: Data Types and Files Chris@10: Chris@10: 4.1.3 Memory Allocation Chris@10: ----------------------- Chris@10: Chris@10: void *fftw_malloc(size_t n); Chris@10: void fftw_free(void *p); Chris@10: Chris@10: These are functions that behave identically to `malloc' and `free', Chris@10: except that they guarantee that the returned pointer obeys any special Chris@10: alignment restrictions imposed by any algorithm in FFTW (e.g. for SIMD Chris@10: acceleration). *Note SIMD alignment and fftw_malloc::. Chris@10: Chris@10: Data allocated by `fftw_malloc' _must_ be deallocated by `fftw_free' Chris@10: and not by the ordinary `free'. Chris@10: Chris@10: These routines simply call through to your operating system's Chris@10: `malloc' or, if necessary, its aligned equivalent (e.g. `memalign'), so Chris@10: you normally need not worry about any significant time or space Chris@10: overhead. You are _not required_ to use them to allocate your data, Chris@10: but we strongly recommend it. Chris@10: Chris@10: Note: in C++, just as with ordinary `malloc', you must typecast the Chris@10: output of `fftw_malloc' to whatever pointer type you are allocating. Chris@10: Chris@10: We also provide the following two convenience functions to allocate Chris@10: real and complex arrays with `n' elements, which are equivalent to Chris@10: `(double *) fftw_malloc(sizeof(double) * n)' and `(fftw_complex *) Chris@10: fftw_malloc(sizeof(fftw_complex) * n)', respectively: Chris@10: Chris@10: double *fftw_alloc_real(size_t n); Chris@10: fftw_complex *fftw_alloc_complex(size_t n); Chris@10: Chris@10: The equivalent functions in other precisions allocate arrays of `n' Chris@10: elements in that precision. e.g. `fftwf_alloc_real(n)' is equivalent Chris@10: to `(float *) fftwf_malloc(sizeof(float) * n)'. Chris@10: Chris@10:  Chris@10: File: fftw3.info, Node: Using Plans, Next: Basic Interface, Prev: Data Types and Files, Up: FFTW Reference Chris@10: Chris@10: 4.2 Using Plans Chris@10: =============== Chris@10: Chris@10: Plans for all transform types in FFTW are stored as type `fftw_plan' Chris@10: (an opaque pointer type), and are created by one of the various Chris@10: planning routines described in the following sections. An `fftw_plan' Chris@10: contains all information necessary to compute the transform, including Chris@10: the pointers to the input and output arrays. Chris@10: Chris@10: void fftw_execute(const fftw_plan plan); Chris@10: Chris@10: This executes the `plan', to compute the corresponding transform on Chris@10: the arrays for which it was planned (which must still exist). The plan Chris@10: is not modified, and `fftw_execute' can be called as many times as Chris@10: desired. Chris@10: Chris@10: To apply a given plan to a different array, you can use the Chris@10: new-array execute interface. *Note New-array Execute Functions::. Chris@10: Chris@10: `fftw_execute' (and equivalents) is the only function in FFTW Chris@10: guaranteed to be thread-safe; see *note Thread safety::. Chris@10: Chris@10: This function: Chris@10: void fftw_destroy_plan(fftw_plan plan); Chris@10: deallocates the `plan' and all its associated data. Chris@10: Chris@10: FFTW's planner saves some other persistent data, such as the Chris@10: accumulated wisdom and a list of algorithms available in the current Chris@10: configuration. 
If you want to deallocate all of that and reset FFTW to the pristine
state it was in when you started your program, you can call:

     void fftw_cleanup(void);

After calling `fftw_cleanup', all existing plans become undefined, and
you should not attempt to execute them nor to destroy them.  You can
however create and execute/destroy new plans, in which case FFTW
starts accumulating wisdom information again.

`fftw_cleanup' does not deallocate your plans, however.  To prevent
memory leaks, you must still call `fftw_destroy_plan' before executing
`fftw_cleanup'.

Occasionally, it may be useful to know FFTW's internal "cost" metric
that it uses to compare plans to one another; this cost is proportional
to the execution time of the plan, in undocumented units, if the plan
was created with the `FFTW_MEASURE' or other timing-based options, or
alternatively is a heuristic cost function for `FFTW_ESTIMATE' plans.
(The cost values of measured and estimated plans are not comparable,
being in different units.  Also, costs from different FFTW versions or
the same version compiled differently may not be in the same units.
Plans created from wisdom have a cost of 0 since no timing measurement
is performed for them.  Finally, certain problems for which only one
top-level algorithm was possible may have required no measurements of
the cost of the whole plan, in which case `fftw_cost' will also return
0.)  The cost metric for a given plan is returned by:

     double fftw_cost(const fftw_plan plan);

The following two routines are provided purely for academic purposes
(that is, for entertainment).

     void fftw_flops(const fftw_plan plan,
                     double *add, double *mul, double *fma);

Given a `plan', set `add', `mul', and `fma' to an exact count of the
number of floating-point additions, multiplications, and fused
multiply-add operations involved in the plan's execution.  The total
number of floating-point operations (flops) is `add + mul + 2*fma', or
`add + mul + fma' if the hardware supports fused multiply-add
instructions (although the number of FMA operations is only approximate
because of compiler voodoo).  (The number of operations should be an
integer, but we use `double' to avoid overflowing `int' for large
transforms; the arguments are of type `double' even for single and
long-double precision versions of FFTW.)

     void fftw_fprint_plan(const fftw_plan plan, FILE *output_file);
     void fftw_print_plan(const fftw_plan plan);

This outputs a "nerd-readable" representation of the `plan' to the
given file or to `stdout', respectively.
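Putting the above together, here is a minimal sketch (ours, not from
the FFTW distribution) of a typical plan lifecycle, using the
basic-interface planner described in the next section:

     #include <fftw3.h>

     int main(void)
     {
          int n = 256;
          fftw_complex *in  = fftw_alloc_complex(n);
          fftw_complex *out = fftw_alloc_complex(n);

          /* Plan first: with FFTW_MEASURE the planner may overwrite the
             arrays, so initialize the input only after planning. */
          fftw_plan p = fftw_plan_dft_1d(n, in, out,
                                         FFTW_FORWARD, FFTW_MEASURE);
          for (int i = 0; i < n; ++i) {
               in[i][0] = 1.0;  /* real part */
               in[i][1] = 0.0;  /* imaginary part */
          }

          fftw_execute(p);          /* may be repeated as often as needed */

          fftw_destroy_plan(p);     /* destroy plans before fftw_cleanup */
          fftw_free(in);
          fftw_free(out);
          fftw_cleanup();           /* optionally reset FFTW's internal state */
          return 0;
     }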
Chris@10: Chris@10:  Chris@10: File: fftw3.info, Node: Basic Interface, Next: Advanced Interface, Prev: Using Plans, Up: FFTW Reference Chris@10: Chris@10: 4.3 Basic Interface Chris@10: =================== Chris@10: Chris@10: Recall that the FFTW API is divided into three parts(1): the "basic Chris@10: interface" computes a single transform of contiguous data, the "advanced Chris@10: interface" computes transforms of multiple or strided arrays, and the Chris@10: "guru interface" supports the most general data layouts, Chris@10: multiplicities, and strides. This section describes the the basic Chris@10: interface, which we expect to satisfy the needs of most users. Chris@10: Chris@10: * Menu: Chris@10: Chris@10: * Complex DFTs:: Chris@10: * Planner Flags:: Chris@10: * Real-data DFTs:: Chris@10: * Real-data DFT Array Format:: Chris@10: * Real-to-Real Transforms:: Chris@10: * Real-to-Real Transform Kinds:: Chris@10: Chris@10: ---------- Footnotes ---------- Chris@10: Chris@10: (1) Gallia est omnis divisa in partes tres (Julius Caesar). Chris@10: Chris@10:  Chris@10: File: fftw3.info, Node: Complex DFTs, Next: Planner Flags, Prev: Basic Interface, Up: Basic Interface Chris@10: Chris@10: 4.3.1 Complex DFTs Chris@10: ------------------ Chris@10: Chris@10: fftw_plan fftw_plan_dft_1d(int n0, Chris@10: fftw_complex *in, fftw_complex *out, Chris@10: int sign, unsigned flags); Chris@10: fftw_plan fftw_plan_dft_2d(int n0, int n1, Chris@10: fftw_complex *in, fftw_complex *out, Chris@10: int sign, unsigned flags); Chris@10: fftw_plan fftw_plan_dft_3d(int n0, int n1, int n2, Chris@10: fftw_complex *in, fftw_complex *out, Chris@10: int sign, unsigned flags); Chris@10: fftw_plan fftw_plan_dft(int rank, const int *n, Chris@10: fftw_complex *in, fftw_complex *out, Chris@10: int sign, unsigned flags); Chris@10: Chris@10: Plan a complex input/output discrete Fourier transform (DFT) in zero Chris@10: or more dimensions, returning an `fftw_plan' (*note Using Plans::). Chris@10: Chris@10: Once you have created a plan for a certain transform type and Chris@10: parameters, then creating another plan of the same type and parameters, Chris@10: but for different arrays, is fast and shares constant data with the Chris@10: first plan (if it still exists). Chris@10: Chris@10: The planner returns `NULL' if the plan cannot be created. In the Chris@10: standard FFTW distribution, the basic interface is guaranteed to return Chris@10: a non-`NULL' plan. A plan may be `NULL', however, if you are using a Chris@10: customized FFTW configuration supporting a restricted set of transforms. Chris@10: Chris@10: Arguments Chris@10: ......... Chris@10: Chris@10: * `rank' is the rank of the transform (it should be the size of the Chris@10: array `*n'), and can be any non-negative integer. (*Note Complex Chris@10: Multi-Dimensional DFTs::, for the definition of "rank".) The Chris@10: `_1d', `_2d', and `_3d' planners correspond to a `rank' of `1', Chris@10: `2', and `3', respectively. The rank may be zero, which is Chris@10: equivalent to a rank-1 transform of size 1, i.e. a copy of one Chris@10: number from input to output. Chris@10: Chris@10: * `n0', `n1', `n2', or `n[0..rank-1]' (as appropriate for each Chris@10: routine) specify the size of the transform dimensions. They can Chris@10: be any positive integer. Chris@10: Chris@10: - Multi-dimensional arrays are stored in row-major order with Chris@10: dimensions: `n0' x `n1'; or `n0' x `n1' x `n2'; or `n[0]' x Chris@10: `n[1]' x ... x `n[rank-1]'. 
*Note Multi-dimensional Array Chris@10: Format::. Chris@10: Chris@10: - FFTW is best at handling sizes of the form 2^a 3^b 5^c 7^d Chris@10: 11^e 13^f, where e+f is either 0 or 1, and the other exponents Chris@10: are arbitrary. Other sizes are computed by means of a slow, Chris@10: general-purpose algorithm (which nevertheless retains O(n log Chris@10: n) performance even for prime sizes). It is possible to Chris@10: customize FFTW for different array sizes; see *note Chris@10: Installation and Customization::. Transforms whose sizes are Chris@10: powers of 2 are especially fast. Chris@10: Chris@10: * `in' and `out' point to the input and output arrays of the Chris@10: transform, which may be the same (yielding an in-place transform). These Chris@10: arrays are overwritten during planning, unless `FFTW_ESTIMATE' is Chris@10: used in the flags. (The arrays need not be initialized, but they Chris@10: must be allocated.) Chris@10: Chris@10: If `in == out', the transform is "in-place" and the input array is Chris@10: overwritten. If `in != out', the two arrays must not overlap (but Chris@10: FFTW does not check for this condition). Chris@10: Chris@10: * `sign' is the sign of the exponent in the formula that defines the Chris@10: Fourier transform. It can be -1 (= `FFTW_FORWARD') or +1 (= Chris@10: `FFTW_BACKWARD'). Chris@10: Chris@10: * `flags' is a bitwise OR (`|') of zero or more planner flags, as Chris@10: defined in *note Planner Flags::. Chris@10: Chris@10: Chris@10: FFTW computes an unnormalized transform: computing a forward Chris@10: followed by a backward transform (or vice versa) will result in the Chris@10: original data multiplied by the size of the transform (the product of Chris@10: the dimensions). For more information, see *note What FFTW Really Chris@10: Computes::. Chris@10: Chris@10:  Chris@10: File: fftw3.info, Node: Planner Flags, Next: Real-data DFTs, Prev: Complex DFTs, Up: Basic Interface Chris@10: Chris@10: 4.3.2 Planner Flags Chris@10: ------------------- Chris@10: Chris@10: All of the planner routines in FFTW accept an integer `flags' argument, Chris@10: which is a bitwise OR (`|') of zero or more of the flag constants Chris@10: defined below. These flags control the rigor (and time) of the Chris@10: planning process, and can also impose (or lift) restrictions on the Chris@10: type of transform algorithm that is employed. Chris@10: Chris@10: _Important:_ the planner overwrites the input array during planning Chris@10: unless a saved plan (*note Wisdom::) is available for that problem, so Chris@10: you should initialize your input data after creating the plan. The Chris@10: only exceptions to this are the `FFTW_ESTIMATE' and `FFTW_WISDOM_ONLY' Chris@10: flags, as mentioned below. Chris@10: Chris@10: In all cases, if wisdom is available for the given problem that Chris@10: was created with equal-or-greater planning rigor, then the more Chris@10: rigorous wisdom is used. For example, in `FFTW_ESTIMATE' mode any Chris@10: available wisdom is used, whereas in `FFTW_PATIENT' mode only wisdom Chris@10: created in patient or exhaustive mode can be used. *Note Words of Chris@10: Wisdom-Saving Plans::. Chris@10: Chris@10: Planning-rigor flags Chris@10: .................... Chris@10: Chris@10: * `FFTW_ESTIMATE' specifies that, instead of actual measurements of Chris@10: different algorithms, a simple heuristic is used to pick a Chris@10: (probably sub-optimal) plan quickly. With this flag, the Chris@10: input/output arrays are not overwritten during planning. 
Chris@10: Chris@10: * `FFTW_MEASURE' tells FFTW to find an optimized plan by actually Chris@10: _computing_ several FFTs and measuring their execution time. Chris@10: Depending on your machine, this can take some time (often a few Chris@10: seconds). `FFTW_MEASURE' is the default planning option. Chris@10: Chris@10: * `FFTW_PATIENT' is like `FFTW_MEASURE', but considers a wider range Chris@10: of algorithms and often produces a "more optimal" plan (especially Chris@10: for large transforms), but at the expense of several times longer Chris@10: planning time (especially for large transforms). Chris@10: Chris@10: * `FFTW_EXHAUSTIVE' is like `FFTW_PATIENT', but considers an even Chris@10: wider range of algorithms, including many that we think are Chris@10: unlikely to be fast, to produce the most optimal plan but with a Chris@10: substantially increased planning time. Chris@10: Chris@10: * `FFTW_WISDOM_ONLY' is a special planning mode in which the plan is Chris@10: only created if wisdom is available for the given problem, and Chris@10: otherwise a `NULL' plan is returned. This can be combined with Chris@10: other flags, e.g. `FFTW_WISDOM_ONLY | FFTW_PATIENT' creates a plan Chris@10: only if wisdom is available that was created in `FFTW_PATIENT' or Chris@10: `FFTW_EXHAUSTIVE' mode. The `FFTW_WISDOM_ONLY' flag is intended Chris@10: for users who need to detect whether wisdom is available; for Chris@10: example, if wisdom is not available one may wish to allocate new Chris@10: arrays for planning so that user data is not overwritten. Chris@10: Chris@10: Chris@10: Algorithm-restriction flags Chris@10: ........................... Chris@10: Chris@10: * `FFTW_DESTROY_INPUT' specifies that an out-of-place transform is Chris@10: allowed to _overwrite its input_ array with arbitrary data; this Chris@10: can sometimes allow more efficient algorithms to be employed. Chris@10: Chris@10: * `FFTW_PRESERVE_INPUT' specifies that an out-of-place transform must Chris@10: _not change its input_ array. This is ordinarily the _default_, Chris@10: except for c2r and hc2r (i.e. complex-to-real) transforms for Chris@10: which `FFTW_DESTROY_INPUT' is the default. In the latter cases, Chris@10: passing `FFTW_PRESERVE_INPUT' will attempt to use algorithms that Chris@10: do not destroy the input, at the expense of worse performance; for Chris@10: multi-dimensional c2r transforms, however, no input-preserving Chris@10: algorithms are implemented and the planner will return `NULL' if Chris@10: one is requested. Chris@10: Chris@10: * `FFTW_UNALIGNED' specifies that the algorithm may not impose any Chris@10: unusual alignment requirements on the input/output arrays (i.e. no Chris@10: SIMD may be used). This flag is normally _not necessary_, since Chris@10: the planner automatically detects misaligned arrays. The only use Chris@10: for this flag is if you want to use the new-array execute Chris@10: interface to execute a given plan on a different array that may Chris@10: not be aligned like the original. (Using `fftw_malloc' makes this Chris@10: flag unnecessary even then.) Chris@10: Chris@10: Chris@10: Limiting planning time Chris@10: ...................... Chris@10: Chris@10: extern void fftw_set_timelimit(double seconds); Chris@10: Chris@10: This function instructs FFTW to spend at most `seconds' seconds Chris@10: (approximately) in the planner. If `seconds == FFTW_NO_TIMELIMIT' (the Chris@10: default value, which is negative), then planning time is unbounded. 
Chris@10: Otherwise, FFTW plans with a progressively wider range of algorithms Chris@10: until the the given time limit is reached or the given range of Chris@10: algorithms is explored, returning the best available plan. Chris@10: Chris@10: For example, specifying `FFTW_PATIENT' first plans in Chris@10: `FFTW_ESTIMATE' mode, then in `FFTW_MEASURE' mode, then finally (time Chris@10: permitting) in `FFTW_PATIENT'. If `FFTW_EXHAUSTIVE' is specified Chris@10: instead, the planner will further progress to `FFTW_EXHAUSTIVE' mode. Chris@10: Chris@10: Note that the `seconds' argument specifies only a rough limit; in Chris@10: practice, the planner may use somewhat more time if the time limit is Chris@10: reached when the planner is in the middle of an operation that cannot Chris@10: be interrupted. At the very least, the planner will complete planning Chris@10: in `FFTW_ESTIMATE' mode (which is thus equivalent to a time limit of 0). Chris@10: Chris@10:  Chris@10: File: fftw3.info, Node: Real-data DFTs, Next: Real-data DFT Array Format, Prev: Planner Flags, Up: Basic Interface Chris@10: Chris@10: 4.3.3 Real-data DFTs Chris@10: -------------------- Chris@10: Chris@10: fftw_plan fftw_plan_dft_r2c_1d(int n0, Chris@10: double *in, fftw_complex *out, Chris@10: unsigned flags); Chris@10: fftw_plan fftw_plan_dft_r2c_2d(int n0, int n1, Chris@10: double *in, fftw_complex *out, Chris@10: unsigned flags); Chris@10: fftw_plan fftw_plan_dft_r2c_3d(int n0, int n1, int n2, Chris@10: double *in, fftw_complex *out, Chris@10: unsigned flags); Chris@10: fftw_plan fftw_plan_dft_r2c(int rank, const int *n, Chris@10: double *in, fftw_complex *out, Chris@10: unsigned flags); Chris@10: Chris@10: Plan a real-input/complex-output discrete Fourier transform (DFT) in Chris@10: zero or more dimensions, returning an `fftw_plan' (*note Using Plans::). Chris@10: Chris@10: Once you have created a plan for a certain transform type and Chris@10: parameters, then creating another plan of the same type and parameters, Chris@10: but for different arrays, is fast and shares constant data with the Chris@10: first plan (if it still exists). Chris@10: Chris@10: The planner returns `NULL' if the plan cannot be created. A Chris@10: non-`NULL' plan is always returned by the basic interface unless you Chris@10: are using a customized FFTW configuration supporting a restricted set Chris@10: of transforms, or if you use the `FFTW_PRESERVE_INPUT' flag with a Chris@10: multi-dimensional out-of-place c2r transform (see below). Chris@10: Chris@10: Arguments Chris@10: ......... Chris@10: Chris@10: * `rank' is the rank of the transform (it should be the size of the Chris@10: array `*n'), and can be any non-negative integer. (*Note Complex Chris@10: Multi-Dimensional DFTs::, for the definition of "rank".) The Chris@10: `_1d', `_2d', and `_3d' planners correspond to a `rank' of `1', Chris@10: `2', and `3', respectively. The rank may be zero, which is Chris@10: equivalent to a rank-1 transform of size 1, i.e. a copy of one Chris@10: real number (with zero imaginary part) from input to output. Chris@10: Chris@10: * `n0', `n1', `n2', or `n[0..rank-1]', (as appropriate for each Chris@10: routine) specify the size of the transform dimensions. They can Chris@10: be any positive integer. This is different in general from the Chris@10: _physical_ array dimensions, which are described in *note Chris@10: Real-data DFT Array Format::. 
Chris@10: Chris@10: - FFTW is best at handling sizes of the form 2^a 3^b 5^c 7^d Chris@10: 11^e 13^f, where e+f is either 0 or 1, and the other exponents Chris@10: are arbitrary. Other sizes are computed by means of a slow, Chris@10: general-purpose algorithm (which nevertheless retains O(n log Chris@10: n) performance even for prime sizes). (It is possible to Chris@10: customize FFTW for different array sizes; see *note Chris@10: Installation and Customization::.) Transforms whose sizes Chris@10: are powers of 2 are especially fast, and it is generally Chris@10: beneficial for the _last_ dimension of an r2c/c2r transform Chris@10: to be _even_. Chris@10: Chris@10: * `in' and `out' point to the input and output arrays of the Chris@10: transform, which may be the same (yielding an in-place transform). These Chris@10: arrays are overwritten during planning, unless `FFTW_ESTIMATE' is Chris@10: used in the flags. (The arrays need not be initialized, but they Chris@10: must be allocated.) For an in-place transform, it is important to Chris@10: remember that the real array will require padding, described in Chris@10: *note Real-data DFT Array Format::. Chris@10: Chris@10: * `flags' is a bitwise OR (`|') of zero or more planner flags, as Chris@10: defined in *note Planner Flags::. Chris@10: Chris@10: Chris@10: The inverse transforms, taking complex input (storing the Chris@10: non-redundant half of a logically Hermitian array) to real output, are Chris@10: given by: Chris@10: Chris@10: fftw_plan fftw_plan_dft_c2r_1d(int n0, Chris@10: fftw_complex *in, double *out, Chris@10: unsigned flags); Chris@10: fftw_plan fftw_plan_dft_c2r_2d(int n0, int n1, Chris@10: fftw_complex *in, double *out, Chris@10: unsigned flags); Chris@10: fftw_plan fftw_plan_dft_c2r_3d(int n0, int n1, int n2, Chris@10: fftw_complex *in, double *out, Chris@10: unsigned flags); Chris@10: fftw_plan fftw_plan_dft_c2r(int rank, const int *n, Chris@10: fftw_complex *in, double *out, Chris@10: unsigned flags); Chris@10: Chris@10: The arguments are the same as for the r2c transforms, except that the Chris@10: input and output data formats are reversed. Chris@10: Chris@10: FFTW computes an unnormalized transform: computing an r2c followed Chris@10: by a c2r transform (or vice versa) will result in the original data Chris@10: multiplied by the size of the transform (the product of the logical Chris@10: dimensions). An r2c transform produces the same output as a Chris@10: `FFTW_FORWARD' complex DFT of the same input, and a c2r transform is Chris@10: correspondingly equivalent to `FFTW_BACKWARD'. For more information, Chris@10: see *note What FFTW Really Computes::. Chris@10: Chris@10:  Chris@10: File: fftw3.info, Node: Real-data DFT Array Format, Next: Real-to-Real Transforms, Prev: Real-data DFTs, Up: Basic Interface Chris@10: Chris@10: 4.3.4 Real-data DFT Array Format Chris@10: -------------------------------- Chris@10: Chris@10: The output of a DFT of real data (r2c) contains symmetries that, in Chris@10: principle, make half of the outputs redundant (*note What FFTW Really Chris@10: Computes::). (Similarly for the input of an inverse c2r transform.) In Chris@10: practice, it is not possible to entirely realize these savings in an Chris@10: efficient and understandable format that generalizes to Chris@10: multi-dimensional transforms. Instead, the output of the r2c Chris@10: transforms is _slightly_ over half of the output of the corresponding Chris@10: complex transform. 
We do not "pack" the data in any way, but store it Chris@10: as an ordinary array of `fftw_complex' values. In fact, this data is Chris@10: simply a subsection of what would be the array in the corresponding Chris@10: complex transform. Chris@10: Chris@10: Specifically, for a real transform of d (= `rank') dimensions n[0] x Chris@10: n[1] x n[2] x ... x n[d-1] , the complex data is an n[0] x n[1] x n[2] Chris@10: x ... x (n[d-1]/2 + 1) array of `fftw_complex' values in row-major Chris@10: order (with the division rounded down). That is, we only store the Chris@10: _lower_ half (non-negative frequencies), plus one element, of the last Chris@10: dimension of the data from the ordinary complex transform. (We could Chris@10: have instead taken half of any other dimension, but implementation Chris@10: turns out to be simpler if the last, contiguous, dimension is used.) Chris@10: Chris@10: For an out-of-place transform, the real data is simply an array with Chris@10: physical dimensions n[0] x n[1] x n[2] x ... x n[d-1] in row-major Chris@10: order. Chris@10: Chris@10: For an in-place transform, some complications arise since the Chris@10: complex data is slightly larger than the real data. In this case, the Chris@10: final dimension of the real data must be _padded_ with extra values to Chris@10: accommodate the size of the complex data--two extra if the last Chris@10: dimension is even and one if it is odd. That is, the last dimension of Chris@10: the real data must physically contain 2 * (n[d-1]/2+1) `double' values Chris@10: (exactly enough to hold the complex data). This physical array size Chris@10: does not, however, change the _logical_ array size--only n[d-1] values Chris@10: are actually stored in the last dimension, and n[d-1] is the last Chris@10: dimension passed to the planner. Chris@10: Chris@10:  Chris@10: File: fftw3.info, Node: Real-to-Real Transforms, Next: Real-to-Real Transform Kinds, Prev: Real-data DFT Array Format, Up: Basic Interface Chris@10: Chris@10: 4.3.5 Real-to-Real Transforms Chris@10: ----------------------------- Chris@10: Chris@10: fftw_plan fftw_plan_r2r_1d(int n, double *in, double *out, Chris@10: fftw_r2r_kind kind, unsigned flags); Chris@10: fftw_plan fftw_plan_r2r_2d(int n0, int n1, double *in, double *out, Chris@10: fftw_r2r_kind kind0, fftw_r2r_kind kind1, Chris@10: unsigned flags); Chris@10: fftw_plan fftw_plan_r2r_3d(int n0, int n1, int n2, Chris@10: double *in, double *out, Chris@10: fftw_r2r_kind kind0, Chris@10: fftw_r2r_kind kind1, Chris@10: fftw_r2r_kind kind2, Chris@10: unsigned flags); Chris@10: fftw_plan fftw_plan_r2r(int rank, const int *n, double *in, double *out, Chris@10: const fftw_r2r_kind *kind, unsigned flags); Chris@10: Chris@10: Plan a real input/output (r2r) transform of various kinds in zero or Chris@10: more dimensions, returning an `fftw_plan' (*note Using Plans::). Chris@10: Chris@10: Once you have created a plan for a certain transform type and Chris@10: parameters, then creating another plan of the same type and parameters, Chris@10: but for different arrays, is fast and shares constant data with the Chris@10: first plan (if it still exists). Chris@10: Chris@10: The planner returns `NULL' if the plan cannot be created. A Chris@10: non-`NULL' plan is always returned by the basic interface unless you Chris@10: are using a customized FFTW configuration supporting a restricted set Chris@10: of transforms, or for size-1 `FFTW_REDFT00' kinds (which are not Chris@10: defined). Chris@10: Chris@10: Arguments Chris@10: ......... 
Chris@10: Chris@10: * `rank' is the dimensionality of the transform (it should be the Chris@10: size of the arrays `*n' and `*kind'), and can be any non-negative Chris@10: integer. The `_1d', `_2d', and `_3d' planners correspond to a Chris@10: `rank' of `1', `2', and `3', respectively. A `rank' of zero is Chris@10: equivalent to a copy of one number from input to output. Chris@10: Chris@10: * `n', or `n0'/`n1'/`n2', or `n[rank]', respectively, gives the Chris@10: (physical) size of the transform dimensions. They can be any Chris@10: positive integer. Chris@10: Chris@10: - Multi-dimensional arrays are stored in row-major order with Chris@10: dimensions: `n0' x `n1'; or `n0' x `n1' x `n2'; or `n[0]' x Chris@10: `n[1]' x ... x `n[rank-1]'. *Note Multi-dimensional Array Chris@10: Format::. Chris@10: Chris@10: - FFTW is generally best at handling sizes of the form 2^a 3^b Chris@10: 5^c 7^d 11^e 13^f, where e+f is either 0 or 1, and the other Chris@10: exponents are arbitrary. Other sizes are computed by means Chris@10: of a slow, general-purpose algorithm (which nevertheless Chris@10: retains O(n log n) performance even for prime sizes). (It Chris@10: is possible to customize FFTW for different array sizes; see Chris@10: *note Installation and Customization::.) Transforms whose Chris@10: sizes are powers of 2 are especially fast. Chris@10: Chris@10: - For a `REDFT00' or `RODFT00' transform kind in a dimension of Chris@10: size n, it is n-1 or n+1, respectively, that should be Chris@10: factorizable in the above form. Chris@10: Chris@10: * `in' and `out' point to the input and output arrays of the Chris@10: transform, which may be the same (yielding an in-place transform). These Chris@10: arrays are overwritten during planning, unless `FFTW_ESTIMATE' is Chris@10: used in the flags. (The arrays need not be initialized, but they Chris@10: must be allocated.) Chris@10: Chris@10: * `kind', or `kind0'/`kind1'/`kind2', or `kind[rank]', is the kind Chris@10: of r2r transform used for the corresponding dimension. The valid Chris@10: kind constants are described in *note Real-to-Real Transform Chris@10: Kinds::. In a multi-dimensional transform, what is computed is Chris@10: the separable product formed by taking each transform kind along Chris@10: the corresponding dimension, one dimension after another. Chris@10: Chris@10: * `flags' is a bitwise OR (`|') of zero or more planner flags, as Chris@10: defined in *note Planner Flags::. Chris@10: Chris@10: Chris@10:  Chris@10: File: fftw3.info, Node: Real-to-Real Transform Kinds, Prev: Real-to-Real Transforms, Up: Basic Interface Chris@10: Chris@10: 4.3.6 Real-to-Real Transform Kinds Chris@10: ---------------------------------- Chris@10: Chris@10: FFTW currently supports 11 different r2r transform kinds, specified by Chris@10: one of the constants below. For the precise definitions of these Chris@10: transforms, see *note What FFTW Really Computes::. For a more Chris@10: colloquial introduction to these transform kinds, see *note More DFTs Chris@10: of Real Data::. Chris@10: Chris@10: For dimension of size `n', there is a corresponding "logical" Chris@10: dimension `N' that determines the normalization (and the optimal Chris@10: factorization); the formula for `N' is given for each kind below. Chris@10: Also, with each transform kind is listed its corrsponding inverse Chris@10: transform. 
FFTW computes unnormalized transforms: a transform followed by its
inverse will result in the original data multiplied by `N' (or the
product of the `N''s for each dimension, in multi-dimensions).

* `FFTW_R2HC' computes a real-input DFT with output in "halfcomplex"
  format, i.e. real and imaginary parts for a transform of size `n'
  stored as: r0, r1, r2, ..., r(n/2), i((n+1)/2-1), ..., i2, i1
  (Logical `N=n', inverse is `FFTW_HC2R'.)

* `FFTW_HC2R' computes the reverse of `FFTW_R2HC', above.  (Logical
  `N=n', inverse is `FFTW_R2HC'.)

* `FFTW_DHT' computes a discrete Hartley transform.  (Logical `N=n',
  inverse is `FFTW_DHT'.)

* `FFTW_REDFT00' computes an REDFT00 transform, i.e. a DCT-I.
  (Logical `N=2*(n-1)', inverse is `FFTW_REDFT00'.)

* `FFTW_REDFT10' computes an REDFT10 transform, i.e. a DCT-II
  (sometimes called "the" DCT).  (Logical `N=2*n', inverse is
  `FFTW_REDFT01'.)

* `FFTW_REDFT01' computes an REDFT01 transform, i.e. a DCT-III
  (sometimes called "the" IDCT, being the inverse of DCT-II).
  (Logical `N=2*n', inverse is `FFTW_REDFT10'.)

* `FFTW_REDFT11' computes an REDFT11 transform, i.e. a DCT-IV.
  (Logical `N=2*n', inverse is `FFTW_REDFT11'.)

* `FFTW_RODFT00' computes an RODFT00 transform, i.e. a DST-I.
  (Logical `N=2*(n+1)', inverse is `FFTW_RODFT00'.)

* `FFTW_RODFT10' computes an RODFT10 transform, i.e. a DST-II.
  (Logical `N=2*n', inverse is `FFTW_RODFT01'.)

* `FFTW_RODFT01' computes an RODFT01 transform, i.e. a DST-III.
  (Logical `N=2*n', inverse is `FFTW_RODFT10'.)

* `FFTW_RODFT11' computes an RODFT11 transform, i.e. a DST-IV.
  (Logical `N=2*n', inverse is `FFTW_RODFT11'.)

File: fftw3.info, Node: Advanced Interface, Next: Guru Interface, Prev: Basic Interface, Up: FFTW Reference

4.4 Advanced Interface
======================

FFTW's "advanced" interface supplements the basic interface with four
new planner routines, providing a new level of flexibility: you can plan
a transform of multiple arrays simultaneously, operate on non-contiguous
(strided) data, and transform a subset of a larger multi-dimensional
array.  Other than these additional features, the planner operates in
the same fashion as in the basic interface, and the resulting
`fftw_plan' is used in the same way (*note Using Plans::).
Chris@10: Chris@10: * Menu: Chris@10: Chris@10: * Advanced Complex DFTs:: Chris@10: * Advanced Real-data DFTs:: Chris@10: * Advanced Real-to-real Transforms:: Chris@10: Chris@10:  Chris@10: File: fftw3.info, Node: Advanced Complex DFTs, Next: Advanced Real-data DFTs, Prev: Advanced Interface, Up: Advanced Interface Chris@10: Chris@10: 4.4.1 Advanced Complex DFTs Chris@10: --------------------------- Chris@10: Chris@10: fftw_plan fftw_plan_many_dft(int rank, const int *n, int howmany, Chris@10: fftw_complex *in, const int *inembed, Chris@10: int istride, int idist, Chris@10: fftw_complex *out, const int *onembed, Chris@10: int ostride, int odist, Chris@10: int sign, unsigned flags); Chris@10: Chris@10: This routine plans multiple multidimensional complex DFTs, and it Chris@10: extends the `fftw_plan_dft' routine (*note Complex DFTs::) to compute Chris@10: `howmany' transforms, each having rank `rank' and size `n'. In Chris@10: addition, the transform data need not be contiguous, but it may be laid Chris@10: out in memory with an arbitrary stride. To account for these Chris@10: possibilities, `fftw_plan_many_dft' adds the new parameters `howmany', Chris@10: {`i',`o'}`nembed', {`i',`o'}`stride', and {`i',`o'}`dist'. The FFTW Chris@10: basic interface (*note Complex DFTs::) provides routines specialized Chris@10: for ranks 1, 2, and 3, but the advanced interface handles only the Chris@10: general-rank case. Chris@10: Chris@10: `howmany' is the number of transforms to compute. The resulting Chris@10: plan computes `howmany' transforms, where the input of the `k'-th Chris@10: transform is at location `in+k*idist' (in C pointer arithmetic), and Chris@10: its output is at location `out+k*odist'. Plans obtained in this way Chris@10: can often be faster than calling FFTW multiple times for the individual Chris@10: transforms. The basic `fftw_plan_dft' interface corresponds to Chris@10: `howmany=1' (in which case the `dist' parameters are ignored). Chris@10: Chris@10: Each of the `howmany' transforms has rank `rank' and size `n', as in Chris@10: the basic interface. In addition, the advanced interface allows the Chris@10: input and output arrays of each transform to be row-major subarrays of Chris@10: larger rank-`rank' arrays, described by `inembed' and `onembed' Chris@10: parameters, respectively. {`i',`o'}`nembed' must be arrays of length Chris@10: `rank', and `n' should be elementwise less than or equal to Chris@10: {`i',`o'}`nembed'. Passing `NULL' for an `nembed' parameter is Chris@10: equivalent to passing `n' (i.e. same physical and logical dimensions, Chris@10: as in the basic interface.) Chris@10: Chris@10: The `stride' parameters indicate that the `j'-th element of the Chris@10: input or output arrays is located at `j*istride' or `j*ostride', Chris@10: respectively. (For a multi-dimensional array, `j' is the ordinary Chris@10: row-major index.) When combined with the `k'-th transform in a Chris@10: `howmany' loop, from above, this means that the (`j',`k')-th element is Chris@10: at `j*stride+k*dist'. (The basic `fftw_plan_dft' interface corresponds Chris@10: to a stride of 1.) Chris@10: Chris@10: For in-place transforms, the input and output `stride' and `dist' Chris@10: parameters should be the same; otherwise, the planner may return `NULL'. Chris@10: Chris@10: Arrays `n', `inembed', and `onembed' are not used after this Chris@10: function returns. You can safely free or reuse them. 
Chris@10: Chris@10: *Examples*: One transform of one 5 by 6 array contiguous in memory: Chris@10: int rank = 2; Chris@10: int n[] = {5, 6}; Chris@10: int howmany = 1; Chris@10: int idist = odist = 0; /* unused because howmany = 1 */ Chris@10: int istride = ostride = 1; /* array is contiguous in memory */ Chris@10: int *inembed = n, *onembed = n; Chris@10: Chris@10: Transform of three 5 by 6 arrays, each contiguous in memory, stored Chris@10: in memory one after another: Chris@10: int rank = 2; Chris@10: int n[] = {5, 6}; Chris@10: int howmany = 3; Chris@10: int idist = odist = n[0]*n[1]; /* = 30, the distance in memory Chris@10: between the first element Chris@10: of the first array and the Chris@10: first element of the second array */ Chris@10: int istride = ostride = 1; /* array is contiguous in memory */ Chris@10: int *inembed = n, *onembed = n; Chris@10: Chris@10: Transform each column of a 2d array with 10 rows and 3 columns: Chris@10: int rank = 1; /* not 2: we are computing 1d transforms */ Chris@10: int n[] = {10}; /* 1d transforms of length 10 */ Chris@10: int howmany = 3; Chris@10: int idist = odist = 1; Chris@10: int istride = ostride = 3; /* distance between two elements in Chris@10: the same column */ Chris@10: int *inembed = n, *onembed = n; Chris@10: Chris@10:  Chris@10: File: fftw3.info, Node: Advanced Real-data DFTs, Next: Advanced Real-to-real Transforms, Prev: Advanced Complex DFTs, Up: Advanced Interface Chris@10: Chris@10: 4.4.2 Advanced Real-data DFTs Chris@10: ----------------------------- Chris@10: Chris@10: fftw_plan fftw_plan_many_dft_r2c(int rank, const int *n, int howmany, Chris@10: double *in, const int *inembed, Chris@10: int istride, int idist, Chris@10: fftw_complex *out, const int *onembed, Chris@10: int ostride, int odist, Chris@10: unsigned flags); Chris@10: fftw_plan fftw_plan_many_dft_c2r(int rank, const int *n, int howmany, Chris@10: fftw_complex *in, const int *inembed, Chris@10: int istride, int idist, Chris@10: double *out, const int *onembed, Chris@10: int ostride, int odist, Chris@10: unsigned flags); Chris@10: Chris@10: Like `fftw_plan_many_dft', these two functions add `howmany', Chris@10: `nembed', `stride', and `dist' parameters to the `fftw_plan_dft_r2c' Chris@10: and `fftw_plan_dft_c2r' functions, but otherwise behave the same as the Chris@10: basic interface. Chris@10: Chris@10: The interpretation of `howmany', `stride', and `dist' are the same Chris@10: as for `fftw_plan_many_dft', above. Note that the `stride' and `dist' Chris@10: for the real array are in units of `double', and for the complex array Chris@10: are in units of `fftw_complex'. Chris@10: Chris@10: If an `nembed' parameter is `NULL', it is interpreted as what it Chris@10: would be in the basic interface, as described in *note Real-data DFT Chris@10: Array Format::. That is, for the complex array the size is assumed to Chris@10: be the same as `n', but with the last dimension cut roughly in half. Chris@10: For the real array, the size is assumed to be `n' if the transform is Chris@10: out-of-place, or `n' with the last dimension "padded" if the transform Chris@10: is in-place. Chris@10: Chris@10: If an `nembed' parameter is non-`NULL', it is interpreted as the Chris@10: physical size of the corresponding array, in row-major order, just as Chris@10: for `fftw_plan_many_dft'. In this case, each dimension of `nembed' Chris@10: should be `>=' what it would be in the basic interface (e.g. the halved Chris@10: or padded `n'). 
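As an additional sketch in the same style (ours, with made-up sizes),
here is one way to plan a batch of out-of-place r2c transforms, one per
row of a `howmany x 128' real array stored contiguously in row-major
order:

     int rank = 1, n[] = {128};     /* 1d r2c transforms of length 128 */
     int howmany = 64;              /* one transform per row */
     double *in = fftw_alloc_real(howmany * 128);
     fftw_complex *out = fftw_alloc_complex(howmany * (128/2 + 1));

     /* Within a row the data is contiguous (stride 1); consecutive rows
        are 128 reals apart on input and 128/2 + 1 complex values apart
        on output.  NULL nembed parameters mean the same as n, with the
        last dimension halved on the complex side as described above. */
     fftw_plan p = fftw_plan_many_dft_r2c(rank, n, howmany,
                                          in, NULL, 1, 128,
                                          out, NULL, 1, 128/2 + 1,
                                          FFTW_MEASURE);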
Arrays `n', `inembed', and `onembed' are not used after this function
returns.  You can safely free or reuse them.

File: fftw3.info, Node: Advanced Real-to-real Transforms, Prev: Advanced Real-data DFTs, Up: Advanced Interface

4.4.3 Advanced Real-to-real Transforms
--------------------------------------

     fftw_plan fftw_plan_many_r2r(int rank, const int *n, int howmany,
                                  double *in, const int *inembed,
                                  int istride, int idist,
                                  double *out, const int *onembed,
                                  int ostride, int odist,
                                  const fftw_r2r_kind *kind, unsigned flags);

Like `fftw_plan_many_dft', this function adds `howmany', `nembed',
`stride', and `dist' parameters to the `fftw_plan_r2r' function, but
otherwise behaves the same as the basic interface.  The interpretation
of those additional parameters is the same as for
`fftw_plan_many_dft'.  (Of course, the `stride' and `dist' parameters
are now in units of `double', not `fftw_complex'.)

Arrays `n', `inembed', `onembed', and `kind' are not used after this
function returns.  You can safely free or reuse them.

File: fftw3.info, Node: Guru Interface, Next: New-array Execute Functions, Prev: Advanced Interface, Up: FFTW Reference

4.5 Guru Interface
==================

The "guru" interface to FFTW is intended to expose as much as possible
of the flexibility in the underlying FFTW architecture.  It allows one
to compute multi-dimensional "vectors" (loops) of multi-dimensional
transforms, where each vector/transform dimension has an independent
size and stride.  One can also use more general complex-number formats,
e.g. separate real and imaginary arrays.

For those users who require the flexibility of the guru interface, it
is important that they pay special attention to the documentation lest
they shoot themselves in the foot.

* Menu:

* Interleaved and split arrays::
* Guru vector and transform sizes::
* Guru Complex DFTs::
* Guru Real-data DFTs::
* Guru Real-to-real Transforms::
* 64-bit Guru Interface::

File: fftw3.info, Node: Interleaved and split arrays, Next: Guru vector and transform sizes, Prev: Guru Interface, Up: Guru Interface

4.5.1 Interleaved and split arrays
----------------------------------

The guru interface supports two representations of complex numbers,
which we call the interleaved and the split format.

The "interleaved" format is the same one used by the basic and
advanced interfaces, and it is documented in *note Complex numbers::.
In the interleaved format, you provide pointers to the real part of a
complex number, and the imaginary part is understood to be stored in
the next memory location.

The "split" format allows separate pointers to the real and imaginary
parts of a complex array.
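For example (a sketch of ours), a split-format array of `n' complex
values is simply two ordinary real arrays:

     double *re = fftw_alloc_real(n);   /* real parts      */
     double *im = fftw_alloc_real(n);   /* imaginary parts */
     /* The j-th complex value is (re[j], im[j]); the guru planners
        below accept the two pointers separately. */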
   Technically, the interleaved format is redundant, because you can always express an interleaved array in terms of a split array with appropriate pointers and strides.  On the other hand, the interleaved format is simpler to use, and it is common in practice.  Hence, FFTW supports it as a special case.


File: fftw3.info, Node: Guru vector and transform sizes, Next: Guru Complex DFTs, Prev: Interleaved and split arrays, Up: Guru Interface

4.5.2 Guru vector and transform sizes
-------------------------------------

The guru interface introduces one basic new data structure, `fftw_iodim', that is used to specify sizes and strides for multi-dimensional transforms and vectors:

     typedef struct {
          int n;
          int is;
          int os;
     } fftw_iodim;

   Here, `n' is the size of the dimension, and `is' and `os' are the strides of that dimension for the input and output arrays.  (The stride is the separation of consecutive elements along this dimension.)

   The meaning of the stride parameter depends on the type of the array that the stride refers to.  _If the array is interleaved complex, strides are expressed in units of complex numbers (`fftw_complex').  If the array is split complex or real, strides are expressed in units of real numbers (`double')._  This convention is consistent with the usual pointer arithmetic in the C language.  An interleaved array is denoted by a pointer `p' to `fftw_complex', so that `p+1' points to the next complex number.  Split arrays are denoted by pointers to `double', in which case pointer arithmetic operates in units of `sizeof(double)'.

   The guru planner interfaces all take a (`rank', `dims[rank]') pair describing the transform size, and a (`howmany_rank', `howmany_dims[howmany_rank]') pair describing the "vector" size (a multi-dimensional loop of transforms to perform), where `dims' and `howmany_dims' are arrays of `fftw_iodim'.

   For example, the `howmany' parameter in the advanced complex-DFT interface corresponds to `howmany_rank' = 1, `howmany_dims[0].n' = `howmany', `howmany_dims[0].is' = `idist', and `howmany_dims[0].os' = `odist'.  (To compute a single transform, you can just use `howmany_rank' = 0.)

   A row-major multidimensional array with dimensions `n[rank]' (*note Row-major Format::) corresponds to `dims[i].n' = `n[i]' and the recurrence `dims[i].is' = `n[i+1] * dims[i+1].is' (similarly for `os').  The stride of the last (`i=rank-1') dimension is the overall stride of the array.  For example, to be equivalent to the advanced complex-DFT interface, you would have `dims[rank-1].is' = `istride' and `dims[rank-1].os' = `ostride'.

   In general, we only guarantee FFTW to return a non-`NULL' plan if the vector and transform dimensions correspond to a set of distinct indices, and for in-place transforms the input/output strides should be the same.
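   As a concrete sketch (an added illustration with arbitrary sizes, using the `fftw_plan_guru_dft' routine documented in the following section), the following sets up `dims' and `howmany_dims' for a loop of 10 contiguous, interleaved 8 by 8 complex transforms, equivalent to the advanced interface with `howmany' = 10 and `idist' = `odist' = 64:

     fftw_iodim dims[2], howmany_dims[1];

     dims[0].n = 8;  dims[0].is = 8;  dims[0].os = 8;  /* row dimension */
     dims[1].n = 8;  dims[1].is = 1;  dims[1].os = 1;  /* contiguous last dimension */

     howmany_dims[0].n = 10;                           /* 10 transforms...         */
     howmany_dims[0].is = howmany_dims[0].os = 64;     /* ...spaced 8*8 complex apart */

     /* in and out are fftw_complex arrays of 10*64 elements,
        e.g. from fftw_alloc_complex(10*64) */
     fftw_plan p = fftw_plan_guru_dft(2, dims, 1, howmany_dims,
                                      in, out, FFTW_FORWARD, FFTW_ESTIMATE);

   Since the arrays here are interleaved, all strides are expressed in units of `fftw_complex'.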

File: fftw3.info, Node: Guru Complex DFTs, Next: Guru Real-data DFTs, Prev: Guru vector and transform sizes, Up: Guru Interface

4.5.3 Guru Complex DFTs
-----------------------

     fftw_plan fftw_plan_guru_dft(
          int rank, const fftw_iodim *dims,
          int howmany_rank, const fftw_iodim *howmany_dims,
          fftw_complex *in, fftw_complex *out,
          int sign, unsigned flags);

     fftw_plan fftw_plan_guru_split_dft(
          int rank, const fftw_iodim *dims,
          int howmany_rank, const fftw_iodim *howmany_dims,
          double *ri, double *ii, double *ro, double *io,
          unsigned flags);

   These two functions plan a complex-data, multi-dimensional DFT for the interleaved and split format, respectively.  Transform dimensions are given by (`rank', `dims') over a multi-dimensional vector (loop) of dimensions (`howmany_rank', `howmany_dims').  `dims' and `howmany_dims' should point to `fftw_iodim' arrays of length `rank' and `howmany_rank', respectively.

   `flags' is a bitwise OR (`|') of zero or more planner flags, as defined in *note Planner Flags::.

   In the `fftw_plan_guru_dft' function, the pointers `in' and `out' point to the interleaved input and output arrays, respectively.  The sign can be either -1 (= `FFTW_FORWARD') or +1 (= `FFTW_BACKWARD').  If the pointers are equal, the transform is in-place.

   In the `fftw_plan_guru_split_dft' function, `ri' and `ii' point to the real and imaginary input arrays, and `ro' and `io' point to the real and imaginary output arrays.  The input and output pointers may be the same, indicating an in-place transform.  For example, for `fftw_complex' pointers `in' and `out', the corresponding parameters are:

     ri = (double *) in;
     ii = (double *) in + 1;
     ro = (double *) out;
     io = (double *) out + 1;

   Because `fftw_plan_guru_split_dft' accepts split arrays, strides are expressed in units of `double'.  For a contiguous `fftw_complex' array, the overall stride of the transform should be 2, the distance between consecutive real parts or between consecutive imaginary parts; see *note Guru vector and transform sizes::.  Note that the dimension strides are applied equally to the real and imaginary parts; real and imaginary arrays with different strides are not supported.

   There is no `sign' parameter in `fftw_plan_guru_split_dft'.  This function always plans for an `FFTW_FORWARD' transform.  To plan for an `FFTW_BACKWARD' transform, you can exploit the identity that the backwards DFT is equal to the forwards DFT with the real and imaginary parts swapped.
For example, in the case of the `fftw_complex' arrays above, the `FFTW_BACKWARD' transform is computed by the parameters:

     ri = (double *) in + 1;
     ii = (double *) in;
     ro = (double *) out + 1;
     io = (double *) out;


File: fftw3.info, Node: Guru Real-data DFTs, Next: Guru Real-to-real Transforms, Prev: Guru Complex DFTs, Up: Guru Interface

4.5.4 Guru Real-data DFTs
-------------------------

     fftw_plan fftw_plan_guru_dft_r2c(
          int rank, const fftw_iodim *dims,
          int howmany_rank, const fftw_iodim *howmany_dims,
          double *in, fftw_complex *out,
          unsigned flags);

     fftw_plan fftw_plan_guru_split_dft_r2c(
          int rank, const fftw_iodim *dims,
          int howmany_rank, const fftw_iodim *howmany_dims,
          double *in, double *ro, double *io,
          unsigned flags);

     fftw_plan fftw_plan_guru_dft_c2r(
          int rank, const fftw_iodim *dims,
          int howmany_rank, const fftw_iodim *howmany_dims,
          fftw_complex *in, double *out,
          unsigned flags);

     fftw_plan fftw_plan_guru_split_dft_c2r(
          int rank, const fftw_iodim *dims,
          int howmany_rank, const fftw_iodim *howmany_dims,
          double *ri, double *ii, double *out,
          unsigned flags);

   Plan a real-input (r2c) or real-output (c2r), multi-dimensional DFT with transform dimensions given by (`rank', `dims') over a multi-dimensional vector (loop) of dimensions (`howmany_rank', `howmany_dims').  `dims' and `howmany_dims' should point to `fftw_iodim' arrays of length `rank' and `howmany_rank', respectively.  As for the basic and advanced interfaces, an r2c transform is `FFTW_FORWARD' and a c2r transform is `FFTW_BACKWARD'.

   The _last_ dimension of `dims' is interpreted specially: that dimension of the real array has size `dims[rank-1].n', but that dimension of the complex array has size `dims[rank-1].n/2+1' (division rounded down).  The strides, on the other hand, are taken to be exactly as specified.  It is up to the user to specify the strides appropriately for the peculiar dimensions of the data, and we do not guarantee that the planner will succeed (return non-`NULL') for any dimensions other than those described in *note Real-data DFT Array Format:: and generalized in *note Advanced Real-data DFTs::.  (That is, for an in-place transform, each individual dimension should be able to operate in place.)

   `in' and `out' point to the input and output arrays for r2c and c2r transforms, respectively.  For split arrays, `ri' and `ii' point to the real and imaginary input arrays for a c2r transform, and `ro' and `io' point to the real and imaginary output arrays for an r2c transform.  `in' and `ro' or `ri' and `out' may be the same, indicating an in-place transform.  (In-place transforms where `in' and `io' or `ii' and `out' are the same are not currently supported.)

   `flags' is a bitwise OR (`|') of zero or more planner flags, as defined in *note Planner Flags::.
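   As a sketch of these stride conventions (an added illustration with arbitrary sizes, not part of the original text), an out-of-place r2c transform of a 32 by 32 row-major real array into a 32 by 17 complex array could be planned as follows; we pass `NULL' for the empty `howmany_dims' since `howmany_rank' is 0:

     fftw_iodim dims[2];

     /* last dimension: 32 reals in, 32/2+1 = 17 complex out, both contiguous */
     dims[1].n = 32;  dims[1].is = 1;   dims[1].os = 1;
     /* first dimension: row stride is 32 doubles in, 17 fftw_complex out */
     dims[0].n = 32;  dims[0].is = 32;  dims[0].os = 17;

     double *in = fftw_alloc_real(32 * 32);
     fftw_complex *out = fftw_alloc_complex(32 * 17);
     fftw_plan p = fftw_plan_guru_dft_r2c(2, dims, 0, NULL,  /* no vector loop */
                                          in, out, FFTW_ESTIMATE);

   Note that the real-array strides are in units of `double' and the complex-array strides in units of `fftw_complex', following the rule in *note Guru vector and transform sizes::.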
   In-place transforms of rank greater than 1 are currently only supported for interleaved arrays.  For split arrays, the planner will return `NULL'.


File: fftw3.info, Node: Guru Real-to-real Transforms, Next: 64-bit Guru Interface, Prev: Guru Real-data DFTs, Up: Guru Interface

4.5.5 Guru Real-to-real Transforms
----------------------------------

     fftw_plan fftw_plan_guru_r2r(int rank, const fftw_iodim *dims,
                                  int howmany_rank,
                                  const fftw_iodim *howmany_dims,
                                  double *in, double *out,
                                  const fftw_r2r_kind *kind,
                                  unsigned flags);

   Plan a real-to-real (r2r) multi-dimensional `FFTW_FORWARD' transform with transform dimensions given by (`rank', `dims') over a multi-dimensional vector (loop) of dimensions (`howmany_rank', `howmany_dims').  `dims' and `howmany_dims' should point to `fftw_iodim' arrays of length `rank' and `howmany_rank', respectively.

   The transform kind of each dimension is given by the `kind' parameter, which should point to an array of length `rank'.  Valid `fftw_r2r_kind' constants are given in *note Real-to-Real Transform Kinds::.

   `in' and `out' point to the real input and output arrays; they may be the same, indicating an in-place transform.

   `flags' is a bitwise OR (`|') of zero or more planner flags, as defined in *note Planner Flags::.


File: fftw3.info, Node: 64-bit Guru Interface, Prev: Guru Real-to-real Transforms, Up: Guru Interface

4.5.6 64-bit Guru Interface
---------------------------

When compiled in 64-bit mode on a 64-bit architecture (where addresses are 64 bits wide), FFTW uses 64-bit quantities internally for all transform sizes, strides, and so on--you don't have to do anything special to exploit this.  However, in the ordinary FFTW interfaces, you specify the transform size by an `int' quantity, which is normally only 32 bits wide.  This means that, even though FFTW is using 64-bit sizes internally, you cannot specify a single transform dimension larger than 2^31-1 numbers.

   We expect that few users will require transforms larger than this, but, for those who do, we provide a 64-bit version of the guru interface in which all sizes are specified as integers of type `ptrdiff_t' instead of `int'.  (`ptrdiff_t' is a signed integer type defined by the C standard to be wide enough to represent address differences, and thus must be at least 64 bits wide on a 64-bit machine.)  We stress that there is _no performance advantage_ to using this interface--the same internal FFTW code is employed regardless--and it is only necessary if you want to specify very large transform sizes.

   In particular, the 64-bit guru interface is a set of planner routines that are exactly the same as the guru planner routines, except that they are named with `guru64' instead of `guru' and they take arguments of type `fftw_iodim64' instead of `fftw_iodim'.  For example, instead of `fftw_plan_guru_dft', we have `fftw_plan_guru64_dft'.
     fftw_plan fftw_plan_guru64_dft(
          int rank, const fftw_iodim64 *dims,
          int howmany_rank, const fftw_iodim64 *howmany_dims,
          fftw_complex *in, fftw_complex *out,
          int sign, unsigned flags);

   The `fftw_iodim64' type is similar to `fftw_iodim', with the same interpretation, except that it uses type `ptrdiff_t' instead of type `int'.

     typedef struct {
          ptrdiff_t n;
          ptrdiff_t is;
          ptrdiff_t os;
     } fftw_iodim64;

   Every other `fftw_plan_guru' function also has a `fftw_plan_guru64' equivalent, but we do not repeat their documentation here since they are identical to the 32-bit versions except as noted above.


File: fftw3.info, Node: New-array Execute Functions, Next: Wisdom, Prev: Guru Interface, Up: FFTW Reference

4.6 New-array Execute Functions
===============================

Normally, one executes a plan for the arrays with which the plan was created, by calling `fftw_execute(plan)' as described in *note Using Plans::.  However, it is possible for sophisticated users to apply a given plan to a _different_ array using the "new-array execute" functions detailed below, provided that the following conditions are met:

   * The array size, strides, etcetera are the same (since those are set by the plan).

   * The input and output arrays are the same (in-place) or different (out-of-place) if the plan was originally created to be in-place or out-of-place, respectively.

   * For split arrays, the separations between the real and imaginary parts, `ii-ri' and `io-ro', are the same as they were for the input and output arrays when the plan was created.  (This condition is automatically satisfied for interleaved arrays.)

   * The "alignment" of the new input/output arrays is the same as that of the input/output arrays when the plan was created, unless the plan was created with the `FFTW_UNALIGNED' flag.  Here, the alignment is a platform-dependent quantity (for example, it is the address modulo 16 if SSE SIMD instructions are used, but the address modulo 4 for non-SIMD single-precision FFTW on the same machine).  In general, only arrays allocated with `fftw_malloc' are guaranteed to be equally aligned (*note SIMD alignment and fftw_malloc::).

   The alignment issue is especially critical, because if you don't use `fftw_malloc' then you may have little control over the alignment of arrays in memory.  For example, neither the C++ `new' operator nor the Fortran `allocate' statement provides strong enough guarantees about data alignment.  If you don't use `fftw_malloc', therefore, you probably have to use `FFTW_UNALIGNED' (which disables most SIMD support).  If possible, it is probably better for you to simply create multiple plans (creating a new plan is quick once one exists for a given size), or better yet re-use the same array for your transforms.
   If you are tempted to use the new-array execute interface because you want to transform a known bunch of arrays of the same size, you should probably use the advanced interface instead (*note Advanced Interface::).

   The new-array execute functions are:

     void fftw_execute_dft(
          const fftw_plan p,
          fftw_complex *in, fftw_complex *out);

     void fftw_execute_split_dft(
          const fftw_plan p,
          double *ri, double *ii, double *ro, double *io);

     void fftw_execute_dft_r2c(
          const fftw_plan p,
          double *in, fftw_complex *out);

     void fftw_execute_split_dft_r2c(
          const fftw_plan p,
          double *in, double *ro, double *io);

     void fftw_execute_dft_c2r(
          const fftw_plan p,
          fftw_complex *in, double *out);

     void fftw_execute_split_dft_c2r(
          const fftw_plan p,
          double *ri, double *ii, double *out);

     void fftw_execute_r2r(
          const fftw_plan p,
          double *in, double *out);

   These execute the `plan' to compute the corresponding transform on the input/output arrays specified by the subsequent arguments.  The input/output array arguments have the same meanings as the ones passed to the guru planner routines in the preceding sections.  The `plan' is not modified, and these routines can be called as many times as desired, or intermixed with calls to the ordinary `fftw_execute'.

   The `plan' _must_ have been created for the transform type corresponding to the execute function, e.g. it must be a complex-DFT plan for `fftw_execute_dft'.  Any of the planner routines for that transform type, from the basic to the guru interface, could have been used to create the plan, however.


File: fftw3.info, Node: Wisdom, Next: What FFTW Really Computes, Prev: New-array Execute Functions, Up: FFTW Reference

4.7 Wisdom
==========

This section documents the FFTW mechanism for saving and restoring plans from disk.  This mechanism is called "wisdom".

* Menu:

* Wisdom Export::
* Wisdom Import::
* Forgetting Wisdom::
* Wisdom Utilities::


File: fftw3.info, Node: Wisdom Export, Next: Wisdom Import, Prev: Wisdom, Up: Wisdom

4.7.1 Wisdom Export
-------------------

     int fftw_export_wisdom_to_filename(const char *filename);
     void fftw_export_wisdom_to_file(FILE *output_file);
     char *fftw_export_wisdom_to_string(void);
     void fftw_export_wisdom(void (*write_char)(char c, void *), void *data);

   These functions allow you to export all currently accumulated wisdom in a form from which it can be later imported and restored, even during a separate run of the program.  (*Note Words of Wisdom-Saving Plans::.)  The current store of wisdom is not affected by calling any of these routines.

   `fftw_export_wisdom' exports the wisdom to any output medium, as specified by the callback function `write_char'.
`write_char' is a `putc'-like function that writes the character `c' to some output; its second parameter is the `data' pointer passed to `fftw_export_wisdom'.  For convenience, the following three "wrapper" routines are provided:

   `fftw_export_wisdom_to_filename' writes wisdom to a file named `filename' (which is created or overwritten), returning `1' on success and `0' on failure.  A lower-level function, which requires you to open and close the file yourself (e.g. if you want to write wisdom to a portion of a larger file), is `fftw_export_wisdom_to_file'.  This writes the wisdom to the current position in `output_file', which should be open with write permission; upon exit, the file remains open and is positioned at the end of the wisdom data.

   `fftw_export_wisdom_to_string' returns a pointer to a `NULL'-terminated string holding the wisdom data.  This string is dynamically allocated, and it is the responsibility of the caller to deallocate it with `free' when it is no longer needed.

   All of these routines export the wisdom in the same format, which we will not document here except to say that it is LISP-like ASCII text that is insensitive to white space.


File: fftw3.info, Node: Wisdom Import, Next: Forgetting Wisdom, Prev: Wisdom Export, Up: Wisdom

4.7.2 Wisdom Import
-------------------

     int fftw_import_system_wisdom(void);
     int fftw_import_wisdom_from_filename(const char *filename);
     int fftw_import_wisdom_from_string(const char *input_string);
     int fftw_import_wisdom(int (*read_char)(void *), void *data);

   These functions import wisdom into a program from data stored by the `fftw_export_wisdom' functions above.  (*Note Words of Wisdom-Saving Plans::.)  The imported wisdom replaces any wisdom already accumulated by the running program.

   `fftw_import_wisdom' imports wisdom from any input medium, as specified by the callback function `read_char'.  `read_char' is a `getc'-like function that returns the next character in the input; its parameter is the `data' pointer passed to `fftw_import_wisdom'.  If the end of the input data is reached (which should never happen for valid data), `read_char' should return `EOF' (as defined in `<stdio.h>').  For convenience, the following three "wrapper" routines are provided:

   `fftw_import_wisdom_from_filename' reads wisdom from a file named `filename'.  A lower-level function, which requires you to open and close the file yourself (e.g. if you want to read wisdom from a portion of a larger file), is `fftw_import_wisdom_from_file'.  This reads wisdom from the current position in `input_file' (which should be open with read permission); upon exit, the file remains open, but the position of the read pointer is unspecified.

   `fftw_import_wisdom_from_string' reads wisdom from the `NULL'-terminated string `input_string'.

   `fftw_import_system_wisdom' reads wisdom from an implementation-defined standard file (`/etc/fftw/wisdom' on Unix and GNU systems).
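   As an illustration (an added sketch, not from the original text; the filename is hypothetical and `<stdio.h>' is assumed to be included), a program might try to load previously saved wisdom at startup and save the accumulated wisdom before exiting:

     /* at startup: harmless if the file does not exist yet */
     fftw_import_wisdom_from_filename("my_wisdom.dat");

     /* ... create plans (now accelerated by any imported wisdom),
        execute transforms ... */

     /* before exiting: save everything learned so far */
     if (!fftw_export_wisdom_to_filename("my_wisdom.dat"))
          fprintf(stderr, "warning: could not save wisdom\n");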
   The return value of these import routines is `1' if the wisdom was read successfully and `0' otherwise.  Note that, in all of these functions, any data in the input stream past the end of the wisdom data is simply ignored.


File: fftw3.info, Node: Forgetting Wisdom, Next: Wisdom Utilities, Prev: Wisdom Import, Up: Wisdom

4.7.3 Forgetting Wisdom
-----------------------

     void fftw_forget_wisdom(void);

   Calling `fftw_forget_wisdom' causes all accumulated `wisdom' to be discarded and its associated memory to be freed.  (New `wisdom' can still be gathered subsequently, however.)


File: fftw3.info, Node: Wisdom Utilities, Prev: Forgetting Wisdom, Up: Wisdom

4.7.4 Wisdom Utilities
----------------------

FFTW includes two standalone utility programs that deal with wisdom.  We merely summarize them here, since they come with their own `man' pages for Unix and GNU systems (with HTML versions on our web site).

   The first program is `fftw-wisdom' (or `fftwf-wisdom' in single precision, etcetera), which can be used to create a wisdom file containing plans for any of the transform sizes and types supported by FFTW.  It is preferable to create wisdom directly from your executable (*note Caveats in Using Wisdom::), but this program is useful for creating global wisdom files for `fftw_import_system_wisdom'.

   The second program is `fftw-wisdom-to-conf', which takes a wisdom file as input and produces a "configuration routine" as output.  The latter is a C subroutine that you can compile and link into your program, replacing a routine of the same name in the FFTW library, that determines which parts of FFTW are callable by your program.  `fftw-wisdom-to-conf' produces a configuration routine that links to only those parts of FFTW needed by the saved plans in the wisdom, greatly reducing the size of statically linked executables (which should only attempt to create plans corresponding to those in the wisdom, however).


File: fftw3.info, Node: What FFTW Really Computes, Prev: Wisdom, Up: FFTW Reference

4.8 What FFTW Really Computes
=============================

In this section, we provide precise mathematical definitions for the transforms that FFTW computes.  These transform definitions are fairly standard, but some authors follow slightly different conventions for the normalization of the transform (the constant factor in front) and the sign of the complex exponent.  We begin by presenting the one-dimensional (1d) transform definitions, and then give the straightforward extension to multi-dimensional transforms.

* Menu:

* The 1d Discrete Fourier Transform (DFT)::
* The 1d Real-data DFT::
* 1d Real-even DFTs (DCTs)::
* 1d Real-odd DFTs (DSTs)::
* 1d Discrete Hartley Transforms (DHTs)::
* Multi-dimensional Transforms::


File: fftw3.info, Node: The 1d Discrete Fourier Transform (DFT), Next: The 1d Real-data DFT, Prev: What FFTW Really Computes, Up: What FFTW Really Computes

4.8.1 The 1d Discrete Fourier Transform (DFT)
---------------------------------------------

The forward (`FFTW_FORWARD') discrete Fourier transform (DFT) of a 1d complex array X of size n computes an array Y, where:

     Y[k] = sum for j = 0 to (n - 1) of X[j] * exp(-2 pi j k sqrt(-1)/n) .

The backward (`FFTW_BACKWARD') DFT computes:

     Y[k] = sum for j = 0 to (n - 1) of X[j] * exp(2 pi j k sqrt(-1)/n) .

   FFTW computes an unnormalized transform, in that there is no coefficient in front of the summation in the DFT.  In other words, applying the forward and then the backward transform will multiply the input by n.

   From above, an `FFTW_FORWARD' transform corresponds to a sign of -1 in the exponent of the DFT.  Note also that we use the standard "in-order" output ordering--the k-th output corresponds to the frequency k/n (or k/T, where T is your total sampling period).  For those who like to think in terms of positive and negative frequencies, this means that the positive frequencies are stored in the first half of the output and the negative frequencies are stored in backwards order in the second half of the output.  (The frequency -k/n is the same as the frequency (n-k)/n.)


File: fftw3.info, Node: The 1d Real-data DFT, Next: 1d Real-even DFTs (DCTs), Prev: The 1d Discrete Fourier Transform (DFT), Up: What FFTW Really Computes

4.8.2 The 1d Real-data DFT
--------------------------

The real-input (r2c) DFT in FFTW computes the _forward_ transform Y of the size `n' real array X, exactly as defined above, i.e.

     Y[k] = sum for j = 0 to (n - 1) of X[j] * exp(-2 pi j k sqrt(-1)/n) .

This output array Y can easily be shown to possess the "Hermitian" symmetry Y[k] = Y[n-k]*, where we take Y to be periodic so that Y[n] = Y[0].

   As a result of this symmetry, half of the output Y is redundant (being the complex conjugate of the other half), and so the 1d r2c transforms only output elements 0...n/2 of Y (n/2+1 complex numbers), where the division by 2 is rounded down.

   Moreover, the Hermitian symmetry implies that Y[0] and, if n is even, the Y[n/2] element, are purely real.  So, for the `R2HC' r2r transform, the imaginary parts of these elements are not stored in the halfcomplex output format.

   The c2r and `HC2R' r2r transforms compute the backward DFT of the _complex_ array X with Hermitian symmetry, stored in the r2c/`R2HC' output formats, respectively, where the backward transform is defined exactly as for the complex case:

     Y[k] = sum for j = 0 to (n - 1) of X[j] * exp(2 pi j k sqrt(-1)/n) .
Chris@10: The outputs `Y' of this transform can easily be seen to be purely Chris@10: real, and are stored as an array of real numbers. Chris@10: Chris@10: Like FFTW's complex DFT, these transforms are unnormalized. In other Chris@10: words, applying the real-to-complex (forward) and then the Chris@10: complex-to-real (backward) transform will multiply the input by n. Chris@10: Chris@10:  Chris@10: File: fftw3.info, Node: 1d Real-even DFTs (DCTs), Next: 1d Real-odd DFTs (DSTs), Prev: The 1d Real-data DFT, Up: What FFTW Really Computes Chris@10: Chris@10: 4.8.3 1d Real-even DFTs (DCTs) Chris@10: ------------------------------ Chris@10: Chris@10: The Real-even symmetry DFTs in FFTW are exactly equivalent to the Chris@10: unnormalized forward (and backward) DFTs as defined above, where the Chris@10: input array X of length N is purely real and is also "even" symmetry. Chris@10: In this case, the output array is likewise real and even symmetry. Chris@10: Chris@10: For the case of `REDFT00', this even symmetry means that X[j] = Chris@10: X[N-j], where we take X to be periodic so that X[N] = X[0]. Because of Chris@10: this redundancy, only the first n real numbers are actually stored, Chris@10: where N = 2(n-1). Chris@10: Chris@10: The proper definition of even symmetry for `REDFT10', `REDFT01', and Chris@10: `REDFT11' transforms is somewhat more intricate because of the shifts Chris@10: by 1/2 of the input and/or output, although the corresponding boundary Chris@10: conditions are given in *note Real even/odd DFTs (cosine/sine Chris@10: transforms)::. Because of the even symmetry, however, the sine terms Chris@10: in the DFT all cancel and the remaining cosine terms are written Chris@10: explicitly below. This formulation often leads people to call such a Chris@10: transform a "discrete cosine transform" (DCT), although it is really Chris@10: just a special case of the DFT. Chris@10: Chris@10: In each of the definitions below, we transform a real array X of Chris@10: length n to a real array Y of length n: Chris@10: Chris@10: REDFT00 (DCT-I) Chris@10: ............... Chris@10: Chris@10: An `REDFT00' transform (type-I DCT) in FFTW is defined by: Y[k] = X[0] Chris@10: + (-1)^k X[n-1] + 2 (sum for j = 1 to n-2 of X[j] cos(pi jk /(n-1))). Chris@10: Note that this transform is not defined for n=1. For n=2, the Chris@10: summation term above is dropped as you might expect. Chris@10: Chris@10: REDFT10 (DCT-II) Chris@10: ................ Chris@10: Chris@10: An `REDFT10' transform (type-II DCT, sometimes called "the" DCT) in Chris@10: FFTW is defined by: Y[k] = 2 (sum for j = 0 to n-1 of X[j] cos(pi Chris@10: (j+1/2) k / n)). Chris@10: Chris@10: REDFT01 (DCT-III) Chris@10: ................. Chris@10: Chris@10: An `REDFT01' transform (type-III DCT) in FFTW is defined by: Y[k] = Chris@10: X[0] + 2 (sum for j = 1 to n-1 of X[j] cos(pi j (k+1/2) / n)). In the Chris@10: case of n=1, this reduces to Y[0] = X[0]. Up to a scale factor (see Chris@10: below), this is the inverse of `REDFT10' ("the" DCT), and so the Chris@10: `REDFT01' (DCT-III) is sometimes called the "IDCT". Chris@10: Chris@10: REDFT11 (DCT-IV) Chris@10: ................ Chris@10: Chris@10: An `REDFT11' transform (type-IV DCT) in FFTW is defined by: Y[k] = 2 Chris@10: (sum for j = 0 to n-1 of X[j] cos(pi (j+1/2) (k+1/2) / n)). Chris@10: Chris@10: Inverses and Normalization Chris@10: .......................... 
These definitions correspond directly to the unnormalized DFTs used elsewhere in FFTW (hence the factors of 2 in front of the summations).  The unnormalized inverse of `REDFT00' is `REDFT00', of `REDFT10' is `REDFT01' and vice versa, and of `REDFT11' is `REDFT11'.  Each unnormalized inverse results in the original array multiplied by N, where N is the _logical_ DFT size.  For `REDFT00', N=2(n-1) (note that n=1 is not defined); otherwise, N=2n.

   In defining the discrete cosine transform, some authors also include additional factors of sqrt(2) (or its inverse) multiplying selected inputs and/or outputs.  This is a mostly cosmetic change that makes the transform orthogonal, but sacrifices the direct equivalence to a symmetric DFT.


File: fftw3.info, Node: 1d Real-odd DFTs (DSTs), Next: 1d Discrete Hartley Transforms (DHTs), Prev: 1d Real-even DFTs (DCTs), Up: What FFTW Really Computes

4.8.4 1d Real-odd DFTs (DSTs)
-----------------------------

The Real-odd symmetry DFTs in FFTW are exactly equivalent to the unnormalized forward (and backward) DFTs as defined above, where the input array X of length N is purely real and also has "odd" symmetry.  In this case, the output likewise has odd symmetry and is purely imaginary.

   For the case of `RODFT00', this odd symmetry means that X[j] = -X[N-j], where we take X to be periodic so that X[N] = X[0].  Because of this redundancy, only the first n real numbers starting at j=1 are actually stored (the j=0 element is zero), where N = 2(n+1).

   The proper definition of odd symmetry for `RODFT10', `RODFT01', and `RODFT11' transforms is somewhat more intricate because of the shifts by 1/2 of the input and/or output, although the corresponding boundary conditions are given in *note Real even/odd DFTs (cosine/sine transforms)::.  Because of the odd symmetry, however, the cosine terms in the DFT all cancel and the remaining sine terms are written explicitly below.  This formulation often leads people to call such a transform a "discrete sine transform" (DST), although it is really just a special case of the DFT.

   In each of the definitions below, we transform a real array X of length n to a real array Y of length n:

RODFT00 (DST-I)
...............

An `RODFT00' transform (type-I DST) in FFTW is defined by:

     Y[k] = 2 (sum for j = 0 to n-1 of X[j] sin(pi (j+1)(k+1) / (n+1))).

RODFT10 (DST-II)
................

An `RODFT10' transform (type-II DST) in FFTW is defined by:

     Y[k] = 2 (sum for j = 0 to n-1 of X[j] sin(pi (j+1/2) (k+1) / n)).

RODFT01 (DST-III)
.................

An `RODFT01' transform (type-III DST) in FFTW is defined by:

     Y[k] = (-1)^k X[n-1] + 2 (sum for j = 0 to n-2 of X[j] sin(pi (j+1) (k+1/2) / n)).

In the case of n=1, this reduces to Y[0] = X[0].

RODFT11 (DST-IV)
................

An `RODFT11' transform (type-IV DST) in FFTW is defined by:

     Y[k] = 2 (sum for j = 0 to n-1 of X[j] sin(pi (j+1/2) (k+1/2) / n)).
Inverses and Normalization
..........................

These definitions correspond directly to the unnormalized DFTs used elsewhere in FFTW (hence the factors of 2 in front of the summations).  The unnormalized inverse of `RODFT00' is `RODFT00', of `RODFT10' is `RODFT01' and vice versa, and of `RODFT11' is `RODFT11'.  Each unnormalized inverse results in the original array multiplied by N, where N is the _logical_ DFT size.  For `RODFT00', N=2(n+1); otherwise, N=2n.

   In defining the discrete sine transform, some authors also include additional factors of sqrt(2) (or its inverse) multiplying selected inputs and/or outputs.  This is a mostly cosmetic change that makes the transform orthogonal, but sacrifices the direct equivalence to an antisymmetric DFT.


File: fftw3.info, Node: 1d Discrete Hartley Transforms (DHTs), Next: Multi-dimensional Transforms, Prev: 1d Real-odd DFTs (DSTs), Up: What FFTW Really Computes

4.8.5 1d Discrete Hartley Transforms (DHTs)
-------------------------------------------

The discrete Hartley transform (DHT) of a 1d real array X of size n computes a real array Y of the same size, where:

     Y[k] = sum for j = 0 to (n - 1) of X[j] * [cos(2 pi j k / n) + sin(2 pi j k / n)].

   FFTW computes an unnormalized transform, in that there is no coefficient in front of the summation in the DHT.  In other words, applying the transform twice (the DHT is its own inverse) will multiply the input by n.


File: fftw3.info, Node: Multi-dimensional Transforms, Prev: 1d Discrete Hartley Transforms (DHTs), Up: What FFTW Really Computes

4.8.6 Multi-dimensional Transforms
----------------------------------

The multi-dimensional transforms of FFTW, in general, compute simply the separable product of the given 1d transform along each dimension of the array.  Since each of these transforms is unnormalized, computing the forward followed by the backward/inverse multi-dimensional transform will result in the original array scaled by the product of the normalization factors for each dimension (e.g. the product of the dimension sizes, for a multi-dimensional DFT).

   The definition of FFTW's multi-dimensional DFT of real data (r2c) deserves special attention.  In this case, we logically compute the full multi-dimensional DFT of the input data; since the input data are purely real, the output data have the Hermitian symmetry and therefore only one non-redundant half need be stored.  More specifically, for an n[0] x n[1] x n[2] x ... x n[d-1] multi-dimensional real-input DFT, the full (logical) complex output array Y[k[0], k[1], ..., k[d-1]] has the symmetry:

     Y[k[0], k[1], ..., k[d-1]] = Y[n[0] - k[0], n[1] - k[1], ..., n[d-1] - k[d-1]]*

(where each dimension is periodic).  Because of this symmetry, we only store the k[d-1] = 0...n[d-1]/2 elements of the _last_ dimension (division by 2 is rounded down).  (We could instead have cut any other dimension in half, but the last dimension proved computationally convenient.)
This results in the peculiar array format described in more detail by *note Real-data DFT Array Format::.

   The multi-dimensional c2r transform is simply the unnormalized inverse of the r2c transform; i.e. it is the same as FFTW's complex backward multi-dimensional DFT, operating on a Hermitian input array in the peculiar format mentioned above and outputting a real array (since the DFT output is purely real).

   We should remind the user that the separable product of 1d transforms along each dimension, as computed by FFTW, is not always the same thing as the usual multi-dimensional transform.  A multi-dimensional `R2HC' (or `HC2R') transform is not identical to the multi-dimensional DFT, requiring some post-processing to combine the requisite real and imaginary parts, as was described in *note The Halfcomplex-format DFT::.  Likewise, FFTW's multidimensional `FFTW_DHT' r2r transform is not the same thing as the logical multi-dimensional discrete Hartley transform defined in the literature, as discussed in *note The Discrete Hartley Transform::.


File: fftw3.info, Node: Multi-threaded FFTW, Next: Distributed-memory FFTW with MPI, Prev: FFTW Reference, Up: Top

5 Multi-threaded FFTW
*********************

In this chapter we document the parallel FFTW routines for shared-memory parallel hardware.  These routines, which support parallel one- and multi-dimensional transforms of both real and complex data, are the easiest way to take advantage of multiple processors with FFTW.  They work just like the corresponding uniprocessor transform routines, except that you have an extra initialization routine to call, and there is a routine to set the number of threads to employ.  Any program that uses the uniprocessor FFTW can therefore be trivially modified to use the multi-threaded FFTW.

   A shared-memory machine is one in which all CPUs can directly access the same main memory, and such machines are now common due to the ubiquity of multi-core CPUs.  FFTW's multi-threading support allows you to utilize these additional CPUs transparently from a single program.  However, this does not necessarily translate into performance gains--when multiple threads/CPUs are employed, there is an overhead required for synchronization that may outweigh the computational parallelism.  Therefore, you can only benefit from threads if your problem is sufficiently large.

* Menu:

* Installation and Supported Hardware/Software::
* Usage of Multi-threaded FFTW::
* How Many Threads to Use?::
* Thread safety::


File: fftw3.info, Node: Installation and Supported Hardware/Software, Next: Usage of Multi-threaded FFTW, Prev: Multi-threaded FFTW, Up: Multi-threaded FFTW

5.1 Installation and Supported Hardware/Software
================================================

All of the FFTW threads code is located in the `threads' subdirectory of the FFTW package.
On Unix systems, the FFTW threads libraries and header files can be automatically configured, compiled, and installed along with the uniprocessor FFTW libraries simply by including `--enable-threads' in the flags to the `configure' script (*note Installation on Unix::), or `--enable-openmp' to use OpenMP (http://www.openmp.org) threads.

   The threads routines require your operating system to have some sort of shared-memory threads support.  Specifically, the FFTW threads package works with POSIX threads (available on most Unix variants, from GNU/Linux to MacOS X) and Win32 threads.  OpenMP threads, supported by many common compilers (e.g. gcc), can also be used, and may give better performance on some systems.  (OpenMP threads are also useful if you are employing OpenMP in your own code, in order to minimize conflicts between threading models.)  If you have a shared-memory machine that uses a different threads API, it should be a simple matter of programming to include support for it; see the file `threads/threads.c' for more detail.

   You can compile FFTW with _both_ `--enable-threads' and `--enable-openmp' at the same time, since they install libraries with different names (`fftw3_threads' and `fftw3_omp', as described below).  However, your programs may only link to _one_ of these two libraries at a time.

   Ideally, of course, you should also have multiple processors in order to get any benefit from the threaded transforms.


File: fftw3.info, Node: Usage of Multi-threaded FFTW, Next: How Many Threads to Use?, Prev: Installation and Supported Hardware/Software, Up: Multi-threaded FFTW

5.2 Usage of Multi-threaded FFTW
================================

Here, it is assumed that the reader is already familiar with the usage of the uniprocessor FFTW routines, described elsewhere in this manual.  We only describe what one has to change in order to use the multi-threaded routines.

   First, programs using the parallel complex transforms should be linked with `-lfftw3_threads -lfftw3 -lm' on Unix, or `-lfftw3_omp -lfftw3 -lm' if you compiled with OpenMP.  You will also need to link with whatever library is responsible for threads on your system (e.g. `-lpthread' on GNU/Linux) or include whatever compiler flag enables OpenMP (e.g. `-fopenmp' with gcc).

   Second, before calling _any_ FFTW routines, you should call the function:

     int fftw_init_threads(void);

   This function, which need only be called once, performs any one-time initialization required to use threads on your system.  It returns zero if there was some error (which should not happen under normal circumstances) and a non-zero value otherwise.

   Third, before creating a plan that you want to parallelize, you should call:

     void fftw_plan_with_nthreads(int nthreads);

   The `nthreads' argument indicates the number of threads you want FFTW to use (or actually, the maximum number).
All plans subsequently created with any planner routine will use that many threads.  You can call `fftw_plan_with_nthreads', create some plans, call `fftw_plan_with_nthreads' again with a different argument, and create some more plans for a new number of threads.  Plans already created before a call to `fftw_plan_with_nthreads' are unaffected.  If you pass an `nthreads' argument of `1' (the default), threads are disabled for subsequent plans.

   With OpenMP, to configure FFTW to use all of the currently running OpenMP threads (set by `omp_set_num_threads(nthreads)' or by the `OMP_NUM_THREADS' environment variable), you can do: `fftw_plan_with_nthreads(omp_get_max_threads())'.  (The `omp_' OpenMP functions are declared via `#include <omp.h>'.)

   Given a plan, you then execute it as usual with `fftw_execute(plan)', and the execution will use the number of threads specified when the plan was created.  When done, you destroy it as usual with `fftw_destroy_plan'.  As described in *note Thread safety::, plan _execution_ is thread-safe, but plan creation and destruction are _not_: you should create/destroy plans only from a single thread, but can safely execute multiple plans in parallel.

   There is one additional routine: if you want to get rid of all memory and other resources allocated internally by FFTW, you can call:

     void fftw_cleanup_threads(void);

   which is much like the `fftw_cleanup()' function except that it also gets rid of threads-related data.  You must _not_ execute any previously created plans after calling this function.

   We should also mention one other restriction: if you save wisdom from a program using the multi-threaded FFTW, that wisdom _cannot be used_ by a program using only the single-threaded FFTW (i.e. not calling `fftw_init_threads').  *Note Words of Wisdom-Saving Plans::.


File: fftw3.info, Node: How Many Threads to Use?, Next: Thread safety, Prev: Usage of Multi-threaded FFTW, Up: Multi-threaded FFTW

5.3 How Many Threads to Use?
============================

There is a fair amount of overhead involved in synchronizing threads, so the optimal number of threads to use depends upon the size of the transform as well as on the number of processors you have.

   As a general rule, you don't want to use more threads than you have processors.  (Using more threads will work, but there will be extra overhead with no benefit.)  In fact, if the problem size is too small, you may want to use fewer threads than you have processors.

   You will have to experiment with your system to see what level of parallelization is best for your problem size.  Typically, the problem will have to involve at least a few thousand data points before threads become beneficial.  If you plan with `FFTW_PATIENT', it will automatically disable threads for sizes that don't benefit from parallelization.
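   Putting the steps from the preceding sections together, a minimal multi-threaded usage sketch (an added illustration, not part of the original text; the 1024 by 1024 size and the 4 threads are arbitrary) might look like the following, compiled and linked as described above (e.g. with `-lfftw3_threads -lfftw3 -lm -lpthread' on GNU/Linux):

     #include <fftw3.h>

     int main(void)
     {
          fftw_complex *in, *out;
          fftw_plan p;

          fftw_init_threads();            /* once, before any other FFTW call */
          fftw_plan_with_nthreads(4);     /* subsequent plans use 4 threads   */

          in  = fftw_alloc_complex(1024 * 1024);
          out = fftw_alloc_complex(1024 * 1024);
          p = fftw_plan_dft_2d(1024, 1024, in, out, FFTW_FORWARD, FFTW_MEASURE);

          /* ... fill in[], then: */
          fftw_execute(p);                /* executes with 4 threads */

          fftw_destroy_plan(p);
          fftw_free(in); fftw_free(out);
          fftw_cleanup_threads();         /* release threads-related data */
          return 0;
     }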

File: fftw3.info, Node: Thread safety, Prev: How Many Threads to Use?, Up: Multi-threaded FFTW

5.4 Thread safety
=================

Users writing multi-threaded programs (including OpenMP) must concern themselves with the "thread safety" of the libraries they use--that is, whether it is safe to call routines in parallel from multiple threads.  FFTW can be used in such an environment, but some care must be taken because the planner routines share data (e.g. wisdom and trigonometric tables) between calls and plans.

   The upshot is that the only thread-safe (re-entrant) routine in FFTW is `fftw_execute' (and the new-array variants thereof).  All other routines (e.g. the planner) should only be called from one thread at a time.  So, for example, you can wrap a semaphore lock around any calls to the planner; even more simply, you can just create all of your plans from one thread.  We do not think this should be an important restriction (FFTW is designed for the situation where the only performance-sensitive code is the actual execution of the transform), and the benefits of shared data between plans are great.

   Note also that, since the plan is not modified by `fftw_execute', it is safe to execute the _same plan_ in parallel by multiple threads.  However, since a given plan operates by default on a fixed array, you need to use one of the new-array execute functions (*note New-array Execute Functions::) so that different threads compute the transform of different data.

   (Users should note that these comments only apply to programs using shared-memory threads or OpenMP.  Parallelism using MPI or forked processes involves a separate address space and global variables for each process, and is not susceptible to problems of this sort.)

   If you configured FFTW with the `--enable-debug' or `--enable-debug-malloc' flags (*note Installation on Unix::), then `fftw_execute' is not thread-safe.  These flags are not documented because they are intended only for developing and debugging FFTW, but if you must use `--enable-debug' then you should also specifically pass `--disable-debug-malloc' for `fftw_execute' to be thread-safe.


File: fftw3.info, Node: Distributed-memory FFTW with MPI, Next: Calling FFTW from Modern Fortran, Prev: Multi-threaded FFTW, Up: Top

6 Distributed-memory FFTW with MPI
**********************************

In this chapter we document the parallel FFTW routines for parallel systems supporting the MPI message-passing interface.  Unlike the shared-memory threads described in the previous chapter, MPI allows you to use _distributed-memory_ parallelism, where each CPU has its own separate memory, and which can scale up to clusters of many thousands of processors.  This capability comes at a price, however: each process only stores a _portion_ of the data to be transformed, which means that the data structures and programming interface are quite different from the serial or threads versions of FFTW.
   Distributed-memory parallelism is especially useful when you are transforming arrays so large that they do not fit into the memory of a single processor.  The per-process storage required by FFTW's MPI routines is proportional to the total array size divided by the number of processes.  Conversely, distributed-memory parallelism can easily pose an unacceptably high communications overhead for small problems; the threshold problem size for which parallelism becomes advantageous will depend on the precise problem you are interested in, your hardware, and your MPI implementation.

   A note on terminology: in MPI, you divide the data among a set of "processes" which each run in their own memory address space.  Generally, each process runs on a different physical processor, but this is not required.  A set of processes in MPI is described by an opaque data structure called a "communicator," the most common of which is the predefined communicator `MPI_COMM_WORLD' which refers to _all_ processes.  For more information on these and other concepts common to all MPI programs, we refer the reader to the documentation at the MPI home page (http://www.mcs.anl.gov/research/projects/mpi/).

   We assume in this chapter that the reader is familiar with the usage of the serial (uniprocessor) FFTW, and focus only on the concepts new to the MPI interface.

* Menu:

* FFTW MPI Installation::
* Linking and Initializing MPI FFTW::
* 2d MPI example::
* MPI Data Distribution::
* Multi-dimensional MPI DFTs of Real Data::
* Other Multi-dimensional Real-data MPI Transforms::
* FFTW MPI Transposes::
* FFTW MPI Wisdom::
* Avoiding MPI Deadlocks::
* FFTW MPI Performance Tips::
* Combining MPI and Threads::
* FFTW MPI Reference::
* FFTW MPI Fortran Interface::


File: fftw3.info, Node: FFTW MPI Installation, Next: Linking and Initializing MPI FFTW, Prev: Distributed-memory FFTW with MPI, Up: Distributed-memory FFTW with MPI

6.1 FFTW MPI Installation
=========================

All of the FFTW MPI code is located in the `mpi' subdirectory of the FFTW package.  On Unix systems, the FFTW MPI libraries and header files are automatically configured, compiled, and installed along with the uniprocessor FFTW libraries simply by including `--enable-mpi' in the flags to the `configure' script (*note Installation on Unix::).

   Any implementation of the MPI standard, version 1 or later, should work with FFTW.  The `configure' script will attempt to automatically detect how to compile and link code using your MPI implementation.  In some cases, especially if you have multiple different MPI implementations installed or have an unusual MPI software package, you may need to provide this information explicitly.

   Most commonly, one compiles MPI code by invoking a special compiler command, typically `mpicc' for C code.
The `configure' script knows the most common names for this command, but you can specify the MPI compilation command explicitly by setting the `MPICC' variable, as in `./configure MPICC=mpicc ...'.

   If, instead of a special compiler command, you need to link a certain library, you can specify the link command via the `MPILIBS' variable, as in `./configure MPILIBS=-lmpi ...'.  Note that if your MPI library is installed in a non-standard location (one the compiler does not know about by default), you may also have to specify the location of the library and header files via `LDFLAGS' and `CPPFLAGS' variables, respectively, as in `./configure LDFLAGS=-L/path/to/mpi/libs CPPFLAGS=-I/path/to/mpi/include ...'.


File: fftw3.info, Node: Linking and Initializing MPI FFTW, Next: 2d MPI example, Prev: FFTW MPI Installation, Up: Distributed-memory FFTW with MPI

6.2 Linking and Initializing MPI FFTW
=====================================

Programs using the MPI FFTW routines should be linked with `-lfftw3_mpi -lfftw3 -lm' on Unix in double precision, `-lfftw3f_mpi -lfftw3f -lm' in single precision, and so on (*note Precision::).  You will also need to link with whatever library is responsible for MPI on your system; in most MPI implementations, there is a special compiler alias named `mpicc' to compile and link MPI code.

   Before calling any FFTW routines except possibly `fftw_init_threads' (*note Combining MPI and Threads::), but after calling `MPI_Init', you should call the function:

     void fftw_mpi_init(void);

   If, at the end of your program, you want to get rid of all memory and other resources allocated internally by FFTW, for both the serial and MPI routines, you can call:

     void fftw_mpi_cleanup(void);

which is much like the `fftw_cleanup()' function except that it also gets rid of FFTW's MPI-related data.  You must _not_ execute any previously created plans after calling this function.


File: fftw3.info, Node: 2d MPI example, Next: MPI Data Distribution, Prev: Linking and Initializing MPI FFTW, Up: Distributed-memory FFTW with MPI

6.3 2d MPI example
==================

Before we document the FFTW MPI interface in detail, we begin with a simple example outlining how one would perform a two-dimensional `N0' by `N1' complex DFT.

     #include <fftw3-mpi.h>

     int main(int argc, char **argv)
     {
         const ptrdiff_t N0 = ..., N1 = ...;
         fftw_plan plan;
         fftw_complex *data;
         ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

         MPI_Init(&argc, &argv);
         fftw_mpi_init();

         /* get local data size and allocate */
         alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                              &local_n0, &local_0_start);
         data = fftw_alloc_complex(alloc_local);

         /* create plan for in-place forward DFT */
         plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                     FFTW_FORWARD, FFTW_ESTIMATE);

         /* initialize data to some function my_function(x,y) */
         for (i = 0; i < local_n0; ++i) for (j = 0; j < N1; ++j)
            data[i*N1 + j] = my_function(local_0_start + i, j);

         /* compute transforms, in-place, as many times as desired */
         fftw_execute(plan);

         fftw_destroy_plan(plan);

         MPI_Finalize();
     }

   As can be seen above, the MPI interface follows the same basic style of allocate/plan/execute/destroy as the serial FFTW routines.  All of the MPI-specific routines are prefixed with `fftw_mpi_' instead of `fftw_'.  There are a few important differences, however:

   First, we must call `fftw_mpi_init()' after calling `MPI_Init' (required in all MPI programs) and before calling any other `fftw_mpi_' routine.

   Second, when we create the plan with `fftw_mpi_plan_dft_2d', analogous to `fftw_plan_dft_2d', we pass an additional argument: the communicator, indicating which processes will participate in the transform (here `MPI_COMM_WORLD', indicating all processes).  Whenever you create, execute, or destroy a plan for an MPI transform, you must call the corresponding FFTW routine on _all_ processes in the communicator for that transform.  (That is, these are _collective_ calls.)  Note that the plan for the MPI transform uses the standard `fftw_execute' and `fftw_destroy_plan' routines (on the other hand, there are MPI-specific new-array execute functions documented below).

   Third, all of the FFTW MPI routines take `ptrdiff_t' arguments instead of `int' as for the serial FFTW.  `ptrdiff_t' is a standard C integer type which is (at least) 32 bits wide on a 32-bit machine and 64 bits wide on a 64-bit machine.  This is to make it easy to specify very large parallel transforms on a 64-bit machine.  (You can specify 64-bit transform sizes in the serial FFTW, too, but only by using the `guru64' planner interface.  *Note 64-bit Guru Interface::.)

   Fourth, and most importantly, you don't allocate the entire two-dimensional array on each process.  Instead, you call `fftw_mpi_local_size_2d' to find out what _portion_ of the array resides on each processor, and how much space to allocate.  Here, the portion of the array on each process is a `local_n0' by `N1' slice of the total array, starting at index `local_0_start'.
The total number of `fftw_complex' numbers to allocate is given by the `alloc_local' return value, which _may_ be greater than `local_n0 * N1' (in case some intermediate calculations require additional storage).  The data distribution in FFTW's MPI interface is described in more detail by the next section.

   Given the portion of the array that resides on the local process, it is straightforward to initialize the data (here to a function `my_function') and otherwise manipulate it.  Of course, at the end of the program you may want to output the data somehow, but synchronizing this output is up to you and is beyond the scope of this manual.  (One good way to output a large multi-dimensional distributed array in MPI to a portable binary file is to use the free HDF5 library; see the HDF home page (http://www.hdfgroup.org/).)


File: fftw3.info, Node: MPI Data Distribution, Next: Multi-dimensional MPI DFTs of Real Data, Prev: 2d MPI example, Up: Distributed-memory FFTW with MPI

6.4 MPI Data Distribution
=========================

The most important concept to understand in using FFTW's MPI interface is the data distribution.  With a serial or multithreaded FFT, all of the inputs and outputs are stored as a single contiguous chunk of memory.  With a distributed-memory FFT, the inputs and outputs are broken into disjoint blocks, one per process.

   In particular, FFTW uses a _1d block distribution_ of the data, distributed along the _first dimension_.  For example, if you want to perform a 100 x 200 complex DFT, distributed over 4 processes, each process will get a 25 x 200 slice of the data.  That is, process 0 will get rows 0 through 24, process 1 will get rows 25 through 49, process 2 will get rows 50 through 74, and process 3 will get rows 75 through 99.  If you take the same array but distribute it over 3 processes, then it is not evenly divisible so the different processes will have unequal chunks.  FFTW's default choice in this case is to assign 34 rows to processes 0 and 1, and 32 rows to process 2.

   FFTW provides several `fftw_mpi_local_size' routines that you can call to find out what portion of an array is stored on the current process.  In most cases, you should use the default block sizes picked by FFTW, but it is also possible to specify your own block size.  For example, with a 100 x 200 array on three processes, you can tell FFTW to use a block size of 40, which would assign 40 rows to processes 0 and 1, and 20 rows to process 2.  FFTW's default is to divide the data equally among the processes if possible, and as best it can otherwise.  The rows are always assigned in "rank order," i.e. process 0 gets the first block of rows, then process 1, and so on.  (You can change this by using `MPI_Comm_split' to create a new communicator with re-ordered processes.)  However, you should always call the `fftw_mpi_local_size' routines, if possible, rather than trying to predict FFTW's distribution choices.
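
   For instance, a minimal program along the following lines would print the slice owned by each process for the 100 x 200 case discussed above (this is only an illustrative sketch, not part of the interface; the output format is of course arbitrary):

     #include <fftw3-mpi.h>
     #include <stdio.h>

     int main(int argc, char **argv)
     {
         ptrdiff_t alloc_local, local_n0, local_0_start;
         int rank;

         MPI_Init(&argc, &argv);
         fftw_mpi_init();
         MPI_Comm_rank(MPI_COMM_WORLD, &rank);

         /* ask FFTW which rows of a 100 x 200 complex array this process owns */
         alloc_local = fftw_mpi_local_size_2d(100, 200, MPI_COMM_WORLD,
                                              &local_n0, &local_0_start);
         printf("process %d: rows %ld..%ld, allocate %ld fftw_complex\n",
                rank, (long) local_0_start,
                (long) (local_0_start + local_n0 - 1), (long) alloc_local);

         MPI_Finalize();
     }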

   In particular, it is critical that you allocate the storage size that is returned by `fftw_mpi_local_size', which is _not_ necessarily the size of the local slice of the array.  The reason is that intermediate steps of FFTW's algorithms involve transposing the array and redistributing the data, so at these intermediate steps FFTW may require more local storage space (albeit always proportional to the total size divided by the number of processes).  The `fftw_mpi_local_size' functions know how much storage is required for these intermediate steps and tell you the correct amount to allocate.

* Menu:

* Basic and advanced distribution interfaces::
* Load balancing::
* Transposed distributions::
* One-dimensional distributions::


File: fftw3.info, Node: Basic and advanced distribution interfaces, Next: Load balancing, Prev: MPI Data Distribution, Up: MPI Data Distribution

6.4.1 Basic and advanced distribution interfaces
------------------------------------------------

As with the planner interface, the `fftw_mpi_local_size' distribution interface is broken into basic and advanced (`_many') interfaces, where the latter allows you to specify the block size manually and also to request block sizes when computing multiple transforms simultaneously.  These functions are documented more exhaustively by the FFTW MPI Reference, but we summarize the basic ideas here using a couple of two-dimensional examples.

   For the 100 x 200 complex-DFT example, above, we would find the distribution by calling the following function in the basic interface:

     ptrdiff_t fftw_mpi_local_size_2d(ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
                                      ptrdiff_t *local_n0, ptrdiff_t *local_0_start);

   Given the total size of the data to be transformed (here, `n0 = 100' and `n1 = 200') and an MPI communicator (`comm'), this function provides three numbers.

   First, it describes the shape of the local data: the current process should store a `local_n0' by `n1' slice of the overall dataset, in row-major order (`n1' dimension contiguous), starting at index `local_0_start'.  That is, if the total dataset is viewed as a `n0' by `n1' matrix, the current process should store the rows `local_0_start' to `local_0_start+local_n0-1'.  Obviously, if you are running with only a single MPI process, that process will store the entire array: `local_0_start' will be zero and `local_n0' will be `n0'.  *Note Row-major Format::.

   Second, the return value is the total number of data elements (e.g., complex numbers for a complex DFT) that should be allocated for the input and output arrays on the current process (ideally with `fftw_malloc' or an `fftw_alloc' function, to ensure optimal alignment).  It might seem that this should always be equal to `local_n0 * n1', but this is _not_ the case.  FFTW's distributed FFT algorithms require data redistributions at intermediate stages of the transform, and in some circumstances this may require slightly larger local storage.
This is discussed in more detail below, under *note Load balancing::.

   The advanced-interface `local_size' function for multidimensional transforms returns the same three things (`local_n0', `local_0_start', and the total number of elements to allocate), but takes more inputs:

     ptrdiff_t fftw_mpi_local_size_many(int rnk, const ptrdiff_t *n,
                                        ptrdiff_t howmany,
                                        ptrdiff_t block0,
                                        MPI_Comm comm,
                                        ptrdiff_t *local_n0,
                                        ptrdiff_t *local_0_start);

   The two-dimensional case above corresponds to `rnk = 2' and an array `n' of length 2 with `n[0] = n0' and `n[1] = n1'.  This routine is for any `rnk > 1'; one-dimensional transforms have their own interface because they work slightly differently, as discussed below.

   First, the advanced interface allows you to perform multiple transforms at once, of interleaved data, as specified by the `howmany' parameter.  (`howmany' is 1 for a single transform.)

   Second, here you can specify your desired block size in the `n0' dimension, `block0'.  To use FFTW's default block size, pass `FFTW_MPI_DEFAULT_BLOCK' (0) for `block0'.  Otherwise, on `P' processes, FFTW will return `local_n0' equal to `block0' on the first `P / block0' processes (rounded down), return `local_n0' equal to `n0 - block0 * (P / block0)' on the next process, and `local_n0' equal to zero on any remaining processes.  In general, we recommend using the default block size (which corresponds to `n0 / P', rounded up).

   For example, suppose you have `P = 4' processes and `n0 = 21'.  The default will be a block size of `6', which will give `local_n0 = 6' on the first three processes and `local_n0 = 3' on the last process.  Instead, however, you could specify `block0 = 5' if you wanted, which would give `local_n0 = 5' on processes 0 to 2, `local_n0 = 6' on process 3.  (This choice, while it may look superficially more "balanced," has the same critical path as FFTW's default but requires more communications.)


File: fftw3.info, Node: Load balancing, Next: Transposed distributions, Prev: Basic and advanced distribution interfaces, Up: MPI Data Distribution

6.4.2 Load balancing
--------------------

Ideally, when you parallelize a transform over some P processes, each process should end up with work that takes equal time.  Otherwise, all of the processes end up waiting on whichever process is slowest.  This goal is known as "load balancing."  In this section, we describe the circumstances under which FFTW is able to load-balance well, and in particular how you should choose your transform size in order to load balance.

   Load balancing is especially difficult when you are parallelizing over heterogeneous machines; for example, if one of your processors is an old 486 and another is a Pentium IV, obviously you should give the Pentium more work to do than the 486 since the latter is much slower.

   FFTW does not deal with this problem, however--it assumes that your processes run on hardware of comparable speed, and that the goal is therefore to divide the problem as equally as possible.

   For a multi-dimensional complex DFT, FFTW can divide the problem equally among the processes if: (i) the _first_ dimension `n0' is divisible by P; and (ii), the _product_ of the subsequent dimensions is divisible by P.  (For the advanced interface, where you can specify multiple simultaneous transforms via some "vector" length `howmany', a factor of `howmany' is included in the product of the subsequent dimensions.)

   For a one-dimensional complex DFT, the length `N' of the data should be divisible by P _squared_ to be able to divide the problem equally among the processes.


File: fftw3.info, Node: Transposed distributions, Next: One-dimensional distributions, Prev: Load balancing, Up: MPI Data Distribution

6.4.3 Transposed distributions
------------------------------

Internally, FFTW's MPI transform algorithms work by first computing transforms of the data local to each process, then by globally _transposing_ the data in some fashion to redistribute the data among the processes, transforming the new data local to each process, and transposing back.  For example, a two-dimensional `n0' by `n1' array, distributed across the `n0' dimension, is transformed by: (i) transforming the `n1' dimension, which is local to each process; (ii) transposing to an `n1' by `n0' array, distributed across the `n1' dimension; (iii) transforming the `n0' dimension, which is now local to each process; (iv) transposing back.

   However, in many applications it is acceptable to compute a multidimensional DFT whose results are produced in transposed order (e.g., `n1' by `n0' in two dimensions).  This provides a significant performance advantage, because it means that the final transposition step can be omitted.  FFTW supports this optimization, which you specify by passing the flag `FFTW_MPI_TRANSPOSED_OUT' to the planner routines.  To compute the inverse transform of transposed output, you specify `FFTW_MPI_TRANSPOSED_IN' to tell it that the input is transposed.  In this section, we explain how to interpret the output format of such a transform.

   Suppose you are transforming multi-dimensional data with (at least two) dimensions n[0] x n[1] x n[2] x ... x n[d-1].  As always, it is distributed along the first dimension n[0].  Now, if we compute its DFT with the `FFTW_MPI_TRANSPOSED_OUT' flag, the resulting output data are stored with the first _two_ dimensions transposed: n[1] x n[0] x n[2] x ... x n[d-1], distributed along the n[1] dimension.  Conversely, if we take the n[1] x n[0] x n[2] x ... x n[d-1] data and transform it with the `FFTW_MPI_TRANSPOSED_IN' flag, then the format goes back to the original n[0] x n[1] x n[2] x ... x n[d-1] array.

   There are two ways to find the portion of the transposed array that resides on the current process.
First, you can simply call the appropriate `local_size' function, passing n[1] x n[0] x n[2] x ... x n[d-1] (the transposed dimensions).  This would mean calling the `local_size' function twice, once for the transposed and once for the non-transposed dimensions.  Alternatively, you can call one of the `local_size_transposed' functions, which returns both the non-transposed and transposed data distribution from a single call.  For example, for a 3d transform with transposed output (or input), you might call:

     ptrdiff_t fftw_mpi_local_size_3d_transposed(
                     ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2, MPI_Comm comm,
                     ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
                     ptrdiff_t *local_n1, ptrdiff_t *local_1_start);

   Here, `local_n0' and `local_0_start' give the size and starting index of the `n0' dimension for the _non_-transposed data, as in the previous sections.  For _transposed_ data (e.g. the output for `FFTW_MPI_TRANSPOSED_OUT'), `local_n1' and `local_1_start' give the size and starting index of the `n1' dimension, which is the first dimension of the transposed data (`n1' by `n0' by `n2').

   (Note that `FFTW_MPI_TRANSPOSED_IN' is completely equivalent to performing `FFTW_MPI_TRANSPOSED_OUT' and passing the first two dimensions to the planner in reverse order, or vice versa.  If you pass _both_ the `FFTW_MPI_TRANSPOSED_IN' and `FFTW_MPI_TRANSPOSED_OUT' flags, it is equivalent to swapping the first two dimensions passed to the planner and passing _neither_ flag.)


File: fftw3.info, Node: One-dimensional distributions, Prev: Transposed distributions, Up: MPI Data Distribution

6.4.4 One-dimensional distributions
-----------------------------------

For one-dimensional distributed DFTs using FFTW, matters are slightly more complicated because the data distribution is more closely tied to how the algorithm works.  In particular, you can no longer pass an arbitrary block size and must accept FFTW's default; also, the block sizes may be different for input and output.  Also, the data distribution depends on the flags and transform direction, in order for forward and backward transforms to work correctly.

     ptrdiff_t fftw_mpi_local_size_1d(ptrdiff_t n0, MPI_Comm comm,
                     int sign, unsigned flags,
                     ptrdiff_t *local_ni, ptrdiff_t *local_i_start,
                     ptrdiff_t *local_no, ptrdiff_t *local_o_start);

   This function computes the data distribution for a 1d transform of size `n0' with the given transform `sign' and `flags'.  Both input and output data use block distributions.  The input on the current process will consist of `local_ni' numbers starting at index `local_i_start'; e.g. if only a single process is used, then `local_ni' will be `n0' and `local_i_start' will be `0'.  Similarly for the output, with `local_no' numbers starting at index `local_o_start'.  The return value of `fftw_mpi_local_size_1d' will be the total number of elements to allocate on the current process (which might be slightly larger than the local size due to intermediate steps in the algorithm).
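
   For instance, assuming `MPI_Init' and `fftw_mpi_init' have already been called, a sketch of the allocation for a forward 1d transform of (arbitrarily chosen) size 1024 might look like:

     ptrdiff_t alloc_local, local_ni, local_i_start, local_no, local_o_start;
     fftw_complex *data;

     /* the sign and flags here must match those later passed to the planner */
     alloc_local = fftw_mpi_local_size_1d(1024, MPI_COMM_WORLD,
                                          FFTW_FORWARD, FFTW_ESTIMATE,
                                          &local_ni, &local_i_start,
                                          &local_no, &local_o_start);
     data = fftw_alloc_complex(alloc_local);
     /* the input occupies global indices local_i_start .. local_i_start + local_ni - 1;
        after execution, the output occupies local_o_start .. local_o_start + local_no - 1 */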

   As mentioned above (*note Load balancing::), the data will be divided equally among the processes if `n0' is divisible by the _square_ of the number of processes.  In this case, `local_ni' will equal `local_no'.  Otherwise, they may be different.

   For some applications, such as convolutions, the order of the output data is irrelevant.  In this case, performance can be improved by specifying that the output data be stored in an FFTW-defined "scrambled" format.  (In particular, this is the analogue of transposed output in the multidimensional case: scrambled output saves a communications step.)  If you pass `FFTW_MPI_SCRAMBLED_OUT' in the flags, then the output is stored in this (undocumented) scrambled order.  Conversely, to perform the inverse transform of data in scrambled order, pass the `FFTW_MPI_SCRAMBLED_IN' flag.

   In MPI FFTW, only composite sizes `n0' can be parallelized; we have not yet implemented a parallel algorithm for large prime sizes.


File: fftw3.info, Node: Multi-dimensional MPI DFTs of Real Data, Next: Other Multi-dimensional Real-data MPI Transforms, Prev: MPI Data Distribution, Up: Distributed-memory FFTW with MPI

6.5 Multi-dimensional MPI DFTs of Real Data
===========================================

FFTW's MPI interface also supports multi-dimensional DFTs of real data, similar to the serial r2c and c2r interfaces.  (Parallel one-dimensional real-data DFTs are not currently supported; you must use a complex transform and set the imaginary parts of the inputs to zero.)

   The key points to understand for r2c and c2r MPI transforms (compared to the MPI complex DFTs or the serial r2c/c2r transforms), are:

   * Just as for serial transforms, r2c/c2r DFTs transform n[0] x n[1] x n[2] x ... x n[d-1] real data to/from n[0] x n[1] x n[2] x ... x (n[d-1]/2 + 1) complex data: the last dimension of the complex data is cut in half (rounded down), plus one.  As for the serial transforms, the sizes you pass to the `plan_dft_r2c' and `plan_dft_c2r' are the n[0] x n[1] x n[2] x ... x n[d-1] dimensions of the real data.

   * Although the real data is _conceptually_ n[0] x n[1] x n[2] x ... x n[d-1], it is _physically_ stored as an n[0] x n[1] x n[2] x ... x [2 (n[d-1]/2 + 1)] array, where the last dimension has been _padded_ to make it the same size as the complex output.  This is much like the in-place serial r2c/c2r interface (*note Multi-Dimensional DFTs of Real Data::), except that in MPI the padding is required even for out-of-place data.  The extra padding numbers are ignored by FFTW (they are _not_ like zero-padding the transform to a larger size); they are only used to determine the data layout.

   * The data distribution in MPI for _both_ the real and complex data is determined by the shape of the _complex_ data.  That is, you call the appropriate `local size' function for the n[0] x n[1] x n[2] x ...
x (n[d-1]/2 + 1) complex data, and then use the _same_ distribution for the real data except that the last complex dimension is replaced by a (padded) real dimension of twice the length.

   For example, suppose we are performing an out-of-place r2c transform of L x M x N real data [padded to L x M x 2(N/2+1)], resulting in L x M x N/2+1 complex data.  Similar to the example in *note 2d MPI example::, we might do something like:

     #include <fftw3-mpi.h>

     int main(int argc, char **argv)
     {
         const ptrdiff_t L = ..., M = ..., N = ...;
         fftw_plan plan;
         double *rin;
         fftw_complex *cout;
         ptrdiff_t alloc_local, local_n0, local_0_start, i, j, k;

         MPI_Init(&argc, &argv);
         fftw_mpi_init();

         /* get local data size and allocate */
         alloc_local = fftw_mpi_local_size_3d(L, M, N/2+1, MPI_COMM_WORLD,
                                              &local_n0, &local_0_start);
         rin = fftw_alloc_real(2 * alloc_local);
         cout = fftw_alloc_complex(alloc_local);

         /* create plan for out-of-place r2c DFT */
         plan = fftw_mpi_plan_dft_r2c_3d(L, M, N, rin, cout, MPI_COMM_WORLD,
                                         FFTW_MEASURE);

         /* initialize rin to some function my_func(x,y,z) */
         for (i = 0; i < local_n0; ++i)
            for (j = 0; j < M; ++j)
              for (k = 0; k < N; ++k)
                rin[(i*M + j) * (2*(N/2+1)) + k] = my_func(local_0_start+i, j, k);

         /* compute transforms as many times as desired */
         fftw_execute(plan);

         fftw_destroy_plan(plan);

         MPI_Finalize();
     }

   Note that we allocated `rin' using `fftw_alloc_real' with an argument of `2 * alloc_local': since `alloc_local' is the number of _complex_ values to allocate, the number of _real_ values is twice as many.  The `rin' array is then local_n0 x M x 2(N/2+1) in row-major order, so its `(i,j,k)' element is at the index `(i*M + j) * (2*(N/2+1)) + k' (*note Multi-dimensional Array Format::).

   As for the complex transforms, improved performance can be obtained by specifying that the output is the transpose of the input or vice versa (*note Transposed distributions::).  In our L x M x N r2c example, including `FFTW_MPI_TRANSPOSED_OUT' in the flags means that the input would be a padded L x M x 2(N/2+1) real array distributed over the `L' dimension, while the output would be a M x L x N/2+1 complex array distributed over the `M' dimension.  To perform the inverse c2r transform with the same data distributions, you would use the `FFTW_MPI_TRANSPOSED_IN' flag.


File: fftw3.info, Node: Other Multi-dimensional Real-data MPI Transforms, Next: FFTW MPI Transposes, Prev: Multi-dimensional MPI DFTs of Real Data, Up: Distributed-memory FFTW with MPI

6.6 Other Multi-dimensional Real-data MPI Transforms
====================================================

FFTW's MPI interface also supports multi-dimensional `r2r' transforms of all kinds supported by the serial interface (e.g. discrete cosine and sine transforms, discrete Hartley transforms, etc.).
Only multi-dimensional `r2r' transforms, not one-dimensional transforms, are currently parallelized.

   These are used much like the multidimensional complex DFTs discussed above, except that the data is real rather than complex, and one needs to pass an r2r transform kind (`fftw_r2r_kind') for each dimension as in the serial FFTW (*note More DFTs of Real Data::).

   For example, one might perform a two-dimensional L x M transform that is an REDFT10 (DCT-II) in the first dimension and an RODFT10 (DST-II) in the second dimension with code like:

     const ptrdiff_t L = ..., M = ...;
     fftw_plan plan;
     double *data;
     ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

     /* get local data size and allocate */
     alloc_local = fftw_mpi_local_size_2d(L, M, MPI_COMM_WORLD,
                                          &local_n0, &local_0_start);
     data = fftw_alloc_real(alloc_local);

     /* create plan for in-place REDFT10 x RODFT10 */
     plan = fftw_mpi_plan_r2r_2d(L, M, data, data, MPI_COMM_WORLD,
                                 FFTW_REDFT10, FFTW_RODFT10, FFTW_MEASURE);

     /* initialize data to some function my_function(x,y) */
     for (i = 0; i < local_n0; ++i) for (j = 0; j < M; ++j)
        data[i*M + j] = my_function(local_0_start + i, j);

     /* compute transforms, in-place, as many times as desired */
     fftw_execute(plan);

     fftw_destroy_plan(plan);

   Notice that we use the same `local_size' functions as we did for complex data, only now we interpret the sizes in terms of real rather than complex values, and correspondingly use `fftw_alloc_real'.


File: fftw3.info, Node: FFTW MPI Transposes, Next: FFTW MPI Wisdom, Prev: Other Multi-dimensional Real-data MPI Transforms, Up: Distributed-memory FFTW with MPI

6.7 FFTW MPI Transposes
=======================

FFTW's MPI Fourier transforms rely on one or more _global transposition_ steps for their communications.  For example, the multidimensional transforms work by transforming along some dimensions, then transposing to make the first dimension local and transforming that, then transposing back.  Because global transposition of a block-distributed matrix has many other potential uses besides FFTs, FFTW's transpose routines can be called directly, as documented in this section.

* Menu:

* Basic distributed-transpose interface::
* Advanced distributed-transpose interface::
* An improved replacement for MPI_Alltoall::


File: fftw3.info, Node: Basic distributed-transpose interface, Next: Advanced distributed-transpose interface, Prev: FFTW MPI Transposes, Up: FFTW MPI Transposes

6.7.1 Basic distributed-transpose interface
-------------------------------------------

In particular, suppose that we have an `n0' by `n1' array in row-major order, block-distributed across the `n0' dimension.
To transpose this into an `n1' by `n0' array block-distributed across the `n1' dimension, we would create a plan by calling the following function:

     fftw_plan fftw_mpi_plan_transpose(ptrdiff_t n0, ptrdiff_t n1,
                                       double *in, double *out,
                                       MPI_Comm comm, unsigned flags);

   The input and output arrays (`in' and `out') can be the same.  The transpose is actually executed by calling `fftw_execute' on the plan, as usual.

   The `flags' are the usual FFTW planner flags, but support two additional flags: `FFTW_MPI_TRANSPOSED_OUT' and/or `FFTW_MPI_TRANSPOSED_IN'.  What these flags indicate, for transpose plans, is that the output and/or input, respectively, are _locally_ transposed.  That is, on each process input data is normally stored as a `local_n0' by `n1' array in row-major order, but for an `FFTW_MPI_TRANSPOSED_IN' plan the input data is stored as `n1' by `local_n0' in row-major order.  Similarly, `FFTW_MPI_TRANSPOSED_OUT' means that the output is `n0' by `local_n1' instead of `local_n1' by `n0'.

   To determine the local size of the array on each process before and after the transpose, as well as the amount of storage that must be allocated, one should call `fftw_mpi_local_size_2d_transposed', just as for a 2d DFT as described in the previous section:

     ptrdiff_t fftw_mpi_local_size_2d_transposed
                     (ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
                      ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
                      ptrdiff_t *local_n1, ptrdiff_t *local_1_start);

   Again, the return value is the local storage to allocate, which in this case is the number of _real_ (`double') values rather than complex numbers as in the previous examples.


File: fftw3.info, Node: Advanced distributed-transpose interface, Next: An improved replacement for MPI_Alltoall, Prev: Basic distributed-transpose interface, Up: FFTW MPI Transposes

6.7.2 Advanced distributed-transpose interface
----------------------------------------------

The above routines are for a transpose of a matrix of numbers (of type `double'), using FFTW's default block sizes.  More generally, one can perform transposes of _tuples_ of numbers, with user-specified block sizes for the input and output:

     fftw_plan fftw_mpi_plan_many_transpose
                     (ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t howmany,
                      ptrdiff_t block0, ptrdiff_t block1,
                      double *in, double *out, MPI_Comm comm, unsigned flags);

   In this case, one is transposing an `n0' by `n1' matrix of `howmany'-tuples (e.g. `howmany = 2' for complex numbers).  The input is distributed along the `n0' dimension with block size `block0', and the `n1' by `n0' output is distributed along the `n1' dimension with block size `block1'.  If `FFTW_MPI_DEFAULT_BLOCK' (0) is passed for a block size then FFTW uses its default block size.  To get the local size of the data on each process, you should then call `fftw_mpi_local_size_many_transposed'.
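
   For instance, assuming `MPI_Init' and `fftw_mpi_init' have already been called, an in-place transpose of an N0 x N1 matrix of complex values, treated as tuples of `howmany = 2' doubles and using the default block sizes, might be sketched as follows (the sizes here are arbitrary examples):

     const ptrdiff_t N0 = 128, N1 = 256;   /* arbitrary example sizes */
     const ptrdiff_t n[2] = {N0, N1};
     ptrdiff_t alloc_local, local_n0, local_0_start, local_n1, local_1_start;
     double *data;
     fftw_plan plan;

     /* local_n0 rows before the transpose, local_n1 rows after it */
     alloc_local = fftw_mpi_local_size_many_transposed(
                        2, n, 2, FFTW_MPI_DEFAULT_BLOCK, FFTW_MPI_DEFAULT_BLOCK,
                        MPI_COMM_WORLD, &local_n0, &local_0_start,
                        &local_n1, &local_1_start);
     data = fftw_alloc_real(alloc_local);

     plan = fftw_mpi_plan_many_transpose(N0, N1, 2,
                        FFTW_MPI_DEFAULT_BLOCK, FFTW_MPI_DEFAULT_BLOCK,
                        data, data, MPI_COMM_WORLD, FFTW_MEASURE);

     /* ... fill the local_n0 x N1 block of 2-tuples starting at row local_0_start ... */
     fftw_execute(plan);
     /* data now holds a local_n1 x N0 block of 2-tuples starting at row local_1_start */

     fftw_destroy_plan(plan);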


File: fftw3.info, Node: An improved replacement for MPI_Alltoall, Prev: Advanced distributed-transpose interface, Up: FFTW MPI Transposes

6.7.3 An improved replacement for MPI_Alltoall
----------------------------------------------

We close this section by noting that FFTW's MPI transpose routines can be thought of as a generalization of the `MPI_Alltoall' function (albeit only for floating-point types), and in some circumstances can function as an improved replacement.

   `MPI_Alltoall' is defined by the MPI standard as:

     int MPI_Alltoall(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                      void *recvbuf, int recvcnt, MPI_Datatype recvtype,
                      MPI_Comm comm);

   In particular, for `double*' arrays `in' and `out', consider the call:

     MPI_Alltoall(in, howmany, MPI_DOUBLE, out, howmany, MPI_DOUBLE, comm);

   This is completely equivalent to:

     MPI_Comm_size(comm, &P);
     plan = fftw_mpi_plan_many_transpose(P, P, howmany, 1, 1, in, out, comm, FFTW_ESTIMATE);
     fftw_execute(plan);
     fftw_destroy_plan(plan);

   That is, computing a P x P transpose on `P' processes, with a block size of 1, is just a standard all-to-all communication.

   However, using the FFTW routine instead of `MPI_Alltoall' may have certain advantages.  First of all, FFTW's routine can operate in-place (`in == out') whereas `MPI_Alltoall' can only operate out-of-place.

   Second, even for out-of-place plans, FFTW's routine may be faster, especially if you need to perform the all-to-all communication many times and can afford to use `FFTW_MEASURE' or `FFTW_PATIENT'.  It should certainly be no slower, not including the time to create the plan, since one of the possible algorithms that FFTW uses for an out-of-place transpose _is_ simply to call `MPI_Alltoall'.  However, FFTW also considers several other possible algorithms that, depending on your MPI implementation and your hardware, may be faster.


File: fftw3.info, Node: FFTW MPI Wisdom, Next: Avoiding MPI Deadlocks, Prev: FFTW MPI Transposes, Up: Distributed-memory FFTW with MPI

6.8 FFTW MPI Wisdom
===================

FFTW's "wisdom" facility (*note Words of Wisdom-Saving Plans::) can be used to save MPI plans as well as to save uniprocessor plans.  However, for MPI there are several unavoidable complications.

   First, the MPI standard does not guarantee that every process can perform file I/O (at least, not using C stdio routines)--in general, we may only assume that process 0 is capable of I/O.(1)  So, if we want to export the wisdom from a single process to a file, we must first export the wisdom to a string, then send it to process 0, then write it to a file.

   Second, in principle we may want to have separate wisdom for every process, since in general the processes may run on different hardware even for a single MPI program.
However, in practice FFTW's MPI code is designed for the case of homogeneous hardware (*note Load balancing::), and in this case it is convenient to use the same wisdom for every process.  Thus, we need a mechanism to synchronize the wisdom.

   To address both of these problems, FFTW provides the following two functions:

     void fftw_mpi_broadcast_wisdom(MPI_Comm comm);
     void fftw_mpi_gather_wisdom(MPI_Comm comm);

   Given a communicator `comm', `fftw_mpi_broadcast_wisdom' will broadcast the wisdom from process 0 to all other processes.  Conversely, `fftw_mpi_gather_wisdom' will collect wisdom from all processes onto process 0.  (If the plans created for the same problem by different processes are not the same, `fftw_mpi_gather_wisdom' will arbitrarily choose one of the plans.)  Both of these functions may result in suboptimal plans for different processes if the processes are running on non-identical hardware.  Both of these functions are _collective_ calls, which means that they must be executed by all processes in the communicator.

   So, for example, a typical code snippet to import wisdom from a file and use it on all processes would be:

     {
         int rank;

         fftw_mpi_init();
         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
         if (rank == 0) fftw_import_wisdom_from_filename("mywisdom");
         fftw_mpi_broadcast_wisdom(MPI_COMM_WORLD);
     }

   (Note that we must call `fftw_mpi_init' before importing any wisdom that might contain MPI plans.)  Similarly, a typical code snippet to export wisdom from all processes to a file is:

     {
         int rank;

         fftw_mpi_gather_wisdom(MPI_COMM_WORLD);
         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
         if (rank == 0) fftw_export_wisdom_to_filename("mywisdom");
     }

   ---------- Footnotes ----------

   (1) In fact, even this assumption is not technically guaranteed by the standard, although it seems to be universal in actual MPI implementations and is widely assumed by MPI-using software.  Technically, you need to query the `MPI_IO' attribute of `MPI_COMM_WORLD' with `MPI_Attr_get'.  If this attribute is `MPI_PROC_NULL', no I/O is possible.  If it is `MPI_ANY_SOURCE', any process can perform I/O.  Otherwise, it is the rank of a process that can perform I/O ... but since it is not guaranteed to yield the _same_ rank on all processes, you have to do an `MPI_Allreduce' of some kind if you want all processes to agree about which is going to do I/O.  And even then, the standard only guarantees that this process can perform output, but not input.  See e.g. `Parallel Programming with MPI' by P. S. Pacheco, section 8.1.3.  Needless to say, in our experience virtually no MPI programmers worry about this.


File: fftw3.info, Node: Avoiding MPI Deadlocks, Next: FFTW MPI Performance Tips, Prev: FFTW MPI Wisdom, Up: Distributed-memory FFTW with MPI

6.9 Avoiding MPI Deadlocks
==========================

An MPI program can _deadlock_ if one process is waiting for a message from another process that never gets sent.  To avoid deadlocks when using FFTW's MPI routines, it is important to know which functions are _collective_: that is, which functions must _always_ be called in the _same order_ from _every_ process in a given communicator.  (For example, `MPI_Barrier' is the canonical example of a collective function in the MPI standard.)

   The functions in FFTW that are _always_ collective are: every function beginning with `fftw_mpi_plan', as well as `fftw_mpi_broadcast_wisdom' and `fftw_mpi_gather_wisdom'.  Also, the following functions from the ordinary FFTW interface are collective when they are applied to a plan created by an `fftw_mpi_plan' function: `fftw_execute', `fftw_destroy_plan', and `fftw_flops'.


File: fftw3.info, Node: FFTW MPI Performance Tips, Next: Combining MPI and Threads, Prev: Avoiding MPI Deadlocks, Up: Distributed-memory FFTW with MPI

6.10 FFTW MPI Performance Tips
==============================

In this section, we collect a few tips on getting the best performance out of FFTW's MPI transforms.

   First, because of the 1d block distribution, FFTW's parallelization is currently limited by the size of the first dimension.  (Multidimensional block distributions may be supported by a future version.)  More generally, you should ideally arrange the dimensions so that FFTW can divide them equally among the processes.  *Note Load balancing::.

   Second, if it is not too inconvenient, you should consider working with transposed output for multidimensional plans, as this saves a considerable amount of communications.  *Note Transposed distributions::.

   Third, the fastest choices are generally either an in-place transform or an out-of-place transform with the `FFTW_DESTROY_INPUT' flag (which allows the input array to be used as scratch space).  In-place is especially beneficial if the amount of data per process is large.

   Fourth, if you have multiple arrays to transform at once, rather than calling FFTW's MPI transforms several times it usually seems to be faster to interleave the data and use the advanced interface.  (This groups the communications together instead of requiring separate messages for each transform.)


File: fftw3.info, Node: Combining MPI and Threads, Next: FFTW MPI Reference, Prev: FFTW MPI Performance Tips, Up: Distributed-memory FFTW with MPI

6.11 Combining MPI and Threads
==============================

In certain cases, it may be advantageous to combine MPI (distributed-memory) and threads (shared-memory) parallelization.  FFTW supports this, with certain caveats.
For example, if you have a cluster of 4-processor shared-memory nodes, you may want to use threads within the nodes and MPI between the nodes, instead of MPI for all parallelization.

   In particular, it is possible to seamlessly combine the MPI FFTW routines with the multi-threaded FFTW routines (*note Multi-threaded FFTW::).  However, some care must be taken in the initialization code, which should look something like this:

     int threads_ok;

     int main(int argc, char **argv)
     {
         int provided;
         MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
         threads_ok = provided >= MPI_THREAD_FUNNELED;

         if (threads_ok) threads_ok = fftw_init_threads();
         fftw_mpi_init();

         ...
         if (threads_ok) fftw_plan_with_nthreads(...);
         ...

         MPI_Finalize();
     }

   First, note that instead of calling `MPI_Init', you should call `MPI_Init_thread', which is the initialization routine defined by the MPI-2 standard to indicate to MPI that your program will be multithreaded.  We pass `MPI_THREAD_FUNNELED', which indicates that we will only call MPI routines from the main thread.  (FFTW will launch additional threads internally, but the extra threads will not call MPI code.)  (You may also pass `MPI_THREAD_SERIALIZED' or `MPI_THREAD_MULTIPLE', which requests additional multithreading support from the MPI implementation, but this is not required by FFTW.)  The `provided' parameter returns what level of threads support is actually supported by your MPI implementation; this _must_ be at least `MPI_THREAD_FUNNELED' if you want to call the FFTW threads routines, so we define a global variable `threads_ok' to record this.  You should only call `fftw_init_threads' or `fftw_plan_with_nthreads' if `threads_ok' is true.  For more information on thread safety in MPI, see the MPI and Threads (http://www.mpi-forum.org/docs/mpi-20-html/node162.htm) section of the MPI-2 standard.

   Second, we must call `fftw_init_threads' _before_ `fftw_mpi_init'.  This is critical for technical reasons having to do with how FFTW initializes its list of algorithms.

   Then, if you call `fftw_plan_with_nthreads(N)', _every_ MPI process will launch (up to) `N' threads to parallelize its transforms.

   For example, in the hypothetical cluster of 4-processor nodes, you might wish to launch only a single MPI process per node, and then call `fftw_plan_with_nthreads(4)' on each process to use all processors in the nodes.

   This may or may not be faster than simply using as many MPI processes as you have processors, however.  On the one hand, using threads within a node eliminates the need for explicit message passing within the node.  On the other hand, FFTW's transpose routines are not multi-threaded, and this means that the communications that do take place will not benefit from parallelization within the node.
Moreover, many MPI implementations already have optimizations to exploit shared memory when it is available, so adding the multithreaded FFTW on top of this may be superfluous.


File: fftw3.info, Node: FFTW MPI Reference, Next: FFTW MPI Fortran Interface, Prev: Combining MPI and Threads, Up: Distributed-memory FFTW with MPI

6.12 FFTW MPI Reference
=======================

This section provides a complete reference to all FFTW MPI functions, datatypes, and constants.  See also *note FFTW Reference:: for information on functions and types in common with the serial interface.

* Menu:

* MPI Files and Data Types::
* MPI Initialization::
* Using MPI Plans::
* MPI Data Distribution Functions::
* MPI Plan Creation::
* MPI Wisdom Communication::


File: fftw3.info, Node: MPI Files and Data Types, Next: MPI Initialization, Prev: FFTW MPI Reference, Up: FFTW MPI Reference

6.12.1 MPI Files and Data Types
-------------------------------

All programs using FFTW's MPI support should include its header file:

     #include <fftw3-mpi.h>

   Note that this header file includes the serial-FFTW `fftw3.h' header file, and also the `mpi.h' header file for MPI, so you need not include those files separately.

   You must also link to _both_ the FFTW MPI library and to the serial FFTW library.  On Unix, this means adding `-lfftw3_mpi -lfftw3 -lm' at the end of the link command.

   Different precisions are handled as in the serial interface: *Note Precision::.  That is, `fftw_' functions become `fftwf_' (in single precision) etcetera, and the libraries become `-lfftw3f_mpi -lfftw3f -lm' etcetera on Unix.  Long-double precision is supported in MPI, but quad precision (`fftwq_') is not due to the lack of MPI support for this type.


File: fftw3.info, Node: MPI Initialization, Next: Using MPI Plans, Prev: MPI Files and Data Types, Up: FFTW MPI Reference

6.12.2 MPI Initialization
-------------------------

Before calling any other FFTW MPI (`fftw_mpi_') function, and before importing any wisdom for MPI problems, you must call:

     void fftw_mpi_init(void);

   If FFTW threads support is used, however, `fftw_mpi_init' should be called _after_ `fftw_init_threads' (*note Combining MPI and Threads::).  Calling `fftw_mpi_init' additional times (before `fftw_mpi_cleanup') has no effect.

   If you want to deallocate all persistent data and reset FFTW to the pristine state it was in when you started your program, you can call:

     void fftw_mpi_cleanup(void);

   (This calls `fftw_cleanup', so you need not call the serial cleanup routine too, although it is safe to do so.)  After calling `fftw_mpi_cleanup', all existing plans become undefined, and you should not attempt to execute or destroy them.  You must call `fftw_mpi_init' again after `fftw_mpi_cleanup' if you want to resume using the MPI FFTW routines.
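
   In outline, then, a typical program's use of the MPI FFTW might be bracketed as in the following sketch (error checking and the actual transforms are omitted):

     MPI_Init(&argc, &argv);
     fftw_mpi_init();      /* before any other fftw_mpi_ call or MPI wisdom import */

     /* ... create, execute, and destroy MPI plans ... */

     fftw_mpi_cleanup();   /* all remaining plans become undefined */
     MPI_Finalize();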


File: fftw3.info, Node: Using MPI Plans, Next: MPI Data Distribution Functions, Prev: MPI Initialization, Up: FFTW MPI Reference

6.12.3 Using MPI Plans
----------------------

Once an MPI plan is created, you can execute and destroy it using `fftw_execute', `fftw_destroy_plan', and the other functions in the serial interface that operate on generic plans (*note Using Plans::).

   The `fftw_execute' and `fftw_destroy_plan' functions, applied to MPI plans, are _collective_ calls: they must be called for all processes in the communicator that was used to create the plan.

   You must _not_ use the serial new-array plan-execution functions `fftw_execute_dft' and so on (*note New-array Execute Functions::) with MPI plans.  Such functions are specialized to the problem type, and there are specific new-array execute functions for MPI plans:

     void fftw_mpi_execute_dft(fftw_plan p, fftw_complex *in, fftw_complex *out);
     void fftw_mpi_execute_dft_r2c(fftw_plan p, double *in, fftw_complex *out);
     void fftw_mpi_execute_dft_c2r(fftw_plan p, fftw_complex *in, double *out);
     void fftw_mpi_execute_r2r(fftw_plan p, double *in, double *out);

   These functions have the same restrictions as those of the serial new-array execute functions.  They are _always_ safe to apply to the _same_ `in' and `out' arrays that were used to create the plan.  They can only be applied to new arrays if those arrays have the same types, dimensions, in-placeness, and alignment as the original arrays, where the best way to ensure the same alignment is to use FFTW's `fftw_malloc' and related allocation functions for all arrays (*note Memory Allocation::).  Note that distributed transposes (*note FFTW MPI Transposes::) use `fftw_mpi_execute_r2r', since they count as rank-zero r2r plans from FFTW's perspective.


File: fftw3.info, Node: MPI Data Distribution Functions, Next: MPI Plan Creation, Prev: Using MPI Plans, Up: FFTW MPI Reference

6.12.4 MPI Data Distribution Functions
--------------------------------------

As described above (*note MPI Data Distribution::), in order to allocate your arrays, _before_ creating a plan, you must first call one of the following routines to determine the required allocation size and the portion of the array locally stored on a given process.  The `MPI_Comm' communicator passed here must be equivalent to the communicator used below for plan creation.

The basic interface for multidimensional transforms consists of the
functions:

     ptrdiff_t fftw_mpi_local_size_2d(ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
                                      ptrdiff_t *local_n0, ptrdiff_t *local_0_start);
     ptrdiff_t fftw_mpi_local_size_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2,
                                      MPI_Comm comm,
                                      ptrdiff_t *local_n0, ptrdiff_t *local_0_start);
     ptrdiff_t fftw_mpi_local_size(int rnk, const ptrdiff_t *n, MPI_Comm comm,
                                   ptrdiff_t *local_n0, ptrdiff_t *local_0_start);

     ptrdiff_t fftw_mpi_local_size_2d_transposed(ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
                                                 ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
                                                 ptrdiff_t *local_n1, ptrdiff_t *local_1_start);
     ptrdiff_t fftw_mpi_local_size_3d_transposed(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2,
                                                 MPI_Comm comm,
                                                 ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
                                                 ptrdiff_t *local_n1, ptrdiff_t *local_1_start);
     ptrdiff_t fftw_mpi_local_size_transposed(int rnk, const ptrdiff_t *n, MPI_Comm comm,
                                              ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
                                              ptrdiff_t *local_n1, ptrdiff_t *local_1_start);

These functions return the number of elements to allocate (complex
numbers for DFT/r2c/c2r plans, real numbers for r2r plans), whereas the
`local_n0' and `local_0_start' return the portion (`local_0_start' to
`local_0_start + local_n0 - 1') of the first dimension of an
n[0] x n[1] x n[2] x ... x n[d-1] array that is stored on the local
process.  *Note Basic and advanced distribution interfaces::.  For
`FFTW_MPI_TRANSPOSED_OUT' plans, the `_transposed' variants are useful
in order to also return the local portion of the first dimension in
the n[1] x n[0] x n[2] x ... x n[d-1] transposed output.  *Note
Transposed distributions::.

The advanced interface for multidimensional transforms is:

     ptrdiff_t fftw_mpi_local_size_many(int rnk, const ptrdiff_t *n, ptrdiff_t howmany,
                                        ptrdiff_t block0, MPI_Comm comm,
                                        ptrdiff_t *local_n0, ptrdiff_t *local_0_start);
     ptrdiff_t fftw_mpi_local_size_many_transposed(int rnk, const ptrdiff_t *n, ptrdiff_t howmany,
                                                   ptrdiff_t block0, ptrdiff_t block1, MPI_Comm comm,
                                                   ptrdiff_t *local_n0, ptrdiff_t *local_0_start,
                                                   ptrdiff_t *local_n1, ptrdiff_t *local_1_start);

These differ from the basic interface in only two ways.  First, they
allow you to specify block sizes `block0' and `block1' (the latter for
the transposed output); you can pass `FFTW_MPI_DEFAULT_BLOCK' to use
FFTW's default block size as in the basic interface.  Second, you can
pass a `howmany' parameter, corresponding to the advanced planning
interface below: this is for transforms of contiguous `howmany'-tuples
of numbers (`howmany = 1' in the basic interface).
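
For example, allocating the local portion of a 2d complex array
distributed over `MPI_COMM_WORLD' might look like this (a sketch; the
dimensions `N0' and `N1' are placeholders):

     const ptrdiff_t N0 = 256, N1 = 256;   /* placeholder dimensions */
     ptrdiff_t alloc_local, local_n0, local_0_start;
     fftw_complex *data;

     alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                          &local_n0, &local_0_start);
     data = fftw_alloc_complex(alloc_local);
     /* this process stores rows local_0_start to
        local_0_start + local_n0 - 1 of the first dimension */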

The corresponding basic and advanced routines for one-dimensional
transforms (currently only complex DFTs) are:

     ptrdiff_t fftw_mpi_local_size_1d(
                  ptrdiff_t n0, MPI_Comm comm, int sign, unsigned flags,
                  ptrdiff_t *local_ni, ptrdiff_t *local_i_start,
                  ptrdiff_t *local_no, ptrdiff_t *local_o_start);
     ptrdiff_t fftw_mpi_local_size_many_1d(
                  ptrdiff_t n0, ptrdiff_t howmany,
                  MPI_Comm comm, int sign, unsigned flags,
                  ptrdiff_t *local_ni, ptrdiff_t *local_i_start,
                  ptrdiff_t *local_no, ptrdiff_t *local_o_start);

As above, the return value is the number of elements to allocate
(complex numbers, for complex DFTs).  The `local_ni' and
`local_i_start' arguments return the portion (`local_i_start' to
`local_i_start + local_ni - 1') of the 1d array that is stored on this
process for the transform _input_, and `local_no' and `local_o_start'
are the corresponding quantities for the transform _output_.  The
`sign' (`FFTW_FORWARD' or `FFTW_BACKWARD') and `flags' must match the
arguments passed when creating a plan.  Although the inputs and
outputs have different data distributions in general, it is guaranteed
that the _output_ data distribution of an `FFTW_FORWARD' plan will
match the _input_ data distribution of an `FFTW_BACKWARD' plan and
vice versa; similarly for the `FFTW_MPI_SCRAMBLED_OUT' and
`FFTW_MPI_SCRAMBLED_IN' flags.  *Note One-dimensional distributions::.


File: fftw3.info, Node: MPI Plan Creation, Next: MPI Wisdom Communication, Prev: MPI Data Distribution Functions, Up: FFTW MPI Reference

6.12.5 MPI Plan Creation
------------------------

Complex-data MPI DFTs
.....................

Plans for complex-data DFTs (*note 2d MPI example::) are created by:

     fftw_plan fftw_mpi_plan_dft_1d(ptrdiff_t n0, fftw_complex *in, fftw_complex *out,
                                    MPI_Comm comm, int sign, unsigned flags);
     fftw_plan fftw_mpi_plan_dft_2d(ptrdiff_t n0, ptrdiff_t n1,
                                    fftw_complex *in, fftw_complex *out,
                                    MPI_Comm comm, int sign, unsigned flags);
     fftw_plan fftw_mpi_plan_dft_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2,
                                    fftw_complex *in, fftw_complex *out,
                                    MPI_Comm comm, int sign, unsigned flags);
     fftw_plan fftw_mpi_plan_dft(int rnk, const ptrdiff_t *n,
                                 fftw_complex *in, fftw_complex *out,
                                 MPI_Comm comm, int sign, unsigned flags);
     fftw_plan fftw_mpi_plan_many_dft(int rnk, const ptrdiff_t *n,
                                      ptrdiff_t howmany, ptrdiff_t block, ptrdiff_t tblock,
                                      fftw_complex *in, fftw_complex *out,
                                      MPI_Comm comm, int sign, unsigned flags);

These are similar to their serial counterparts (*note Complex DFTs::)
in specifying the dimensions, sign, and flags of the transform.  The
`comm' argument gives an MPI communicator that specifies the set of
processes to participate in the transform; plan creation is a
collective function that must be called for all processes in the
communicator.  The `in' and `out' pointers refer only to a portion of
the overall transform data (*note MPI Data Distribution::) as
specified by the `local_size' functions in the previous section.
Unless `flags' contains `FFTW_ESTIMATE', these arrays are overwritten
during plan creation, as for the serial interface.  For
multi-dimensional transforms, any dimensions `> 1' are supported; for
one-dimensional transforms, only composite (non-prime) `n0' are
currently supported (unlike the serial FFTW).  Requesting an
unsupported transform size will yield a `NULL' plan.  (As in the
serial interface, highly composite sizes generally yield the best
performance.)

The advanced-interface `fftw_mpi_plan_many_dft' additionally allows
you to specify the block sizes for the first dimension (`block') of
the n[0] x n[1] x n[2] x ... x n[d-1] input data and the first
dimension (`tblock') of the n[1] x n[0] x n[2] x ... x n[d-1]
transposed data (at intermediate steps of the transform, and for the
output if `FFTW_MPI_TRANSPOSED_OUT' is specified in `flags').  These
must be the same block sizes as were passed to the corresponding
`local_size' function; you can pass `FFTW_MPI_DEFAULT_BLOCK' to use
FFTW's default block size as in the basic interface.  Also, the
`howmany' parameter specifies that the transform is of contiguous
`howmany'-tuples rather than individual complex numbers; this
corresponds to the same parameter in the serial advanced interface
(*note Advanced Complex DFTs::) with `stride = howmany' and
`dist = 1'.

MPI flags
.........

The `flags' can be any of those for the serial FFTW (*note Planner
Flags::), and in addition may include one or more of the following
MPI-specific flags, which improve performance at the cost of changing
the output or input data formats.

   * `FFTW_MPI_SCRAMBLED_OUT', `FFTW_MPI_SCRAMBLED_IN': valid for 1d
     transforms only, these flags indicate that the output/input of
     the transform are in an undocumented "scrambled" order.  A
     forward `FFTW_MPI_SCRAMBLED_OUT' transform can be inverted by a
     backward `FFTW_MPI_SCRAMBLED_IN' (times the usual 1/N
     normalization).  *Note One-dimensional distributions::.

   * `FFTW_MPI_TRANSPOSED_OUT', `FFTW_MPI_TRANSPOSED_IN': valid for
     multidimensional (`rnk > 1') transforms only, these flags specify
     that the output or input of an n[0] x n[1] x n[2] x ... x n[d-1]
     transform is transposed to n[1] x n[0] x n[2] x ... x n[d-1].
     *Note Transposed distributions::.

Real-data MPI DFTs
..................

Plans for real-input/output (r2c/c2r) DFTs (*note Multi-dimensional
MPI DFTs of Real Data::) are created by:

     fftw_plan fftw_mpi_plan_dft_r2c_2d(ptrdiff_t n0, ptrdiff_t n1,
                                        double *in, fftw_complex *out,
                                        MPI_Comm comm, unsigned flags);
     fftw_plan fftw_mpi_plan_dft_r2c_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2,
                                        double *in, fftw_complex *out,
                                        MPI_Comm comm, unsigned flags);
     fftw_plan fftw_mpi_plan_dft_r2c(int rnk, const ptrdiff_t *n,
                                     double *in, fftw_complex *out,
                                     MPI_Comm comm, unsigned flags);
     fftw_plan fftw_mpi_plan_dft_c2r_2d(ptrdiff_t n0, ptrdiff_t n1,
                                        fftw_complex *in, double *out,
                                        MPI_Comm comm, unsigned flags);
     fftw_plan fftw_mpi_plan_dft_c2r_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2,
                                        fftw_complex *in, double *out,
                                        MPI_Comm comm, unsigned flags);
     fftw_plan fftw_mpi_plan_dft_c2r(int rnk, const ptrdiff_t *n,
                                     fftw_complex *in, double *out,
                                     MPI_Comm comm, unsigned flags);

Similar to the serial interface (*note Real-data DFTs::), these
transform logically n[0] x n[1] x n[2] x ... x n[d-1] real data
to/from n[0] x n[1] x n[2] x ... x (n[d-1]/2 + 1) complex data,
representing the non-redundant half of the conjugate-symmetry output
of a real-input DFT (*note Multi-dimensional Transforms::).  However,
the real array must be stored within a padded
n[0] x n[1] x n[2] x ... x [2 (n[d-1]/2 + 1)] array (much like the
in-place serial r2c transforms, but here for out-of-place transforms
as well).  Currently, only multi-dimensional (`rnk > 1') r2c/c2r
transforms are supported (requesting a plan for `rnk = 1' will yield
`NULL').  As explained above (*note Multi-dimensional MPI DFTs of Real
Data::), the data distribution of both the real and complex arrays is
given by the `local_size' function called for the dimensions of the
_complex_ array.  Similar to the other planning functions, the input
and output arrays are overwritten when the plan is created, except in
`FFTW_ESTIMATE' mode.

As for the complex DFTs above, there is an advanced interface that
allows you to manually specify block sizes and to transform contiguous
`howmany'-tuples of real/complex numbers:

     fftw_plan fftw_mpi_plan_many_dft_r2c
                   (int rnk, const ptrdiff_t *n, ptrdiff_t howmany,
                    ptrdiff_t iblock, ptrdiff_t oblock,
                    double *in, fftw_complex *out,
                    MPI_Comm comm, unsigned flags);
     fftw_plan fftw_mpi_plan_many_dft_c2r
                   (int rnk, const ptrdiff_t *n, ptrdiff_t howmany,
                    ptrdiff_t iblock, ptrdiff_t oblock,
                    fftw_complex *in, double *out,
                    MPI_Comm comm, unsigned flags);

MPI r2r transforms
..................

There are corresponding plan-creation routines for r2r transforms
(*note More DFTs of Real Data::), currently supporting
multidimensional (`rnk > 1') transforms only (`rnk = 1' will yield a
`NULL' plan):

     fftw_plan fftw_mpi_plan_r2r_2d(ptrdiff_t n0, ptrdiff_t n1,
                                    double *in, double *out,
                                    MPI_Comm comm,
                                    fftw_r2r_kind kind0, fftw_r2r_kind kind1,
                                    unsigned flags);
     fftw_plan fftw_mpi_plan_r2r_3d(ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t n2,
                                    double *in, double *out,
                                    MPI_Comm comm,
                                    fftw_r2r_kind kind0, fftw_r2r_kind kind1, fftw_r2r_kind kind2,
                                    unsigned flags);
     fftw_plan fftw_mpi_plan_r2r(int rnk, const ptrdiff_t *n,
                                 double *in, double *out,
                                 MPI_Comm comm, const fftw_r2r_kind *kind,
                                 unsigned flags);
     fftw_plan fftw_mpi_plan_many_r2r(int rnk, const ptrdiff_t *n,
                                      ptrdiff_t iblock, ptrdiff_t oblock,
                                      double *in, double *out,
                                      MPI_Comm comm, const fftw_r2r_kind *kind,
                                      unsigned flags);

The parameters are much the same as for the complex DFTs above, except
that the arrays are of real numbers (and hence the outputs of the
`local_size' data-distribution functions should be interpreted as
counts of real rather than complex numbers).  Also, the `kind'
parameters specify the r2r kinds along each dimension as for the
serial interface (*note Real-to-Real Transform Kinds::).  *Note Other
Multi-dimensional Real-data MPI Transforms::.

MPI transposition
.................

FFTW also provides routines to plan a transpose of a distributed `n0'
by `n1' array of real numbers, or an array of `howmany'-tuples of real
numbers with specified block sizes (*note FFTW MPI Transposes::):

     fftw_plan fftw_mpi_plan_transpose(ptrdiff_t n0, ptrdiff_t n1,
                                       double *in, double *out,
                                       MPI_Comm comm, unsigned flags);
     fftw_plan fftw_mpi_plan_many_transpose
                   (ptrdiff_t n0, ptrdiff_t n1, ptrdiff_t howmany,
                    ptrdiff_t block0, ptrdiff_t block1,
                    double *in, double *out, MPI_Comm comm, unsigned flags);

These plans are used with the `fftw_mpi_execute_r2r' new-array execute
function (*note Using MPI Plans::), since they count as (rank zero)
r2r plans from FFTW's perspective.


File: fftw3.info, Node: MPI Wisdom Communication, Prev: MPI Plan Creation, Up: FFTW MPI Reference

6.12.6 MPI Wisdom Communication
-------------------------------

To facilitate synchronizing wisdom among the different MPI processes,
we provide two functions:

     void fftw_mpi_gather_wisdom(MPI_Comm comm);
     void fftw_mpi_broadcast_wisdom(MPI_Comm comm);

The `fftw_mpi_gather_wisdom' function gathers all wisdom in the given
communicator `comm' to the process of rank 0 in the communicator: that
process obtains the union of all wisdom on all the processes.  As a
side effect, some other processes will gain additional wisdom from
other processes, but only process 0 will gain the complete union.
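
For example, after creating your plans you might gather the
accumulated wisdom and save it from process 0 (a sketch; the file name
and the use of `fftw_export_wisdom_to_filename' here are simply
illustrative):

     int rank;
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     fftw_mpi_gather_wisdom(MPI_COMM_WORLD);  /* called by all processes */
     if (rank == 0)
          fftw_export_wisdom_to_filename("wisdom.dat");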

The `fftw_mpi_broadcast_wisdom' does the reverse: it exports wisdom
from process 0 in `comm' to all other processes in the communicator,
replacing any wisdom they currently have.

*Note FFTW MPI Wisdom::.


File: fftw3.info, Node: FFTW MPI Fortran Interface, Prev: FFTW MPI Reference, Up: Distributed-memory FFTW with MPI

6.13 FFTW MPI Fortran Interface
===============================

The FFTW MPI interface is callable from modern Fortran compilers
supporting the Fortran 2003 `iso_c_binding' standard for calling C
functions.  As described in *note Calling FFTW from Modern Fortran::,
this means that you can directly call FFTW's C interface from Fortran
with only minor changes in syntax.  There are, however, a few things
specific to the MPI interface to keep in mind:

   * Instead of including `fftw3.f03' as in *note Overview of Fortran
     interface::, you should `include 'fftw3-mpi.f03'' (after `use,
     intrinsic :: iso_c_binding' as before).  The `fftw3-mpi.f03' file
     includes `fftw3.f03', so you should _not_ `include' them both
     yourself.  (You will also want to include the MPI header file,
     usually via `include 'mpif.h'' or similar, although this is not
     needed by `fftw3-mpi.f03' per se.)  (To use the `fftwl_' `long
     double' extended-precision routines in supporting compilers, you
     should include `fftw3l-mpi.f03' in _addition_ to `fftw3-mpi.f03'.
     *Note Extended and quadruple precision in Fortran::.)

   * Because of the different storage conventions between C and
     Fortran, you reverse the order of your array dimensions when
     passing them to FFTW (*note Reversing array dimensions::).  This
     is merely a difference in notation and incurs no performance
     overhead.  However, it means that, whereas in C the _first_
     dimension is distributed, in Fortran the _last_ dimension of your
     array is distributed.

   * In Fortran, communicators are stored as `integer' types; there is
     no `MPI_Comm' type, nor is there any way to access a C
     `MPI_Comm'.  Fortunately, this is taken care of for you by the
     FFTW Fortran interface: whenever the C interface expects an
     `MPI_Comm' type, you should pass the Fortran communicator as an
     `integer'.(1)

   * Because you need to call the `local_size' function to find out
     how much space to allocate, and this may be _larger_ than the
     local portion of the array (*note MPI Data Distribution::), you
     should _always_ allocate your arrays dynamically using FFTW's
     allocation routines as described in *note Allocating aligned
     memory in Fortran::.  (Coincidentally, this also provides the
     best performance by guaranteeing proper data alignment.)

   * Because all sizes in the MPI FFTW interface are declared as
     `ptrdiff_t' in C, you should use `integer(C_INTPTR_T)' in Fortran
     (*note FFTW Fortran type reference::).

   * In Fortran, because of the language semantics, we generally
     recommend using the new-array execute functions for all plans,
     even in the common case where you are executing the plan on the
     same arrays for which the plan was created (*note Plan execution
     in Fortran::).  However, note that in the MPI interface these
     functions are changed: `fftw_execute_dft' becomes
     `fftw_mpi_execute_dft', etcetera.  *Note Using MPI Plans::.

For example, here is a Fortran code snippet to perform a distributed
L x M complex DFT in-place.  (This assumes you have already
initialized MPI with `MPI_init' and have also performed `call
fftw_mpi_init'.)

       use, intrinsic :: iso_c_binding
       include 'fftw3-mpi.f03'
       integer(C_INTPTR_T), parameter :: L = ...
       integer(C_INTPTR_T), parameter :: M = ...
       type(C_PTR) :: plan, cdata
       complex(C_DOUBLE_COMPLEX), pointer :: data(:,:)
       integer(C_INTPTR_T) :: i, j, alloc_local, local_M, local_j_offset

     ! get local data size and allocate (note dimension reversal)
       alloc_local = fftw_mpi_local_size_2d(M, L, MPI_COMM_WORLD, &
                                            local_M, local_j_offset)
       cdata = fftw_alloc_complex(alloc_local)
       call c_f_pointer(cdata, data, [L,local_M])

     ! create MPI plan for in-place forward DFT (note dimension reversal)
       plan = fftw_mpi_plan_dft_2d(M, L, data, data, MPI_COMM_WORLD, &
                                   FFTW_FORWARD, FFTW_MEASURE)

     ! initialize data to some function my_function(i,j)
       do j = 1, local_M
         do i = 1, L
           data(i, j) = my_function(i, j + local_j_offset)
         end do
       end do

     ! compute transform (as many times as desired)
       call fftw_mpi_execute_dft(plan, data, data)

       call fftw_destroy_plan(plan)
       call fftw_free(cdata)

Note that we called `fftw_mpi_local_size_2d' and
`fftw_mpi_plan_dft_2d' with the dimensions in reversed order, since an
L x M Fortran array is viewed by FFTW in C as an M x L array.  This
means that the array was distributed over the `M' dimension, the local
portion of which is an L x local_M array in Fortran.  (You must _not_
use an `allocate' statement to allocate an L x local_M array, however;
you must allocate `alloc_local' complex numbers, which may be greater
than `L * local_M', in order to reserve space for intermediate steps
of the transform.)  Finally, we mention that because C's array indices
are zero-based, the `local_j_offset' argument can conveniently be
interpreted as an offset in the 1-based `j' index (rather than as a
starting index as in C).

If instead you had used the `ior(FFTW_MEASURE,
FFTW_MPI_TRANSPOSED_OUT)' flag, the output of the transform would be a
transposed M x local_L array, associated with the _same_ `cdata'
allocation (since the transform is in-place), and which you could
declare with:

     complex(C_DOUBLE_COMPLEX), pointer :: tdata(:,:)
     ...
     call c_f_pointer(cdata, tdata, [M,local_L])

where `local_L' would have been obtained by changing the
`fftw_mpi_local_size_2d' call to:

     alloc_local = fftw_mpi_local_size_2d_transposed(M, L, MPI_COMM_WORLD, &
                              local_M, local_j_offset, local_L, local_i_offset)

---------- Footnotes ----------

(1) Technically, this is because you aren't actually calling the C
functions directly.  You are calling wrapper functions that translate
the communicator with `MPI_Comm_f2c' before calling the ordinary C
interface.  This is all done transparently, however, since the
`fftw3-mpi.f03' interface file renames the wrappers so that they are
called in Fortran with the same names as the C interface functions.


File: fftw3.info, Node: Calling FFTW from Modern Fortran, Next: Calling FFTW from Legacy Fortran, Prev: Distributed-memory FFTW with MPI, Up: Top

7 Calling FFTW from Modern Fortran
**********************************

Fortran 2003 standardized ways for Fortran code to call C libraries,
and this allows us to support a direct translation of the FFTW C API
into Fortran.  Compared to the legacy Fortran 77 interface (*note
Calling FFTW from Legacy Fortran::), this direct interface offers many
advantages, especially compile-time type-checking and aligned memory
allocation.  As of this writing, support for these C interoperability
features seems widespread, having been implemented in nearly all major
Fortran compilers (e.g. GNU, Intel, IBM, Oracle/Solaris, Portland
Group, NAG).

This chapter documents that interface.  For the most part, since this
interface allows Fortran to call the C interface directly, the usage
is identical to C translated to Fortran syntax.  However, there are a
few subtle points such as memory allocation, wisdom, and data types
that deserve closer attention.

* Menu:

* Overview of Fortran interface::
* Reversing array dimensions::
* FFTW Fortran type reference::
* Plan execution in Fortran::
* Allocating aligned memory in Fortran::
* Accessing the wisdom API from Fortran::
* Defining an FFTW module::


File: fftw3.info, Node: Overview of Fortran interface, Next: Reversing array dimensions, Prev: Calling FFTW from Modern Fortran, Up: Calling FFTW from Modern Fortran

7.1 Overview of Fortran interface
=================================

FFTW provides a file `fftw3.f03', found in the same directory as
`fftw3.h' (the C header file), that defines Fortran 2003 interfaces
for all of its C routines, except for the MPI routines described
elsewhere.  In any Fortran subroutine where you want to use FFTW
functions, you should begin with:

     use, intrinsic :: iso_c_binding
     include 'fftw3.f03'

This includes the interface definitions and the standard
`iso_c_binding' module (which defines the equivalents of C types).
You can also put the FFTW functions into a module if you prefer (*note
Defining an FFTW module::).

At this point, you can now call anything in the FFTW C interface
directly, almost exactly as in C other than minor changes in syntax.
For example:

     type(C_PTR) :: plan
     complex(C_DOUBLE_COMPLEX), dimension(1024,1000) :: in, out
     plan = fftw_plan_dft_2d(1000,1024, in,out, FFTW_FORWARD,FFTW_ESTIMATE)
     ...
     call fftw_execute_dft(plan, in, out)
     ...
     call fftw_destroy_plan(plan)

A few important things to keep in mind are:

   * FFTW plans are `type(C_PTR)'.  Other C types are mapped in the
     obvious way via the `iso_c_binding' standard: `int' turns into
     `integer(C_INT)', `fftw_complex' turns into
     `complex(C_DOUBLE_COMPLEX)', `double' turns into `real(C_DOUBLE)',
     and so on.  *Note FFTW Fortran type reference::.

   * Functions in C become functions in Fortran if they have a return
     value, and subroutines in Fortran otherwise.

   * The ordering of the Fortran array dimensions must be _reversed_
     when they are passed to the FFTW plan creation, thanks to
     differences in array indexing conventions (*note
     Multi-dimensional Array Format::).  This is _unlike_ the legacy
     Fortran interface (*note Fortran-interface routines::), which
     reversed the dimensions for you.  *Note Reversing array
     dimensions::.

   * Using ordinary Fortran array declarations like this works, but
     may yield suboptimal performance because the data may not be
     aligned to exploit SIMD instructions on modern processors (*note
     SIMD alignment and fftw_malloc::).  Better performance will often
     be obtained by allocating with `fftw_alloc_real' and
     `fftw_alloc_complex'.  *Note Allocating aligned memory in
     Fortran::.

   * Similar to the legacy Fortran interface (*note FFTW Execution in
     Fortran::), we currently recommend _not_ using `fftw_execute' but
     rather using the more specialized functions like
     `fftw_execute_dft' (*note New-array Execute Functions::).
     However, you should execute the plan on the `same arrays' as the
     ones for which you created the plan, unless you are especially
     careful.  *Note Plan execution in Fortran::.  To prevent you from
     using `fftw_execute' by mistake, the `fftw3.f03' file does not
     provide an `fftw_execute' interface declaration.

   * Multiple planner flags are combined with `ior' (equivalent to `|'
     in C).  e.g. `FFTW_MEASURE | FFTW_DESTROY_INPUT' becomes
     `ior(FFTW_MEASURE, FFTW_DESTROY_INPUT)'.  (You can also use `+'
     as long as you don't try to include a given flag more than once;
     see the brief example after this list.)
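
For instance, to request `FFTW_MEASURE | FFTW_DESTROY_INPUT' for the
plan above, you might write (a brief sketch, reusing the `in' and
`out' arrays declared earlier):

     plan = fftw_plan_dft_2d(1000,1024, in,out, FFTW_FORWARD, &
                             ior(FFTW_MEASURE, FFTW_DESTROY_INPUT))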

* Menu:

* Extended and quadruple precision in Fortran::


File: fftw3.info, Node: Extended and quadruple precision in Fortran, Prev: Overview of Fortran interface, Up: Overview of Fortran interface

7.1.1 Extended and quadruple precision in Fortran
-------------------------------------------------

If FFTW is compiled in `long double' (extended) precision (*note
Installation and Customization::), you may be able to call the
resulting `fftwl_' routines (*note Precision::) from Fortran if your
compiler supports the `C_LONG_DOUBLE_COMPLEX' type code.

Because some Fortran compilers do not support
`C_LONG_DOUBLE_COMPLEX', the `fftwl_' declarations are segregated into
a separate interface file `fftw3l.f03', which you should include _in
addition_ to `fftw3.f03' (which declares precision-independent `FFTW_'
constants):

     use, intrinsic :: iso_c_binding
     include 'fftw3.f03'
     include 'fftw3l.f03'

We also support using the nonstandard `__float128'
quadruple-precision type provided by recent versions of `gcc' on 32-
and 64-bit x86 hardware (*note Installation and Customization::),
using the corresponding `real(16)' and `complex(16)' types supported
by `gfortran'.  The quadruple-precision `fftwq_' functions (*note
Precision::) are declared in a `fftw3q.f03' interface file, which
should be included in addition to `fftw3l.f03', as above.  You should
also link with `-lfftw3q -lquadmath -lm' as in C.


File: fftw3.info, Node: Reversing array dimensions, Next: FFTW Fortran type reference, Prev: Overview of Fortran interface, Up: Calling FFTW from Modern Fortran

7.2 Reversing array dimensions
==============================

A minor annoyance in calling FFTW from Fortran is that FFTW's array
dimensions are defined in the C convention (row-major order), while
Fortran's array dimensions are the opposite convention (column-major
order).  *Note Multi-dimensional Array Format::.  This is just a
bookkeeping difference, with no effect on performance.  The only
consequence of this is that, whenever you create an FFTW plan for a
multi-dimensional transform, you must always _reverse the ordering of
the dimensions_.

For example, consider the three-dimensional (L x M x N) arrays:

     complex(C_DOUBLE_COMPLEX), dimension(L,M,N) :: in, out

To plan a DFT for these arrays using `fftw_plan_dft_3d', you could do:

     plan = fftw_plan_dft_3d(N,M,L, in,out, FFTW_FORWARD,FFTW_ESTIMATE)

That is, from FFTW's perspective this is an N x M x L array.  _No data
transposition need occur_, as this is _only notation_.
Similarly, to use the more generic routine `fftw_plan_dft' with the
same arrays, you could do:

     integer(C_INT), dimension(3) :: n = [N,M,L]
     plan = fftw_plan_dft(3, n, in,out, FFTW_FORWARD,FFTW_ESTIMATE)

Note, by the way, that this is different from the legacy Fortran
interface (*note Fortran-interface routines::), which automatically
reverses the order of the array dimensions for you.  Here, you are
calling the C interface directly, so there is no "translation" layer.

An important thing to keep in mind is the implication of this for
multidimensional real-to-complex transforms (*note Multi-Dimensional
DFTs of Real Data::).  In C, a multidimensional real-to-complex DFT
chops the last dimension roughly in half (N x M x L real input goes to
N x M x L/2+1 complex output).  In Fortran, because the array
dimension notation is reversed, the _first_ dimension of the complex
data is chopped roughly in half.  For example, consider the `r2c'
transform of L x M x N real input in Fortran:

     type(C_PTR) :: plan
     real(C_DOUBLE), dimension(L,M,N) :: in
     complex(C_DOUBLE_COMPLEX), dimension(L/2+1,M,N) :: out
     plan = fftw_plan_dft_r2c_3d(N,M,L, in,out, FFTW_ESTIMATE)
     ...
     call fftw_execute_dft_r2c(plan, in, out)

Alternatively, for an in-place r2c transform, as described in the C
documentation we must _pad_ the _first_ dimension of the real input
with an extra two entries (which are ignored by FFTW) so as to leave
enough space for the complex output.  The input is _allocated_ as a
2[L/2+1] x M x N array, even though only L x M x N of it is actually
used.  In this example, we will allocate the array as a pointer type,
using `fftw_alloc_complex' to ensure aligned memory for maximum
performance (*note Allocating aligned memory in Fortran::); this also
makes it easy to reference the same memory as both a real array and a
complex array.

     real(C_DOUBLE), pointer :: in(:,:,:)
     complex(C_DOUBLE_COMPLEX), pointer :: out(:,:,:)
     type(C_PTR) :: plan, data
     data = fftw_alloc_complex(int((L/2+1) * M * N, C_SIZE_T))
     call c_f_pointer(data, in, [2*(L/2+1),M,N])
     call c_f_pointer(data, out, [L/2+1,M,N])
     plan = fftw_plan_dft_r2c_3d(N,M,L, in,out, FFTW_ESTIMATE)
     ...
     call fftw_execute_dft_r2c(plan, in, out)
     ...
     call fftw_destroy_plan(plan)
     call fftw_free(data)


File: fftw3.info, Node: FFTW Fortran type reference, Next: Plan execution in Fortran, Prev: Reversing array dimensions, Up: Calling FFTW from Modern Fortran

7.3 FFTW Fortran type reference
===============================

The following are the most important type correspondences between the
C interface and Fortran:

   * Plans (`fftw_plan' and variants) are `type(C_PTR)' (i.e. an
     opaque pointer).

   * The C floating-point types `double', `float', and `long double'
     correspond to `real(C_DOUBLE)', `real(C_FLOAT)', and
     `real(C_LONG_DOUBLE)', respectively.
     The C complex types `fftw_complex', `fftwf_complex', and
     `fftwl_complex' correspond in Fortran to
     `complex(C_DOUBLE_COMPLEX)', `complex(C_FLOAT_COMPLEX)', and
     `complex(C_LONG_DOUBLE_COMPLEX)', respectively.  Just as in C
     (*note Precision::), the FFTW subroutines and types are prefixed
     with `fftw_', `fftwf_', and `fftwl_' for the different
     precisions, and link to different libraries (`-lfftw3',
     `-lfftw3f', and `-lfftw3l' on Unix), but use the _same_ include
     file `fftw3.f03' and the _same_ constants (all of which begin
     with `FFTW_').  The exception is `long double' precision, for
     which you should _also_ include `fftw3l.f03' (*note Extended and
     quadruple precision in Fortran::).

   * The C integer types `int' and `unsigned' (used for planner flags)
     become `integer(C_INT)'.  The C integer type `ptrdiff_t' (e.g. in
     the *note 64-bit Guru Interface::) becomes `integer(C_INTPTR_T)',
     and `size_t' (in `fftw_malloc' etc.) becomes `integer(C_SIZE_T)'.

   * The `fftw_r2r_kind' type (*note Real-to-Real Transform Kinds::)
     becomes `integer(C_FFTW_R2R_KIND)'.  The various constant values
     of the C enumerated type (`FFTW_R2HC' etc.) become simply integer
     constants of the same names in Fortran.

   * Numeric array pointer arguments (e.g. `double *') become
     `dimension(*), intent(out)' arrays of the same type, or
     `dimension(*), intent(in)' if they are pointers to constant data
     (e.g. `const int *').  There are a few exceptions where numeric
     pointers refer to scalar outputs (e.g. for `fftw_flops'), in
     which case they are `intent(out)' scalar arguments in Fortran
     too.  For the new-array execute functions (*note New-array
     Execute Functions::), the input arrays are declared
     `dimension(*), intent(inout)', since they can be modified in the
     case of in-place or `FFTW_DESTROY_INPUT' transforms.

   * Pointer _return_ values (e.g. `double *') become `type(C_PTR)'.
     (If they are pointers to arrays, as for `fftw_alloc_real', you
     can convert them back to Fortran array pointers with the standard
     intrinsic function `c_f_pointer'.)

   * The `fftw_iodim' type in the guru interface (*note Guru vector
     and transform sizes::) becomes `type(fftw_iodim)' in Fortran, a
     derived data type (the Fortran analogue of C's `struct') with
     three `integer(C_INT)' components: `n', `is', and `os', with the
     same meanings as in C (see the sketch after this list).  The
     `fftw_iodim64' type in the 64-bit guru interface (*note 64-bit
     Guru Interface::) is the same, except that its components are of
     type `integer(C_INTPTR_T)'.

   * Using the wisdom import/export functions from Fortran is a bit
     tricky, and is discussed in *note Accessing the wisdom API from
     Fortran::.  In brief, the `FILE *' arguments map to
     `type(C_PTR)', `const char *' to `character(C_CHAR),
     dimension(*), intent(in)' (null-terminated!), and the generic
     read-char/write-char functions map to `type(C_FUNPTR)'.
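
As an illustration of the guru types, here is a sketch of planning `H'
interleaved transforms of length `N' using `type(fftw_iodim)' (the
sizes `N' and `H' and the statically declared arrays are placeholders
chosen for illustration):

     integer(C_INT), parameter :: N = 1024, H = 4   ! placeholder sizes
     complex(C_DOUBLE_COMPLEX), dimension(N*H) :: in, out
     type(fftw_iodim), dimension(1) :: dims, hdims
     type(C_PTR) :: plan

     dims(1)  = fftw_iodim(N, H, H)   ! length N, input/output stride H
     hdims(1) = fftw_iodim(H, 1, 1)   ! H interleaved transforms, dist 1
     plan = fftw_plan_guru_dft(1, dims, 1, hdims, in, out, &
                               FFTW_FORWARD, FFTW_ESTIMATE)

The 64-bit guru interface is used in the same way, with
`type(fftw_iodim64)' and `fftw_plan_guru64_dft'.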

You may be wondering if you need to search-and-replace
`real(kind(0.0d0))' (or whatever your favorite Fortran spelling of
"double precision" is) with `real(C_DOUBLE)' everywhere in your
program, and similarly for `complex' and `integer' types.  The answer
is no; you can still use your existing types.  As long as these types
match their C counterparts, things should work without a hitch.  The
worst that can happen, e.g. in the (unlikely) event of a system where
`real(kind(0.0d0))' is different from `real(C_DOUBLE)', is that the
compiler will give you a type-mismatch error.  That is, if you don't
use the `iso_c_binding' kinds you need to accept at least the
theoretical possibility of having to change your code in response to
compiler errors on some future machine, but you don't need to worry
about silently compiling incorrect code that yields runtime errors.


File: fftw3.info, Node: Plan execution in Fortran, Next: Allocating aligned memory in Fortran, Prev: FFTW Fortran type reference, Up: Calling FFTW from Modern Fortran

7.4 Plan execution in Fortran
=============================

In C, in order to use a plan, one normally calls `fftw_execute', which
executes the plan to perform the transform on the input/output arrays
passed when the plan was created (*note Using Plans::).  The
corresponding subroutine call in modern Fortran is:

     call fftw_execute(plan)

However, we have had reports that this causes problems with some
recent optimizing Fortran compilers.  The problem is, because the
input/output arrays are not passed as explicit arguments to
`fftw_execute', the semantics of Fortran (unlike C) allow the compiler
to assume that the input/output arrays are not changed by
`fftw_execute'.  As a consequence, certain compilers end up
repositioning the call to `fftw_execute', assuming incorrectly that it
does nothing to the arrays.

There are various workarounds to this, but the safest and simplest
thing is to not use `fftw_execute' in Fortran.  Instead, use the
functions described in *note New-array Execute Functions::, which take
the input/output arrays as explicit arguments.  For example, if the
plan is for a complex-data DFT and was created for the arrays `in' and
`out', you would do:

     call fftw_execute_dft(plan, in, out)

There are a few things to be careful of, however:

   * You must use the correct type of execute function, matching the
     way the plan was created.  Complex DFT plans should use
     `fftw_execute_dft', real-input (r2c) DFT plans should use
     `fftw_execute_dft_r2c', and real-output (c2r) DFT plans should
     use `fftw_execute_dft_c2r'.  The various r2r plans should use
     `fftw_execute_r2r'.  Fortunately, if you use the wrong one you
     will get a compile-time type-mismatch error (unlike legacy
     Fortran).

   * You should normally pass the same input/output arrays that were
     used when creating the plan.  This is always safe.

   * _If_ you pass _different_ input/output arrays compared to those
     used when creating the plan, you must abide by all the
     restrictions of the new-array execute functions (*note New-array
     Execute Functions::).  The most tricky of these is the
     requirement that the new arrays have the same alignment as the
     original arrays; the best (and possibly only) way to guarantee
     this is to use the `fftw_alloc' functions to allocate your arrays
     (*note Allocating aligned memory in Fortran::).  Alternatively,
     you can use the `FFTW_UNALIGNED' flag when creating the plan, in
     which case the plan does not depend on the alignment, but this
     may sacrifice substantial performance on architectures (like x86)
     with SIMD instructions (*note SIMD alignment and fftw_malloc::).


File: fftw3.info, Node: Allocating aligned memory in Fortran, Next: Accessing the wisdom API from Fortran, Prev: Plan execution in Fortran, Up: Calling FFTW from Modern Fortran

7.5 Allocating aligned memory in Fortran
========================================

In order to obtain maximum performance in FFTW, you should store your
data in arrays that have been specially aligned in memory (*note SIMD
alignment and fftw_malloc::).  Enforcing alignment also permits you to
safely use the new-array execute functions (*note New-array Execute
Functions::) to apply a given plan to more than one pair of in/out
arrays.  Unfortunately, standard Fortran arrays do _not_ provide any
alignment guarantees.  The _only_ way to allocate aligned memory in
standard Fortran is to allocate it with an external C function, like
the `fftw_alloc_real' and `fftw_alloc_complex' functions.
Fortunately, Fortran 2003 provides a simple way to associate such
allocated memory with a standard Fortran array pointer that you can
then use normally.

We therefore recommend allocating all your input/output arrays using
the following technique:

  1. Declare a `pointer', `arr', to your array of the desired type and
     dimensions.  For example, `real(C_DOUBLE), pointer :: a(:,:)' for
     a 2d real array, or `complex(C_DOUBLE_COMPLEX), pointer ::
     a(:,:,:)' for a 3d complex array.

  2. The number of elements to allocate must be an
     `integer(C_SIZE_T)'.  You can either declare a variable of this
     type, e.g. `integer(C_SIZE_T) :: sz', to store the number of
     elements to allocate, or you can use the `int(..., C_SIZE_T)'
     intrinsic function.  For example, set `sz = L * M * N' or use
     `int(L * M * N, C_SIZE_T)' for an L x M x N array.

  3. Declare a `type(C_PTR) :: p' to hold the return value from FFTW's
     allocation routine.  Set `p = fftw_alloc_real(sz)' for a real
     array, or `p = fftw_alloc_complex(sz)' for a complex array.

  4. Associate your pointer `arr' with the allocated memory `p' using
     the standard `c_f_pointer' subroutine: `call c_f_pointer(p, arr,
     [...dimensions...])', where `[...dimensions...]' is an array of
     the dimensions of the array (in the usual Fortran order).  e.g.
     `call c_f_pointer(p, arr, [L,M,N])' for an L x M x N array.
     (Alternatively, you can omit the dimensions argument if you
     specified the shape explicitly when declaring `arr'.)  You can
     now use `arr' as a usual multidimensional array.

  5. When you are done using the array, deallocate the memory by `call
     fftw_free(p)' on `p'.

For example, here is how we would allocate an L x M 2d real array:

     real(C_DOUBLE), pointer :: arr(:,:)
     type(C_PTR) :: p
     p = fftw_alloc_real(int(L * M, C_SIZE_T))
     call c_f_pointer(p, arr, [L,M])
     _...use arr and arr(i,j) as usual..._
     call fftw_free(p)

and here is an L x M x N 3d complex array:

     complex(C_DOUBLE_COMPLEX), pointer :: arr(:,:,:)
     type(C_PTR) :: p
     p = fftw_alloc_complex(int(L * M * N, C_SIZE_T))
     call c_f_pointer(p, arr, [L,M,N])
     _...use arr and arr(i,j,k) as usual..._
     call fftw_free(p)

See *note Reversing array dimensions:: for an example allocating a
single array and associating both real and complex array pointers with
it, for in-place real-to-complex transforms.


File: fftw3.info, Node: Accessing the wisdom API from Fortran, Next: Defining an FFTW module, Prev: Allocating aligned memory in Fortran, Up: Calling FFTW from Modern Fortran

7.6 Accessing the wisdom API from Fortran
=========================================

As explained in *note Words of Wisdom-Saving Plans::, FFTW provides a
"wisdom" API for saving plans to disk so that they can be recreated
quickly.  The C API for exporting (*note Wisdom Export::) and
importing (*note Wisdom Import::) wisdom is somewhat tricky to use
from Fortran, however, because of differences in file I/O and string
types between C and Fortran.

* Menu:

* Wisdom File Export/Import from Fortran::
* Wisdom String Export/Import from Fortran::
* Wisdom Generic Export/Import from Fortran::


File: fftw3.info, Node: Wisdom File Export/Import from Fortran, Next: Wisdom String Export/Import from Fortran, Prev: Accessing the wisdom API from Fortran, Up: Accessing the wisdom API from Fortran

7.6.1 Wisdom File Export/Import from Fortran
--------------------------------------------

The easiest way to export and import wisdom is to do so using
`fftw_export_wisdom_to_filename' and
`fftw_import_wisdom_from_filename'.  The only trick is that these
require you to pass a C string, which is an array of type
`CHARACTER(C_CHAR)' that is terminated by `C_NULL_CHAR'.  You can call
them like this:

     integer(C_INT) :: ret
     ret = fftw_export_wisdom_to_filename(C_CHAR_'my_wisdom.dat' // C_NULL_CHAR)
     if (ret .eq. 0) stop 'error exporting wisdom to file'
     ret = fftw_import_wisdom_from_filename(C_CHAR_'my_wisdom.dat' // C_NULL_CHAR)
     if (ret .eq. 0) stop 'error importing wisdom from file'

Note that prepending `C_CHAR_' is needed to specify that the literal
string is of kind `C_CHAR', and we null-terminate the string by
appending `// C_NULL_CHAR'.
These functions return an `integer(C_INT)' (`ret') which is `0' if an
error occurred during export/import and nonzero otherwise.

It is also possible to use the lower-level routines
`fftw_export_wisdom_to_file' and `fftw_import_wisdom_from_file', which
accept parameters of the C type `FILE*', expressed in Fortran as
`type(C_PTR)'.  However, you are then responsible for creating the
`FILE*' yourself.  You can do this by using `iso_c_binding' to define
Fortran interfaces for the C library functions `fopen' and `fclose',
which is a bit strange in Fortran but workable.


File: fftw3.info, Node: Wisdom String Export/Import from Fortran, Next: Wisdom Generic Export/Import from Fortran, Prev: Wisdom File Export/Import from Fortran, Up: Accessing the wisdom API from Fortran

7.6.2 Wisdom String Export/Import from Fortran
----------------------------------------------

Dealing with FFTW's C string export/import is a bit more painful.  In
particular, the `fftw_export_wisdom_to_string' function requires you
to deal with a dynamically allocated C string.  To get its length, you
must define an interface to the C `strlen' function, and to deallocate
it you must define an interface to C `free':

     use, intrinsic :: iso_c_binding
     interface
       integer(C_SIZE_T) function strlen(s) bind(C, name='strlen')
         import
         type(C_PTR), value :: s
       end function strlen
       subroutine free(p) bind(C, name='free')
         import
         type(C_PTR), value :: p
       end subroutine free
     end interface

Given these definitions, you can then export wisdom to a Fortran
character array:

     character(C_CHAR), pointer :: s(:)
     integer(C_SIZE_T) :: slen
     type(C_PTR) :: p
     p = fftw_export_wisdom_to_string()
     if (.not. c_associated(p)) stop 'error exporting wisdom'
     slen = strlen(p)
     call c_f_pointer(p, s, [slen+1])
     ...
     call free(p)

Note that `slen' is the length of the C string, but the length of the
array is `slen+1' because it includes the terminating null character.
(You can omit the `+1' if you don't want Fortran to know about the
null character.)  The standard `c_associated' function checks whether
`p' is a null pointer, which is returned by
`fftw_export_wisdom_to_string' if there was an error.

To import wisdom from a string, use `fftw_import_wisdom_from_string'
as usual; note that the argument of this function must be a
`character(C_CHAR)' that is terminated by the `C_NULL_CHAR' character,
like the `s' array above.


File: fftw3.info, Node: Wisdom Generic Export/Import from Fortran, Prev: Wisdom String Export/Import from Fortran, Up: Accessing the wisdom API from Fortran

7.6.3 Wisdom Generic Export/Import from Fortran
-----------------------------------------------

The most generic wisdom export/import functions allow you to provide
an arbitrary callback function to read/write one character at a time
in any way you want.
However, your callback function must be written in a special way,
using the `bind(C)' attribute to be passed to a C interface.

In particular, to call the generic wisdom export function
`fftw_export_wisdom', you would write a callback subroutine of the
form:

     subroutine my_write_char(c, p) bind(C)
       use, intrinsic :: iso_c_binding
       character(C_CHAR), value :: c
       type(C_PTR), value :: p
       _...write c..._
     end subroutine my_write_char

Given such a subroutine (along with the corresponding interface
definition), you could then export wisdom using:

     call fftw_export_wisdom(c_funloc(my_write_char), p)

The standard `c_funloc' intrinsic converts a Fortran `bind(C)'
subroutine into a C function pointer.  The parameter `p' is a
`type(C_PTR)' to any arbitrary data that you want to pass to
`my_write_char' (or `C_NULL_PTR' if none).  (Note that you can get a C
pointer to Fortran data using the intrinsic `c_loc', and convert it
back to a Fortran pointer in `my_write_char' using `c_f_pointer'.)

Similarly, to use the generic `fftw_import_wisdom', you would define
a callback function of the form:

     integer(C_INT) function my_read_char(p) bind(C)
       use, intrinsic :: iso_c_binding
       type(C_PTR), value :: p
       character :: c
       _...read a character c..._
       my_read_char = ichar(c, C_INT)
     end function my_read_char

     ....

     integer(C_INT) :: ret
     ret = fftw_import_wisdom(c_funloc(my_read_char), p)
     if (ret .eq. 0) stop 'error importing wisdom'

Your function can return `-1' if the end of the input is reached.
Again, `p' is an arbitrary `type(C_PTR)' that is passed through to
your function.  `fftw_import_wisdom' returns `0' if an error occurred
and nonzero otherwise.


File: fftw3.info, Node: Defining an FFTW module, Prev: Accessing the wisdom API from Fortran, Up: Calling FFTW from Modern Fortran

7.7 Defining an FFTW module
===========================

Rather than using the `include' statement to include the `fftw3.f03'
interface file in any subroutine where you want to use FFTW, you might
prefer to define an FFTW Fortran module.  FFTW does not install itself
as a module, primarily because `fftw3.f03' can be shared between
different Fortran compilers while modules (in general) cannot.
However, it is trivial to define your own FFTW module if you want.
Just create a file containing:

     module FFTW3
       use, intrinsic :: iso_c_binding
       include 'fftw3.f03'
     end module

Compile this file into a module as usual for your compiler (e.g. with
`gfortran -c' you will get a file `fftw3.mod').  Now, instead of
`include 'fftw3.f03'', whenever you want to use FFTW routines you can
just do:

     use FFTW3

as usual for Fortran modules.  (You still need to link to the FFTW
library, of course.)
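
For example, a simple transform routine built on such a module might
look like this (a minimal sketch; the routine name, the length `n',
and the input data are placeholders, and ordinary arrays like these
may not be SIMD-aligned; *note Allocating aligned memory in
Fortran::):

     subroutine demo_fft(n)
       use FFTW3
       implicit none
       integer(C_INT), intent(in) :: n
       complex(C_DOUBLE_COMPLEX), dimension(n) :: in, out
       type(C_PTR) :: plan

       in = (1.0_C_DOUBLE, 0.0_C_DOUBLE)   ! placeholder input data
       plan = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE)
       call fftw_execute_dft(plan, in, out)
       call fftw_destroy_plan(plan)
     end subroutine demo_fft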


File: fftw3.info, Node: Calling FFTW from Legacy Fortran, Next: Upgrading from FFTW version 2, Prev: Calling FFTW from Modern Fortran, Up: Top

8 Calling FFTW from Legacy Fortran
**********************************

This chapter describes the interface to FFTW callable by Fortran code
in older compilers not supporting the Fortran 2003 C interoperability
features (*note Calling FFTW from Modern Fortran::).  This interface
has the major disadvantage that it is not type-checked, so if you
mistake the argument types or ordering then your program will not have
any compiler errors, and will likely crash at runtime.  So, greater
care is needed.  Also, technically interfacing older Fortran versions
to C is nonstandard, but in practice we have found that the techniques
used in this chapter have worked with all known Fortran compilers for
many years.

The legacy Fortran interface differs from the C interface only in the
prefix (`dfftw_' instead of `fftw_' in double precision) and a few
other minor details.  This Fortran interface is included in the FFTW
libraries by default, unless a Fortran compiler isn't found on your
system or `--disable-fortran' is included in the `configure' flags.
We assume here that the reader is already familiar with the usage of
FFTW in C, as described elsewhere in this manual.

The MPI parallel interface to FFTW is _not_ currently available to
legacy Fortran.

* Menu:

* Fortran-interface routines::
* FFTW Constants in Fortran::
* FFTW Execution in Fortran::
* Fortran Examples::
* Wisdom of Fortran?::


File: fftw3.info, Node: Fortran-interface routines, Next: FFTW Constants in Fortran, Prev: Calling FFTW from Legacy Fortran, Up: Calling FFTW from Legacy Fortran

8.1 Fortran-interface routines
==============================

Nearly all of the FFTW functions have Fortran-callable equivalents.
The name of the legacy Fortran routine is the same as that of the
corresponding C routine, but with the `fftw_' prefix replaced by
`dfftw_'.(1)  The single and long-double precision versions use
`sfftw_' and `lfftw_', respectively, instead of `fftwf_' and `fftwl_';
quadruple precision (`real*16') is available on some systems as
`fftwq_' (*note Precision::).  (Note that `long double' on x86
hardware is usually at most 80-bit extended precision, _not_ quadruple
precision.)

For the most part, all of the arguments to the functions are the same,
with the following exceptions:

   * `plan' variables (what would be of type `fftw_plan' in C) must be
     declared as a type that is at least as big as a pointer (address)
     on your machine.  We recommend using `integer*8' everywhere,
     since this should always be big enough.

   * Any function that returns a value (e.g. `fftw_plan_dft') is
     converted into a _subroutine_.
The return value is converted into an additional _first_ parameter of this subroutine.(2)

   * The Fortran routines expect multi-dimensional arrays to be in _column-major_ order, which is the ordinary format of Fortran arrays (*note Multi-dimensional Array Format::).  They do this transparently and at no cost, simply by reversing the order of the dimensions passed to FFTW, but this has one important consequence for multi-dimensional real-complex transforms, discussed below.

   * Wisdom import and export is somewhat more tricky because one cannot easily pass files or strings between C and Fortran; see *note Wisdom of Fortran?::.

   * Legacy Fortran cannot use the `fftw_malloc' dynamic-allocation routine.  If you want to exploit the SIMD FFTW (*note SIMD alignment and fftw_malloc::), you'll need to figure out some other way to ensure that your arrays are at least 16-byte aligned.

   * Since Fortran 77 does not have data structures, the `fftw_iodim' structure from the guru interface (*note Guru vector and transform sizes::) must be split into separate arguments.  In particular, any `fftw_iodim' array arguments in the C guru interface become three integer array arguments (`n', `is', and `os') in the Fortran guru interface, all of whose lengths should be equal to the corresponding `rank' argument.

   * The guru planner interface in Fortran does _not_ do any automatic translation between column-major and row-major; you are responsible for setting the strides etcetera to correspond to your Fortran arrays.  However, as a slight bug that we are preserving for backwards compatibility, `plan_guru_r2r' in Fortran _does_ reverse the order of its `kind' array parameter, so the `kind' array of that routine should be in the reverse of the order of the iodim arrays (see above).

In general, you should take care to use Fortran data types that correspond to (i.e. are the same size as) the C types used by FFTW.  In practice, this correspondence is usually straightforward (i.e. `integer' corresponds to `int', `real' corresponds to `float', etcetera).  The native Fortran double/single-precision complex type should be compatible with `fftw_complex'/`fftwf_complex'.  Such simple correspondences are assumed in the examples below.

---------- Footnotes ----------

(1) Technically, Fortran 77 identifiers are not allowed to have more than 6 characters, nor may they contain underscores.  Any compiler that enforces this limitation doesn't deserve to link to FFTW.

(2) The reason for this is that some Fortran implementations seem to have trouble with C function return values, and vice versa.

File: fftw3.info,  Node: FFTW Constants in Fortran,  Next: FFTW Execution in Fortran,  Prev: Fortran-interface routines,  Up: Calling FFTW from Legacy Fortran

8.2 FFTW Constants in Fortran
=============================

When creating plans in FFTW, a number of constants are used to specify options, such as `FFTW_MEASURE' or `FFTW_ESTIMATE'.
The same constants must be used with the wrapper routines, but of course the C header files where the constants are defined can't be incorporated directly into Fortran code.

Instead, we have placed Fortran equivalents of the FFTW constant definitions in the file `fftw3.f', which can be found in the same directory as `fftw3.h'.  If your Fortran compiler supports a preprocessor of some sort, you should be able to `include' or `#include' this file; otherwise, you can paste it directly into your code.

In C, you combine different flags (like `FFTW_PRESERVE_INPUT' and `FFTW_MEASURE') using the `|' operator; in Fortran you should just use `+'.  (Take care not to add in the same flag more than once, though.  Alternatively, you can use the `ior' intrinsic function standardized in Fortran 95.)

File: fftw3.info,  Node: FFTW Execution in Fortran,  Next: Fortran Examples,  Prev: FFTW Constants in Fortran,  Up: Calling FFTW from Legacy Fortran

8.3 FFTW Execution in Fortran
=============================

In C, in order to use a plan, one normally calls `fftw_execute', which executes the plan to perform the transform on the input/output arrays passed when the plan was created (*note Using Plans::).  The corresponding subroutine call in legacy Fortran is:

     call dfftw_execute(plan)

However, we have had reports that this causes problems with some recent optimizing Fortran compilers.  The problem is that, because the input/output arrays are not passed as explicit arguments to `dfftw_execute', the semantics of Fortran (unlike C) allow the compiler to assume that the input/output arrays are not changed by `dfftw_execute'.  As a consequence, certain compilers end up optimizing out or repositioning the call to `dfftw_execute', assuming incorrectly that it does nothing.

There are various workarounds to this, but the safest and simplest thing is to not use `dfftw_execute' in Fortran.  Instead, use the functions described in *note New-array Execute Functions::, which take the input/output arrays as explicit arguments.  For example, if the plan is for a complex-data DFT and was created for the arrays `in' and `out', you would do:

     call dfftw_execute_dft(plan, in, out)

There are a few things to be careful of, however:

   * You must use the correct type of execute function, matching the way the plan was created.  Complex DFT plans should use `dfftw_execute_dft', real-input (r2c) DFT plans should use `dfftw_execute_dft_r2c', and real-output (c2r) DFT plans should use `dfftw_execute_dft_c2r'.  The various r2r plans should use `dfftw_execute_r2r'.

   * You should normally pass the same input/output arrays that were used when creating the plan.  This is always safe.

   * _If_ you pass _different_ input/output arrays compared to those used when creating the plan, you must abide by all the restrictions of the new-array execute functions (*note New-array Execute Functions::).
The most difficult of these, in Fortran, is the requirement that the new arrays have the same alignment as the original arrays, because there seems to be no way in legacy Fortran to obtain guaranteed-aligned arrays (analogous to `fftw_malloc' in C).  You can, of course, use the `FFTW_UNALIGNED' flag when creating the plan, in which case the plan does not depend on the alignment, but this may sacrifice substantial performance on architectures (like x86) with SIMD instructions (*note SIMD alignment and fftw_malloc::).

File: fftw3.info,  Node: Fortran Examples,  Next: Wisdom of Fortran?,  Prev: FFTW Execution in Fortran,  Up: Calling FFTW from Legacy Fortran

8.4 Fortran Examples
====================

In C, you might have something like the following to transform a one-dimensional complex array:

     fftw_complex in[N], out[N];
     fftw_plan plan;

     plan = fftw_plan_dft_1d(N,in,out,FFTW_FORWARD,FFTW_ESTIMATE);
     fftw_execute(plan);
     fftw_destroy_plan(plan);

In Fortran, you would use the following to accomplish the same thing:

     double complex in, out
     dimension in(N), out(N)
     integer*8 plan

     call dfftw_plan_dft_1d(plan,N,in,out,FFTW_FORWARD,FFTW_ESTIMATE)
     call dfftw_execute_dft(plan, in, out)
     call dfftw_destroy_plan(plan)

Notice how all routines are called as Fortran subroutines, and the plan is returned via the first argument to `dfftw_plan_dft_1d'.  Notice also that we changed `fftw_execute' to `dfftw_execute_dft' (*note FFTW Execution in Fortran::).  To do the same thing, but using 8 threads in parallel (*note Multi-threaded FFTW::), you would simply prefix these calls with:

     integer iret
     call dfftw_init_threads(iret)
     call dfftw_plan_with_nthreads(8)

(You might want to check the value of `iret': if it is zero, it indicates an unlikely error during thread initialization.)

To transform a three-dimensional array in-place with C, you might do:

     fftw_complex arr[L][M][N];
     fftw_plan plan;

     plan = fftw_plan_dft_3d(L,M,N, arr,arr,
                             FFTW_FORWARD, FFTW_ESTIMATE);
     fftw_execute(plan);
     fftw_destroy_plan(plan);

In Fortran, you would use this instead:

     double complex arr
     dimension arr(L,M,N)
     integer*8 plan

     call dfftw_plan_dft_3d(plan, L,M,N, arr,arr,
    &                       FFTW_FORWARD, FFTW_ESTIMATE)
     call dfftw_execute_dft(plan, arr, arr)
     call dfftw_destroy_plan(plan)

Note that we pass the array dimensions in the "natural" order in both C and Fortran.
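
If you later want to apply the same plan to a second array of identical size and layout, you can call the new-array execute function from the previous section again; a minimal sketch, where `arr2' is a hypothetical second array declared exactly like `arr':

     double complex arr2
     dimension arr2(L,M,N)

     call dfftw_execute_dft(plan, arr2, arr2)

Remember that unless the plan was created with the `FFTW_UNALIGNED' flag, `arr2' must also have the same alignment as the original array (*note FFTW Execution in Fortran::).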

To transform a one-dimensional real array in Fortran, you might do:

     double precision in
     dimension in(N)
     double complex out
     dimension out(N/2 + 1)
     integer*8 plan

     call dfftw_plan_dft_r2c_1d(plan,N,in,out,FFTW_ESTIMATE)
     call dfftw_execute_dft_r2c(plan, in, out)
     call dfftw_destroy_plan(plan)

To transform a two-dimensional real array, out of place, you might use the following:

     double precision in
     dimension in(M,N)
     double complex out
     dimension out(M/2 + 1, N)
     integer*8 plan

     call dfftw_plan_dft_r2c_2d(plan,M,N,in,out,FFTW_ESTIMATE)
     call dfftw_execute_dft_r2c(plan, in, out)
     call dfftw_destroy_plan(plan)

*Important:* Notice that it is the _first_ dimension of the complex output array that is cut in half in Fortran, rather than the last dimension as in C.  This is a consequence of the interface routines reversing the order of the array dimensions passed to FFTW so that the Fortran program can use its ordinary column-major order.

File: fftw3.info,  Node: Wisdom of Fortran?,  Prev: Fortran Examples,  Up: Calling FFTW from Legacy Fortran

8.5 Wisdom of Fortran?
======================

In this section, we discuss how one can import/export FFTW wisdom (saved plans) to/from a Fortran program; we assume that the reader is already familiar with wisdom, as described in *note Words of Wisdom-Saving Plans::.

The basic problem is that it is difficult to (portably) pass files and strings between Fortran and C, so we cannot provide a direct Fortran equivalent to the `fftw_export_wisdom_to_file', etcetera, functions.  Fortran interfaces _are_ provided for the functions that do not take file/string arguments, however: `dfftw_import_system_wisdom', `dfftw_import_wisdom', `dfftw_export_wisdom', and `dfftw_forget_wisdom'.

So, for example, to import the system-wide wisdom, you would do:

     integer isuccess
     call dfftw_import_system_wisdom(isuccess)

As usual, the C return value is turned into a first parameter; `isuccess' is non-zero on success and zero on failure (e.g. if there is no system wisdom installed).

If you want to import/export wisdom from/to an arbitrary file or elsewhere, you can employ the generic `dfftw_import_wisdom' and `dfftw_export_wisdom' functions, for which you must supply a subroutine to read/write one character at a time.  The FFTW package contains an example file `doc/f77_wisdom.f' demonstrating how to implement `import_wisdom_from_file' and `export_wisdom_to_file' subroutines in this way.  (These routines cannot be compiled into the FFTW library itself, lest all FFTW-using programs be required to link with the Fortran I/O library.)
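
For illustration, an export helper in the spirit of that example file might look something like the following sketch; the unit number is simply passed through to the callback as the user-data argument, and the `$' edit descriptor (a common but nonstandard way of suppressing the newline) and the exact formatting are illustrative, so consult `doc/f77_wisdom.f' for a tested version:

     subroutine write_char(c, iunit)
     character c
     integer iunit
     write(iunit,321) c
 321 format(a,$)
     end

     subroutine export_wisdom_to_file(iunit)
     integer iunit
     external write_char
     call dfftw_export_wisdom(write_char, iunit)
     end

The import direction works analogously with `dfftw_import_wisdom' and a corresponding character-reading routine; again, `doc/f77_wisdom.f' shows the exact conventions that FFTW expects.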

File: fftw3.info,  Node: Upgrading from FFTW version 2,  Next: Installation and Customization,  Prev: Calling FFTW from Legacy Fortran,  Up: Top

9 Upgrading from FFTW version 2
*******************************

In this chapter, we outline the process for updating codes designed for the older FFTW 2 interface to work with FFTW 3.  The interface for FFTW 3 is not backwards-compatible with the interface for FFTW 2 and earlier versions; codes written to use those versions will fail to link with FFTW 3.  Nor is it possible to write "compatibility wrappers" to bridge the gap (at least not efficiently), because FFTW 3 has different semantics from previous versions.  However, upgrading should be a straightforward process because the data formats are identical and the overall style of planning/execution is essentially the same.

Unlike FFTW 2, there are no separate header files for real and complex transforms (or even for different precisions) in FFTW 3; all interfaces are defined in the `<fftw3.h>' header file.

Numeric Types
=============

The main difference in data types is that `fftw_complex' in FFTW 2 was defined as a `struct' with macros `c_re' and `c_im' for accessing the real/imaginary parts.  (This is binary-compatible with FFTW 3 on any machine except perhaps for some older Crays in single precision.)  The equivalent macros for FFTW 3 are:

     #define c_re(c) ((c)[0])
     #define c_im(c) ((c)[1])

This does not work if you are using the C99 complex type, however, unless you insert a `double*' typecast into the above macros (*note Complex numbers::).

Also, FFTW 2 had an `fftw_real' typedef that was an alias for `double' (in double precision).  In FFTW 3 you should just use `double' (or whatever precision you are employing).

Plans
=====

The major difference between FFTW 2 and FFTW 3 is in the planning/execution division of labor.  In FFTW 2, plans were found for a given transform size and type, and then could be applied to _any_ arrays and for _any_ multiplicity/stride parameters.  In FFTW 3, you specify the particular arrays, stride parameters, etcetera when creating the plan, and the plan is then executed for _those_ arrays (unless the guru interface is used) and _those_ parameters _only_.  (FFTW 2 had "specific planner" routines that planned for a particular array and stride, but the plan could still be used for other arrays and strides.)  That is, much of the information that was formerly specified at execution time is now specified at planning time.

Like FFTW 2's specific planner routines, the FFTW 3 planner overwrites the input/output arrays unless you use `FFTW_ESTIMATE'.

FFTW 2 had separate data types `fftw_plan', `fftwnd_plan', `rfftw_plan', and `rfftwnd_plan' for complex and real one- and multi-dimensional transforms, and each type had its own `destroy' function.  In FFTW 3, all plans are of type `fftw_plan' and all are destroyed by `fftw_destroy_plan(plan)'.

Where you formerly used `fftw_create_plan' and `fftw_one' to plan and compute a single 1d transform, you would now use `fftw_plan_dft_1d' to plan the transform.  If you used the generic `fftw' function to execute the transform with multiplicity (`howmany') and stride parameters, you would now use the advanced interface `fftw_plan_many_dft' to specify those parameters.  The plans are now executed with `fftw_execute(plan)', which takes all of its parameters (including the input/output arrays) from the plan.

In-place transforms no longer interpret their output argument as scratch space, nor is there an `FFTW_IN_PLACE' flag.  You simply pass the same pointer for both the input and output arguments.  (Previously, the output `ostride' and `odist' parameters were ignored for in-place transforms; now, if they are specified via the advanced interface, they are significant even in the in-place case, although they should normally equal the corresponding input parameters.)

The `FFTW_ESTIMATE' and `FFTW_MEASURE' flags have the same meaning as before, although the planning time will differ.  You may also consider using `FFTW_PATIENT', which is like `FFTW_MEASURE' except that it takes more time in order to consider a wider variety of algorithms.

For multi-dimensional complex DFTs, instead of `fftwnd_create_plan' (or `fftw2d_create_plan' or `fftw3d_create_plan'), followed by `fftwnd_one', you would use `fftw_plan_dft' (or `fftw_plan_dft_2d' or `fftw_plan_dft_3d'), followed by `fftw_execute'.  If you used `fftwnd' to specify strides etcetera, you would instead specify these via `fftw_plan_many_dft'.

The analogues to `rfftw_create_plan' and `rfftw_one' with `FFTW_REAL_TO_COMPLEX' or `FFTW_COMPLEX_TO_REAL' directions are `fftw_plan_r2r_1d' with kind `FFTW_R2HC' or `FFTW_HC2R', followed by `fftw_execute'.  The stride etcetera arguments of `rfftw' are now in `fftw_plan_many_r2r'.

Instead of `rfftwnd_create_plan' (or `rfftw2d_create_plan' or `rfftw3d_create_plan') followed by `rfftwnd_one_real_to_complex' or `rfftwnd_one_complex_to_real', you now use `fftw_plan_dft_r2c' (or `fftw_plan_dft_r2c_2d' or `fftw_plan_dft_r2c_3d') or `fftw_plan_dft_c2r' (or `fftw_plan_dft_c2r_2d' or `fftw_plan_dft_c2r_3d'), respectively, followed by `fftw_execute'.  As usual, the strides etcetera of `rfftwnd_real_to_complex' or `rfftwnd_complex_to_real' are now specified in the advanced planner routines, `fftw_plan_many_dft_r2c' or `fftw_plan_many_dft_c2r'.

Wisdom
======

In FFTW 2, you had to supply the `FFTW_USE_WISDOM' flag in order to use wisdom; in FFTW 3, wisdom is always used.  (You could simulate the FFTW 2 wisdom-less behavior by calling `fftw_forget_wisdom' after every planner call.)

The FFTW 3 wisdom import/export routines are almost the same as before (although the storage format is entirely different).  There is one significant difference, however.
In FFTW 2, the import routines would never read past the end of the wisdom, so you could store extra data beyond the wisdom in the same file, for example.  In FFTW 3, the file-import routine may read up to a few hundred bytes past the end of the wisdom, so you cannot store other data just beyond it.(1)

Wisdom has been enhanced by additional humility in FFTW 3: whereas FFTW 2 would re-use wisdom for a given transform size regardless of the stride etc., in FFTW 3 wisdom is only used with the strides etc. for which it was created.  Unfortunately, this means FFTW 3 has to create new plans from scratch more often than FFTW 2 (in FFTW 2, planning e.g. one transform of size 1024 also created wisdom for all smaller powers of 2, but this no longer occurs).

FFTW 3 also has the new routine `fftw_import_system_wisdom' to import wisdom from a standard system-wide location.

Memory allocation
=================

In FFTW 3, we recommend allocating your arrays with `fftw_malloc' and deallocating them with `fftw_free'; this is not required, but allows optimal performance when SIMD acceleration is used.  (Those two functions actually existed in FFTW 2, and worked the same way, but were not documented.)

In FFTW 2, there were `fftw_malloc_hook' and `fftw_free_hook' functions that allowed the user to replace FFTW's memory-allocation routines (e.g. to implement different error-handling, since by default FFTW prints an error message and calls `exit' to abort the program if `malloc' returns `NULL').  These hooks are not supported in FFTW 3; those few users who require this functionality can just directly modify the memory-allocation routines in FFTW (they are defined in `kernel/alloc.c').

Fortran interface
=================

In FFTW 2, the subroutine names were obtained by replacing `fftw_' with `fftw_f77'; in FFTW 3, you replace `fftw_' with `dfftw_' (or `sfftw_' or `lfftw_', depending upon the precision).

In FFTW 3, we have begun recommending that you always declare the type used to store plans as `integer*8'.  (Too many people didn't notice our instruction to switch from `integer' to `integer*8' for 64-bit machines.)

In FFTW 3, we provide a `fftw3.f' "header file" to include in your code (and which is officially installed on Unix systems).  (In FFTW 2, we supplied a `fftw_f77.i' file, but it was not installed.)

Otherwise, the C-Fortran interface relationship is much the same as it was before (e.g. return values become initial parameters, and multi-dimensional arrays are in column-major order).  Unlike FFTW 2, we do provide some support for wisdom import/export in Fortran (*note Wisdom of Fortran?::).
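
For example, a single 1d complex transform that an FFTW 2 program might have planned and executed with the `fftw_f77_'-prefixed routines, roughly as follows (a from-memory sketch of the old convention; check your existing code or the FFTW 2 manual for the exact form):

     call fftw_f77_create_plan(plan, N, FFTW_FORWARD, FFTW_ESTIMATE)
     call fftw_f77_one(plan, in, out)
     call fftw_f77_destroy_plan(plan)

would become the following under the FFTW 3 legacy interface (*note Fortran Examples::), with `plan' declared as `integer*8' and the arrays now supplied to the planner rather than to the execution call:

     call dfftw_plan_dft_1d(plan, N, in, out, FFTW_FORWARD, FFTW_ESTIMATE)
     call dfftw_execute_dft(plan, in, out)
     call dfftw_destroy_plan(plan)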

Threads
=======

Like FFTW 2, only the execution routines are thread-safe.  All planner routines, etcetera, should be called by only a single thread at a time (*note Thread safety::).  _Unlike_ FFTW 2, there is no special `FFTW_THREADSAFE' flag for the planner to allow a given plan to be usable by multiple threads in parallel; this is now the case by default.

The multi-threaded version of FFTW 2 required you to pass the number of threads each time you executed the transform.  The number of threads is now stored in the plan, and is specified before the planner is called by `fftw_plan_with_nthreads'.  The threads initialization routine used to be called `fftw_threads_init' and would return zero on success; the new routine is called `fftw_init_threads' and returns zero on failure.  *Note Multi-threaded FFTW::.

There is no separate threads header file in FFTW 3; all the function prototypes are in `<fftw3.h>'.  However, you still have to link to a separate library (`-lfftw3_threads -lfftw3 -lm' on Unix), as well as to the threading library (e.g. POSIX threads on Unix).

---------- Footnotes ----------

(1) We do our own buffering because GNU libc I/O routines are horribly slow for single-character I/O, apparently for thread-safety reasons (whether you are using threads or not).

File: fftw3.info,  Node: Installation and Customization,  Next: Acknowledgments,  Prev: Upgrading from FFTW version 2,  Up: Top

10 Installation and Customization
*********************************

This chapter describes the installation and customization of FFTW, the latest version of which may be downloaded from the FFTW home page (http://www.fftw.org).

In principle, FFTW should work on any system with an ANSI C compiler (`gcc' is fine).  However, planner time is drastically reduced if FFTW can exploit a hardware cycle counter; FFTW comes with cycle-counter support for all modern general-purpose CPUs, but you may need to add a couple of lines of code if your compiler is not yet supported (*note Cycle Counters::).  (On Unix, there will be a warning at the end of the `configure' output if no cycle counter is found.)

Installation of FFTW is simplest if you have a Unix or a GNU system, such as GNU/Linux, and we describe this case in the first section below, including the use of special configuration options to e.g. install different precisions or exploit optimizations for particular architectures (e.g. SIMD).  Compilation on non-Unix systems is a more manual process, but we outline the procedure in the second section.  It is also likely that pre-compiled binaries will be available for popular systems.

Finally, we describe how you can customize FFTW for particular needs by generating _codelets_ for fast transforms of sizes not supported efficiently by the standard FFTW distribution.

* Menu:

* Installation on Unix::
* Installation on non-Unix systems::
* Cycle Counters::
* Generating your own code::

File: fftw3.info,  Node: Installation on Unix,  Next: Installation on non-Unix systems,  Prev: Installation and Customization,  Up: Installation and Customization

10.1 Installation on Unix
=========================

FFTW comes with a `configure' program in the GNU style.  Installation can be as simple as:

     ./configure
     make
     make install

This will build the uniprocessor complex and real transform libraries along with the test programs.  (We recommend that you use GNU `make' if it is available; on some systems it is called `gmake'.)  The "`make install'" command installs the fftw and rfftw libraries in standard places, and typically requires root privileges (unless you specify a different install directory with the `--prefix' flag to `configure').  You can also type "`make check'" to put the FFTW test programs through their paces.  If you have problems during configuration or compilation, you may want to run "`make distclean'" before trying again; this ensures that you don't have any stale files left over from previous compilation attempts.

The `configure' script chooses the `gcc' compiler by default, if it is available; you can select some other compiler with:

     ./configure CC="<the name of your C compiler>"

The `configure' script knows good `CFLAGS' (C compiler flags) for a few systems.  If your system is not known, the `configure' script will print out a warning.  In this case, you should re-configure FFTW with the command

     ./configure CFLAGS="<write your CFLAGS here>"

and then compile as usual.  If you do find an optimal set of `CFLAGS' for your system, please let us know what they are (along with the output of `config.guess') so that we can include them in future releases.

`configure' supports all the standard flags defined by the GNU Coding Standards; see the `INSTALL' file in FFTW or the GNU web page (http://www.gnu.org/prep/standards/html_node/index.html).  Note especially `--help' to list all flags and `--enable-shared' to create shared, rather than static, libraries.  `configure' also accepts a few FFTW-specific flags, particularly:

   * `--enable-float': Produces a single-precision version of FFTW (`float') instead of the default double-precision (`double').  *Note Precision::.

   * `--enable-long-double': Produces a long-double precision version of FFTW (`long double') instead of the default double-precision (`double').  The `configure' script will halt with an error message if `long double' is the same size as `double' on your machine/compiler.  *Note Precision::.

   * `--enable-quad-precision': Produces a quadruple-precision version of FFTW using the nonstandard `__float128' type provided by `gcc' 4.6 or later on x86, x86-64, and Itanium architectures, instead of the default double-precision (`double').
The `configure' script will halt with an error message if the compiler is not `gcc' version 4.6 or later or if `gcc''s `libquadmath' library is not installed.  *Note Precision::.

   * `--enable-threads': Enables compilation and installation of the FFTW threads library (*note Multi-threaded FFTW::), which provides a simple interface to parallel transforms for SMP systems.  By default, the threads routines are not compiled.

   * `--enable-openmp': Like `--enable-threads', but using OpenMP compiler directives in order to induce parallelism rather than spawning its own threads directly, and installing an `fftw3_omp' library rather than an `fftw3_threads' library (*note Multi-threaded FFTW::).  You can use both `--enable-openmp' and `--enable-threads' since they compile/install libraries with different names.  By default, the OpenMP routines are not compiled.

   * `--with-combined-threads': By default, if `--enable-threads' is used, the threads support is compiled into a separate library that must be linked in addition to the main FFTW library.  This is so that users of the serial library do not need to link the system threads libraries.  If `--with-combined-threads' is specified, however, then no separate threads library is created, and threads are included in the main FFTW library.  This is mainly useful under Windows, where no system threads library is required and inter-library dependencies are problematic.

   * `--enable-mpi': Enables compilation and installation of the FFTW MPI library (*note Distributed-memory FFTW with MPI::), which provides parallel transforms for distributed-memory systems with MPI.  (By default, the MPI routines are not compiled.)  *Note FFTW MPI Installation::.

   * `--disable-fortran': Disables inclusion of legacy-Fortran wrapper routines (*note Calling FFTW from Legacy Fortran::) in the standard FFTW libraries.  These wrapper routines increase the library size by only a negligible amount, so they are included by default as long as the `configure' script finds a Fortran compiler on your system.  (To specify a particular Fortran compiler _foo_, pass `F77=foo' to `configure'.)

   * `--with-g77-wrappers': By default, when Fortran wrappers are included, the wrappers employ the linking conventions of the Fortran compiler detected by the `configure' script.  If this compiler is GNU `g77', however, then _two_ versions of the wrappers are included: one with `g77''s idiosyncratic convention of appending two underscores to identifiers, and one with the more common convention of appending only a single underscore.  This way, the same FFTW library will work with both `g77' and other Fortran compilers, such as GNU `gfortran'.  However, the converse is not true: if you configure with a different compiler, then the `g77'-compatible wrappers are not included.  By specifying `--with-g77-wrappers', the `g77'-compatible wrappers are included in addition to wrappers for whatever Fortran compiler `configure' finds.

   * `--with-slow-timer': Disables the use of hardware cycle counters, and falls back on `gettimeofday' or `clock'.  This greatly worsens performance, and should generally not be used (unless you don't have a cycle counter but still really want an optimized plan regardless of the time).  *Note Cycle Counters::.

   * `--enable-sse', `--enable-sse2', `--enable-avx', `--enable-altivec', `--enable-neon': Enable the compilation of SIMD code for SSE (Pentium III+), SSE2 (Pentium IV+), AVX (Sandy Bridge, Interlagos), AltiVec (PowerPC G4+), and NEON (some ARM processors).  SSE, AltiVec, and NEON only work with `--enable-float' (above).  SSE2 works in both single and double precision (and is simply SSE in single precision).  The resulting code will _still work_ on earlier CPUs lacking the SIMD extensions (SIMD is automatically disabled, although the FFTW library is still larger).

     - These options require a compiler supporting SIMD extensions, and compiler support is always a bit flaky: see the FFTW FAQ for a list of compiler versions that have problems compiling FFTW.

     - With AltiVec and `gcc', you may have to use the `-mabi=altivec' option when compiling any code that links to FFTW, in order to properly align the stack; otherwise, FFTW could crash when it tries to use an AltiVec feature.  (This is not necessary on MacOS X.)

     - With SSE/SSE2 and `gcc', you should use a version of `gcc' that properly aligns the stack when compiling any code that links to FFTW.  By default, `gcc' 2.95 and later versions align the stack as needed, but you should not compile FFTW with the `-Os' option or the `-mpreferred-stack-boundary' option with an argument less than 4.

     - Because of the large variety of ARM processors and ABIs, FFTW does not attempt to guess the correct `gcc' flags for generating NEON code.  In general, you will have to provide them on the command line.  This command line is known to have worked at least once:

          ./configure --with-slow-timer --host=arm-linux-gnueabi \
            --enable-single --enable-neon \
            "CC=arm-linux-gnueabi-gcc -march=armv7-a -mfloat-abi=softfp"

To force `configure' to use a particular C compiler _foo_ (instead of the default, usually `gcc'), pass `CC=foo' to the `configure' script; you may also need to set the flags via the variable `CFLAGS' as described above.

File: fftw3.info,  Node: Installation on non-Unix systems,  Next: Cycle Counters,  Prev: Installation on Unix,  Up: Installation and Customization

10.2 Installation on non-Unix systems
=====================================

It should be relatively straightforward to compile FFTW even on non-Unix systems lacking the niceties of a `configure' script.  Basically, you need to edit the `config.h' header (copy it from `config.h.in') to `#define' the various options and compiler characteristics, and then compile all the `.c' files in the relevant directories.

The `config.h' header contains about 100 options to set, each one initially an `#undef', each documented with a comment, and most of them fairly obvious.  For most of the options, you should simply `#define' them to `1' if they are applicable, although a few options require a particular value (e.g. `SIZEOF_LONG_LONG' should be defined to the size of the `long long' type, in bytes, or zero if it is not supported).  We will likely post some sample `config.h' files for various operating systems and compilers for you to use (at least as a starting point).  Please let us know if you have to hand-create a configuration file (and/or a pre-compiled binary) that you want to share.

To create the FFTW library, you will then need to compile all of the `.c' files in the `kernel', `dft', `dft/scalar', `dft/scalar/codelets', `rdft', `rdft/scalar', `rdft/scalar/r2cf', `rdft/scalar/r2cb', `rdft/scalar/r2r', `reodft', and `api' directories.  If you are compiling with SIMD support (e.g. you defined `HAVE_SSE2' in `config.h'), then you also need to compile the `.c' files in the `simd-support', `{dft,rdft}/simd', and `{dft,rdft}/simd/*' directories.

Once these files are all compiled, link them into a library, or a shared library, or directly into your program.

To compile the FFTW test program, additionally compile the code in the `libbench2/' directory, and link it into a library.  Then compile the code in the `tests/' directory and link it to the `libbench2' and FFTW libraries.  To compile the `fftw-wisdom' (command-line) tool (*note Wisdom Utilities::), compile `tools/fftw-wisdom.c' and link it to the `libbench2' and FFTW libraries.

File: fftw3.info,  Node: Cycle Counters,  Next: Generating your own code,  Prev: Installation on non-Unix systems,  Up: Installation and Customization

10.3 Cycle Counters
===================

FFTW's planner actually executes and times different possible FFT algorithms in order to pick the fastest plan for a given n.  In order to do this in as short a time as possible, however, the timer must have a very high resolution, and to accomplish this we employ the hardware "cycle counters" that are available on most CPUs.  Currently, FFTW supports the cycle counters on x86, PowerPC/POWER, Alpha, UltraSPARC (SPARC v9), IA64, PA-RISC, and MIPS processors.

Access to the cycle counters, unfortunately, is a compiler and/or operating-system dependent task, often requiring inline assembly language, and it may be that your compiler is not supported.  If you are _not_ supported, FFTW will by default fall back on its estimator (effectively using `FFTW_ESTIMATE' for all plans).

You can add support by editing the file `kernel/cycle.h'; normally, this will involve adapting one of the examples already present in order to use the inline-assembler syntax for your C compiler, and will only require a couple of lines of code.  Anyone adding support for a new system to `cycle.h' is encouraged to email us at <fftw@fftw.org>.

If a cycle counter is not available on your system (e.g. some embedded processor), and you don't want to use estimated plans, as a last resort you can use the `--with-slow-timer' option to `configure' (on Unix) or `#define WITH_SLOW_TIMER' in `config.h' (elsewhere).  This will use the much lower-resolution `gettimeofday' function, or even `clock' if the former is unavailable, and planning will be extremely slow.

File: fftw3.info,  Node: Generating your own code,  Prev: Cycle Counters,  Up: Installation and Customization

10.4 Generating your own code
=============================

The directory `genfft' contains the programs that were used to generate FFTW's "codelets," which are hard-coded transforms of small sizes.  We do not expect casual users to employ the generator, which is a rather sophisticated program that generates directed acyclic graphs of FFT algorithms and performs algebraic simplifications on them.  It was written in Objective Caml, a dialect of ML, which is available at `http://caml.inria.fr/ocaml/index.en.html'.

If you have Objective Caml installed (along with recent versions of GNU `autoconf', `automake', and `libtool'), then you can change the set of codelets that are generated or play with the generation options.  The set of generated codelets is specified by the `{dft,rdft}/{codelets,simd}/*/Makefile.am' files.  For example, you can add efficient REDFT codelets of small sizes by modifying `rdft/codelets/r2r/Makefile.am'.  After you modify any `Makefile.am' files, you can type `sh bootstrap.sh' in the top-level directory followed by `make' to re-generate the files.

We do not provide more details about the code-generation process, since we do not expect that most users will need to generate their own code.  However, feel free to contact us at <fftw@fftw.org> if you are interested in the subject.

You might find it interesting to learn Caml and/or some modern programming techniques that we used in the generator (including monadic programming), especially if you heard the rumor that Java and object-oriented programming are the latest advancement in the field.  The internal operation of the codelet generator is described in the paper, "A Fast Fourier Transform Compiler," by M. Frigo, which is available from the FFTW home page (http://www.fftw.org) and also appeared in the `Proceedings of the 1999 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI)'.

File: fftw3.info,  Node: Acknowledgments,  Next: License and Copyright,  Prev: Installation and Customization,  Up: Top

11 Acknowledgments
******************

Matteo Frigo was supported in part by the Special Research Program SFB F011 "AURORA" of the Austrian Science Fund FWF and by MIT Lincoln Laboratory.  For previous versions of FFTW, he was supported in part by the Defense Advanced Research Projects Agency (DARPA), under Grants N00014-94-1-0985 and F30602-97-1-0270, and by a Digital Equipment Corporation Fellowship.

Steven G. Johnson was supported in part by a Dept. of Defense NDSEG Fellowship, an MIT Karl Taylor Compton Fellowship, and by the Materials Research Science and Engineering Center program of the National Science Foundation under award DMR-9400334.

Code for the Cell Broadband Engine was graciously donated to the FFTW project by the IBM Austin Research Lab and included in fftw-3.2.  (This code was removed in fftw-3.3.)

Code for the MIPS paired-single SIMD support was graciously donated to the FFTW project by CodeSourcery, Inc.

We are grateful to Sun Microsystems Inc. for its donation of a cluster of 9 8-processor Ultra HPC 5000 SMPs (24 Gflops peak).  These machines served as the primary platform for the development of early versions of FFTW.

We thank Intel Corporation for donating a four-processor Pentium Pro machine.  We thank the GNU/Linux community for giving us a decent OS to run on that machine.

We are thankful to the AMD corporation for donating an AMD Athlon XP 1700+ computer to the FFTW project.

We thank the Compaq/HP testdrive program and VA Software Corporation (SourceForge.net) for providing remote access to machines that were used to test FFTW.

The `genfft' suite of code generators was written using Objective Caml, a dialect of ML.  Objective Caml is a small and elegant language developed by Xavier Leroy.  The implementation is available from `http://caml.inria.fr/' (http://caml.inria.fr/).  In previous releases of FFTW, `genfft' was written in Caml Light, by the same authors.  An even earlier implementation of `genfft' was written in Scheme, but Caml is definitely better for this kind of application.

FFTW uses many tools from the GNU project, including `automake', `texinfo', and `libtool'.

Prof. Charles E. Leiserson of MIT provided continuous support and encouragement.  This program would not exist without him.  Charles also proposed the name "codelets" for the basic FFT blocks.

Prof. John D. Joannopoulos of MIT demonstrated continuing tolerance of Steven's "extra-curricular" computer-science activities, as well as remarkable creativity in working them into his grant proposals.  Steven's physics degree would not exist without him.

Franz Franchetti wrote SIMD extensions to FFTW 2, which eventually led to the SIMD support in FFTW 3.

Stefan Kral wrote most of the K7 code generator distributed with FFTW 3.0.x and 3.1.x.

Andrew Sterian contributed the Windows timing code in FFTW 2.

Didier Miras reported a bug in the test procedure used in FFTW 1.2.  We now use a completely different test algorithm by Funda Ergun that does not require a separate FFT program to compare against.

Wolfgang Reimer contributed the Pentium cycle counter and a few fixes that help portability.

Ming-Chang Liu uncovered a well-hidden bug in the complex transforms of FFTW 2.0 and supplied a patch to correct it.

The FFTW FAQ was written in `bfnn' (Bizarre Format With No Name) and formatted using the tools developed by Ian Jackson for the Linux FAQ.

_We are especially thankful to all of our users for their continuing support, feedback, and interest during our development of FFTW._

File: fftw3.info,  Node: License and Copyright,  Next: Concept Index,  Prev: Acknowledgments,  Up: Top

12 License and Copyright
************************

FFTW is Copyright (C) 2003, 2007-11 Matteo Frigo, Copyright (C) 2003, 2007-11 Massachusetts Institute of Technology.

FFTW is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.  You can also find the GPL on the GNU web site (http://www.gnu.org/licenses/gpl-2.0.html).

In addition, we kindly ask you to acknowledge FFTW and its authors in any program or publication in which you use FFTW.  (You are not _required_ to do so; it is up to your common sense to decide whether you want to comply with this request or not.)  For general publications, we suggest referencing: Matteo Frigo and Steven G. Johnson, "The design and implementation of FFTW3," Proc. IEEE 93 (2), 216-231 (2005).

Non-free versions of FFTW are available under terms different from those of the General Public License.  (e.g. they do not require you to accompany any object code using FFTW with the corresponding source code.)  For these alternative terms you must purchase a license from MIT's Technology Licensing Office.  Users interested in such a license should contact us (<fftw@fftw.org>) for more information.