FFTW 3.3.5: Multi-dimensional MPI DFTs of Real Data

Next: , Previous: , Up: Distributed-memory FFTW with MPI   [Contents][Index]


6.5 Multi-dimensional MPI DFTs of Real Data


FFTW’s MPI interface also supports multi-dimensional DFTs of real data, similar to the serial r2c and c2r interfaces. (Parallel one-dimensional real-data DFTs are not currently supported; you must use a complex transform and set the imaginary parts of the inputs to zero.)


The key points to understand for r2c and c2r MPI transforms (compared to the MPI complex DFTs or the serial r2c/c2r transforms) are:

- Just as for the serial transforms, the last dimension of the complex data is cut in half: L × M × N real data corresponds to L × M × (N/2+1) complex data. The sizes you pass to the r2c/c2r planner are the dimensions of the real data.

- The real data must be stored with its last dimension padded to twice the complex length, 2(N/2+1), much like the in-place serial r2c/c2r interface; in MPI, this padding is required even for out-of-place transforms.

- The data distribution for both the real and complex data is determined by the shape of the complex data: you call the local-size routine for the L × M × (N/2+1) complex dimensions, and the real data uses the same distribution except that the last (padded) dimension is twice as long.

For example, suppose we are performing an out-of-place r2c transform of L × M × N real data [padded to L × M × 2(N/2+1)], resulting in L × M × N/2+1 complex data. Similar to the 2d MPI example, we might do something like:

#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    const ptrdiff_t L = ..., M = ..., N = ...;
    fftw_plan plan;
    double *rin;
    fftw_complex *cout;
    ptrdiff_t alloc_local, local_n0, local_0_start, i, j, k;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* get local data size and allocate */
    alloc_local = fftw_mpi_local_size_3d(L, M, N/2+1, MPI_COMM_WORLD,
                                         &local_n0, &local_0_start);
    rin = fftw_alloc_real(2 * alloc_local);
    cout = fftw_alloc_complex(alloc_local);

    /* create plan for out-of-place r2c DFT */
    plan = fftw_mpi_plan_dft_r2c_3d(L, M, N, rin, cout, MPI_COMM_WORLD,
                                    FFTW_MEASURE);

    /* initialize rin to some function my_func(x,y,z) */
    for (i = 0; i < local_n0; ++i)
        for (j = 0; j < M; ++j)
            for (k = 0; k < N; ++k)
                rin[(i*M + j) * (2*(N/2+1)) + k]
                    = my_func(local_0_start + i, j, k);

    /* compute transforms as many times as desired */
    fftw_execute(plan);

    fftw_destroy_plan(plan);

    MPI_Finalize();
}

Note that we allocated rin using fftw_alloc_real with an argument of 2 * alloc_local: since alloc_local is the number of complex values to allocate, the number of real values is twice as many. The rin array is then local_n0 × M × 2(N/2+1) in row-major order, so its (i,j,k) element is at the index (i*M + j) * (2*(N/2+1)) + k (see Multi-dimensional Array Format).


As for the complex transforms, improved performance can be obtained by specifying that the output is the transpose of the input or vice versa (see Transposed distributions). In our L × M × N r2c example, including FFTW_MPI_TRANSPOSED_OUT in the flags means that the input would be a padded L × M × 2(N/2+1) real array distributed over the L dimension, while the output would be an M × L × N/2+1 complex array distributed over the M dimension. To perform the inverse c2r transform with the same data distributions, you would use the FFTW_MPI_TRANSPOSED_IN flag.
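A minimal sketch of how such a transposed-output plan might be created, assuming the same allocation strategy as the example above; the helper name plan_transposed_r2c is hypothetical, while the flag and the _transposed local-size routine are those declared in fftw3-mpi.h:

```c
#include <fftw3-mpi.h>

/* Hypothetical helper (assumes MPI and fftw_mpi_init have been called):
   plan an out-of-place r2c transform of L x M x N real data whose
   complex output is transposed to M x L x (N/2+1).  The _transposed
   local-size variant additionally reports the local slab of the first
   dimension of the transposed output. */
fftw_plan plan_transposed_r2c(ptrdiff_t L, ptrdiff_t M, ptrdiff_t N,
                              double **rin, fftw_complex **cout)
{
    ptrdiff_t alloc_local, local_n0, local_0_start, local_n1, local_1_start;

    /* pass the complex dimensions L x M x (N/2+1), as before */
    alloc_local = fftw_mpi_local_size_3d_transposed(
        L, M, N/2+1, MPI_COMM_WORLD,
        &local_n0, &local_0_start,    /* local slab of input (L dimension) */
        &local_n1, &local_1_start);   /* local slab of output (M dimension) */

    *rin = fftw_alloc_real(2 * alloc_local);
    *cout = fftw_alloc_complex(alloc_local);

    return fftw_mpi_plan_dft_r2c_3d(L, M, N, *rin, *cout, MPI_COMM_WORLD,
                                    FFTW_MEASURE | FFTW_MPI_TRANSPOSED_OUT);
}
```

The corresponding c2r plan would pass FFTW_MPI_TRANSPOSED_IN instead, keeping the same data distributions for both directions.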
