
Reusing FFTW wisdom on clusters


I'm running distributed MPI programs on clusters using multiple nodes, where I make use of the MPI FFTs of FFTW. To save time I reuse wisdom from one run to the next. To generate this wisdom, FFTW experiments with a number of different algorithms for the given problem. I am worried that, because I am working on a cluster, the best solution stored as wisdom for one set of CPUs/nodes may not be the best solution for some other set of CPUs/nodes performing the same task, and that I therefore should not reuse wisdom unless I am running on exactly the same CPUs/nodes as the run where the wisdom was gathered.

Is this correct, or is the wisdom somehow completely indifferent to the physical hardware on which it is generated?


Solution

  • If your cluster is homogeneous, the saved FFTW plans likely make sense, though the way the processes are connected may affect the optimal plans for MPI-related operations. But if your cluster is not homogeneous, reusing the saved FFTW plan can be suboptimal, and problems related to load balance could prove hard to solve.

    Taking a look at the wisdom files produced by fftw and fftw_mpi for a 2D c2c transform, I can see additional lines likely related to phases such as transposition, where MPI communications are required, for instance:

    (fftw_mpi_transpose_pairwise_register 0 #x1040 #x1040 #x0 #x394c59f5 #xf7d5729e #xe8cf4383 #xce624769)
    

    Indeed, there are different algorithms for transposing the 2D (or 3D) array: in the mpi folder of the FFTW sources, the files transpose-pairwise.c, transpose-alltoall.c and transpose-recurse.c implement these algorithms. As the flags FFTW_MEASURE or FFTW_EXHAUSTIVE are set, these algorithms are run and the fastest one is selected, as stated in the documentation. The result may depend on the topology of the network of processes (how many processes on each node? how are these nodes connected?). If the optimal plan depends on where the processes are running and on the network topology, reusing the wisdom will not help much. Otherwise, using the wisdom feature can save some time as the plan is built.
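    For instance, the wisdom can be read on one process and broadcast to the others before the plan is built. Below is a minimal sketch for a 2D c2c transform; the transform size N0 x N1 and the file name wisdommpi.txt are placeholders:

    #include <stdio.h>
    #include <fftw3-mpi.h>

    int main(int argc, char **argv)
    {
        const ptrdiff_t N0 = 1024, N1 = 1024;   /* placeholder transform size */

        MPI_Init(&argc, &argv);
        fftw_mpi_init();

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* rank 0 reads the wisdom file, if present, and broadcasts it to all processes */
        if (rank == 0) fftw_import_wisdom_from_filename("wisdommpi.txt");
        fftw_mpi_broadcast_wisdom(MPI_COMM_WORLD);

        /* local share of the distributed array */
        ptrdiff_t local_n0, local_0_start;
        ptrdiff_t alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                                       &local_n0, &local_0_start);
        fftw_complex *data = fftw_alloc_complex(alloc_local);
        for (ptrdiff_t i = 0; i < local_n0 * N1; ++i) { data[i][0] = 1.0; data[i][1] = 0.0; }

        /* planning is fast if matching wisdom was imported; otherwise FFTW_MEASURE
           times the candidate algorithms, including the MPI transpose variants */
        fftw_plan plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                              FFTW_FORWARD, FFTW_MEASURE);

        fftw_execute(plan);

        fftw_destroy_plan(plan);
        fftw_free(data);
        MPI_Finalize();
        return 0;
    }

    Such a program would typically be compiled with something like mpicc main.c -lfftw3_mpi -lfftw3 -lm.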

    To test whether the optimal plan changed, you can perform a couple of runs and save the resulting plan to files: a reproducibility test!

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* gather the wisdom of all processes and save it once, on rank 0 */
    fftw_mpi_gather_wisdom(MPI_COMM_WORLD);
    if (rank == 0) fftw_export_wisdom_to_filename("wisdommpi.txt");

    /* also save the plan of each process! Depending on the file system of the
       cluster, communications may be required to do so */
    char filename[42];
    snprintf(filename, sizeof filename, "wisdom%d.txt", rank);
    fftw_export_wisdom_to_filename(filename);
    

    Finally, to compare the produced wisdom files, try the following in a bash script:

    for filename in wis*.txt; do
      for filename2 in wis*.txt; do
        echo "."
        if grep -Fqvf "$filename" "$filename2"; then
          echo "$filename"
          echo "$filename2"
          echo "There are lines in $filename2 that do not occur in $filename."
        fi
      done
    done
    

    This script checks that all lines of each file are also present in the other files, following Check if all lines from one file are present somewhere in another file. On my personal computer, using mpirun -np 4 main, all wisdom files are identical except for a permutation of lines.

    If the files differ from one run to another, it could be attributed to the communication pattern between processes... or to the sequential performance of the DFT on each process. The piece of code above saves the optimal plan of each process. If lines related to sequential operations, without fftw_mpi in them, such as:

      (fftw_codelet_n1fv_10_sse2 0 #x1440 #x1440 #x0 #xa9be7eee #x53354c26 #xc32b0044 #xb92f3bfd)
    

    become different, it is a clue that the optimal sequential algorithm changes from one process to another. In that case, the wall clock time of the sequential operations may also differ from one process to another, so checking the load balance between processes could be instructive (a small timing sketch is given further below). As noted in the documentation of FFTW about load balance:

    Load balancing is especially difficult when you are parallelizing over heterogeneous machines; ... FFTW does not deal with this problem, however—it assumes that your processes run on hardware of comparable speed, and that the goal is therefore to divide the problem as equally as possible.

    This assumption is consistent with the operation performed by fftw_mpi_gather_wisdom():

    (If the plans created for the same problem by different processes are not the same, fftw_mpi_gather_wisdom will arbitrarily choose one of the plans.) Both of these functions may result in suboptimal plans for different processes if the processes are running on non-identical hardware...
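    One way to check whether the processes really run on hardware of comparable speed is to time the same purely sequential transform on every rank and compare the results. The following is a rough sketch, reusing rank and the headers from the sketch above; the size and repetition count are arbitrary:

    /* time an identical sequential (non-MPI) transform on every process */
    const int n = 512;
    fftw_complex *buf = fftw_alloc_complex((size_t)n * n);
    for (int i = 0; i < n * n; ++i) { buf[i][0] = 1.0; buf[i][1] = 0.0; }
    fftw_plan p_seq = fftw_plan_dft_2d(n, n, buf, buf, FFTW_FORWARD, FFTW_ESTIMATE);

    double t0 = MPI_Wtime();
    for (int i = 0; i < 10; ++i) fftw_execute(p_seq);
    double elapsed = MPI_Wtime() - t0;

    /* a large spread between the fastest and slowest process suggests heterogeneous
       hardware, which neither fftw_mpi_gather_wisdom nor the default block
       distribution accounts for */
    double tmin, tmax;
    MPI_Reduce(&elapsed, &tmin, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
    MPI_Reduce(&elapsed, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sequential fft: min %g s, max %g s for 10 transforms\n", tmin, tmax);

    fftw_destroy_plan(p_seq);
    fftw_free(buf);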

    The transpose operation in 2D and 3D FFTs requires a lot of communication: one of the implementations is a call to MPI_Alltoall involving almost the whole array. Hence, good connectivity between nodes (InfiniBand...) can prove useful.
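    Besides comparing wisdom files, the plan actually built on each process can also be dumped with fftw_fprint_plan; the printed description shows the internal steps that were selected (which transpose variant, which sequential codelets). A small sketch, assuming the MPI plan and rank from the first sketch above:

    /* write the plan chosen by each process to its own file */
    char planname[42];
    snprintf(planname, sizeof planname, "plan%d.txt", rank);
    FILE *fplan = fopen(planname, "w");
    if (fplan != NULL) {
        fftw_fprint_plan(plan, fplan);
        fclose(fplan);
    }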

    Let us know if you found different optimal plans from one run to another and how these plans differ!