cufftGetSize*() is not supposed to allocate any memory, and it doesn't (I checked available memory before and after calling cufftGetSize*). Does it return CUFFT_ALLOC_FAILED if a later allocation would fail?
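The check was essentially of this form (a sketch of the idea, not the exact code, using cudaMemGetInfo() as the free-memory probe; N = 17 is just one of the failing sizes from the output below):

#include <iostream>
#include <cuda_runtime.h>
#include <cufft.h>

int main() {
    size_t freeBefore, freeAfter, total, workSize;
    cufftHandle plan;

    cufftCreate(&plan);
    cufftSetAutoAllocation(plan, 0);

    cudaMemGetInfo(&freeBefore, &total);
    cufftResult r = cufftGetSize3d(plan, 1800, 1800, 17, CUFFT_R2C, &workSize);
    cudaMemGetInfo(&freeAfter, &total);

    // Free device memory stays the same, even when r == CUFFT_ALLOC_FAILED.
    std::cerr << "result = " << r
              << ", free before = " << freeBefore / (1024 * 1024) << " MB"
              << ", free after = " << freeAfter / (1024 * 1024) << " MB\n";

    cufftDestroy(plan);
    return 0;
}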
Example code:
#include <iostream>
#include <cuda_runtime.h>
#include <cufft.h>

int main() {
    for (int N = 1; N < 1800; ++N) {
        std::cerr << "N = " << N << " ";

        cufftResult r;
        cufftHandle planR2C;

        cudaDeviceReset();

        r = cufftCreate(&planR2C);
        if (r) return 1;
        r = cufftSetCompatibilityMode(planR2C, CUFFT_COMPATIBILITY_FFTW_PADDING);
        if (r) return 1;
        // Disable automatic work-area allocation; we only want the size estimate.
        r = cufftSetAutoAllocation(planR2C, 0);
        if (r) return 1;

        // Query the estimated work-area size for an 1800 x 1800 x N R2C transform.
        size_t workSize;
        r = cufftGetSize3d(planR2C, 1800, 1800, N, CUFFT_R2C, &workSize);
        if (r == CUFFT_ALLOC_FAILED) {
            std::cerr << "CUFFT_ALLOC_FAILED\n";
        } else {
            std::cerr << " Estimated workSize: "
                      << workSize / (1024 * 1024)
                      << " MB" << std::endl;
        }

        cudaDeviceReset();
    }
    std::cerr << "****** Done.\n";
    return 0;
}
On a GPU with 4693 MB of free memory at the start of the process, the above code produces the following output:
N = 1 Estimated workSize: 197 MB
N = 2 Estimated workSize: 395 MB
...
N = 15 Estimated workSize: 791 MB
N = 16 Estimated workSize: 197 MB
N = 17 CUFFT_ALLOC_FAILED
N = 18 Estimated workSize: 222 MB
...
From N=73 on, all odd N fail and all even N pass. From N=166 onward, all N fail.
Since the required memory does not grow linearly with N, I assume (!) that the answer to my question is indeed: "it return[s] CUFFT_ALLOC_FAILED if a later allocation would fail". A proof of that statement would be nice, though.
(My problem arises under CUDA 5.5.22; I have not checked any other versions.)
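One way to test that assumption would be to compare cufftGetSize3d() against the handle-less cufftEstimate3d() for a failing N: if the estimate succeeds where cufftGetSize3d() returns CUFFT_ALLOC_FAILED, that supports the "would fail" reading. A sketch (not run; it assumes cufftEstimate3d() is available in this cuFFT version):

#include <iostream>
#include <cuda_runtime.h>
#include <cufft.h>

int main() {
    const int N = 17;  // one of the N values that fails above
    size_t estimate = 0, size = 0;

    // Rough, handle-less estimate of the work-area size.
    cufftResult rEst = cufftEstimate3d(1800, 1800, N, CUFFT_R2C, &estimate);

    // Per-plan size query; the call that returns CUFFT_ALLOC_FAILED above.
    cufftHandle plan;
    cufftCreate(&plan);
    cufftSetAutoAllocation(plan, 0);
    cufftResult rGet = cufftGetSize3d(plan, 1800, 1800, N, CUFFT_R2C, &size);

    std::cerr << "cufftEstimate3d: result " << rEst
              << ", " << estimate / (1024 * 1024) << " MB\n"
              << "cufftGetSize3d:  result " << rGet
              << ", " << size / (1024 * 1024) << " MB\n";

    cufftDestroy(plan);
    return 0;
}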
To mark this question as answered:
Confidence among readers is high that a CUFFT_ALLOC_FAILED return value from cufftGetSize*() actually means "CUFFT_ALLOC_WOULD_FAIL", i.e. the later work-area allocation would fail.
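In practice this means the size query can serve as a pre-flight check in the manual work-area workflow, before any allocation is attempted. A sketch of that workflow (not tested; the 1800 x 1800 x 17 dimensions are just the failing case from above):

#include <iostream>
#include <cuda_runtime.h>
#include <cufft.h>

int main() {
    cufftHandle plan;
    if (cufftCreate(&plan)) return 1;
    // Manage the work area manually instead of letting cuFFT allocate it.
    if (cufftSetAutoAllocation(plan, 0)) return 1;

    // Pre-flight check: would the work-area allocation fit?
    size_t workSize = 0;
    cufftResult r = cufftGetSize3d(plan, 1800, 1800, 17, CUFFT_R2C, &workSize);
    if (r == CUFFT_ALLOC_FAILED) {
        std::cerr << "Plan would not fit in device memory, bailing out.\n";
        cufftDestroy(plan);
        return 1;
    }
    if (r != CUFFT_SUCCESS) return 1;

    // Create the plan; with auto-allocation off this only reports the
    // actual work-area size instead of allocating it.
    if (cufftMakePlan3d(plan, 1800, 1800, 17, CUFFT_R2C, &workSize)) return 1;

    // Allocate the work area ourselves and attach it to the plan.
    void* workArea = 0;
    if (cudaMalloc(&workArea, workSize) != cudaSuccess) return 1;
    if (cufftSetWorkArea(plan, workArea)) return 1;

    // ... cufftExecR2C(plan, in, out) would go here ...

    cufftDestroy(plan);
    cudaFree(workArea);
    return 0;
}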