Is it possible to call __host__ functions in PyCUDA like you can __global__ functions? I noticed in the documentation that pycuda.driver.Function creates a handle to a __global__ function. __device__ functions can be called from a __global__ function, but __host__ code cannot. I'm aware that using a __host__ function pretty much defeats the purpose of PyCUDA, but there are some existing functions that I'd like to import and call as a proof of concept.

As a note, whenever I try to import the __host__ function, I get:
pycuda._driver.LogicError: cuModuleGetFunction failed: named symbol not found
No, it is not possible.
This isn't a limitation of PyCUDA per se, but of CUDA itself. The __host__ qualifier just decays away to plain host code, and the CUDA APIs don't and can't handle host functions the way they handle device code (note that the APIs also don't handle __device__ functions, which are the true device-side counterpart of __host__).
If you want to call or use __host__ functions from Python, you will need one of the standard C++/Python interoperability mechanisms, such as ctypes, SWIG, or Boost.Python.
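
As a rough sketch of the ctypes route, assuming a hypothetical __host__ function add_one compiled into a shared library (the file name, symbol names, and build flags below are placeholders, not anything from PyCUDA):

    // hostfuncs.cu -- hypothetical example, not from the original post.
    // Build as a shared library:
    //   nvcc -Xcompiler -fPIC -shared hostfuncs.cu -o libhostfuncs.so

    // An ordinary __host__ function (__host__ is the default anyway).
    __host__ int add_one(int x) {
        return x + 1;
    }

    // Export an unmangled C symbol so ctypes can find it by name.
    extern "C" int add_one_c(int x) {
        return add_one(x);
    }

    // Assumed usage from Python (plain ctypes, no PyCUDA involved):
    //   import ctypes
    //   lib = ctypes.CDLL("./libhostfuncs.so")
    //   lib.add_one_c.restype = ctypes.c_int
    //   print(lib.add_one_c(41))   # prints 42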
EDIT:
Since this answer was written five years ago, CUDA has added the ability to run host functions in CUDA streams via cuLaunchHostFunc (driver API) or cudaLaunchHostFunc (runtime API). Unfortunately, at the time of this edit (June 2022), PyCUDA doesn't expose this functionality, so it still isn't possible in PyCUDA and the core message of the original answer is unchanged.
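
For context, here is a minimal CUDA C++ sketch of what cudaLaunchHostFunc does (illustrative only; the kernel and callback names are made up, and none of this is reachable through PyCUDA today):

    // hostfn_demo.cu -- illustrative sketch; compile with: nvcc hostfn_demo.cu
    #include <cstdio>
    #include <cuda_runtime.h>

    // Host callback; must match cudaHostFn_t, i.e. void (CUDART_CB *)(void *).
    void CUDART_CB on_stream_done(void *userData) {
        std::printf("host function ran after stream work, payload=%d\n",
                    *static_cast<int *>(userData));
    }

    __global__ void dummy_kernel() { }

    int main() {
        cudaStream_t stream;
        cudaStreamCreate(&stream);

        dummy_kernel<<<1, 1, 0, stream>>>();

        int payload = 42;
        // Enqueued like any other stream operation: the runtime calls
        // on_stream_done on a CUDA-managed host thread once all prior
        // work in the stream has finished. The callback must not make
        // CUDA API calls.
        cudaLaunchHostFunc(stream, on_stream_done, &payload);

        cudaStreamSynchronize(stream);
        cudaStreamDestroy(stream);
        return 0;
    }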