I have optimized some Python code using the @jit decorator from the numba library. However, I want to tell @jit explicitly to use my GPU device. From "Difference between @cuda.jit and @jit(target='gpu')", I understand that I need to use @jit(target="cuda") to do it.
I tried something like this:
from numba import jit, cuda
@jit(target='cuda')  # the code runs normally without target='cuda'
def function(args):
    # some code
And I got the following error:
KeyError: "Unrecognized options: {'target'}. Known options are dict_keys(['_nrt', 'boundscheck', 'debug', 'error_model', 'fastmath', 'forceinline', 'forceobj', 'inline', 'looplift', 'no_cfunc_wrapper', 'no_cpython_wrapper', 'no_rewrites', 'nogil', 'nopython', 'parallel', 'target_backend'])"
I have read "How to run numba.jit decorated function on GPU?", but the solution there did not work.
I would appreciate some help to make @jit(target='cuda') work without rewriting the code using @cuda.jit, since that decorator is for writing CUDA kernels in Python and compiling and running them.
Many thanks in advance!
AFAIK, CUDA targets are not supported anymore for jit and njit (this was supported a few years ago as a wrapper to cuda.jit). It is not documented. There is, however, such a parameter for numba.vectorize and numba.guvectorize. In the Numba code, one can see that there is a parameter called target_backend which is apparently not used (anymore?). There is a parameter called _target which is read but not meant to be used directly by end-users; additionally, it calls cuda.jit in the end anyway. That part of the code seems to be dead.
If you want to write GPU code, then please use numba.vectorize and numba.guvectorize, or cuda.jit (the last is quite low-level compared to the first two).
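As a minimal sketch of the numba.vectorize route (assuming a CUDA-capable GPU with a working CUDA toolkit; the function name and array sizes are just illustrative), a GPU ufunc only needs an explicit signature list and target='cuda':

import numpy as np
from numba import vectorize

# An explicit signature list is required when target='cuda'.
@vectorize(['float32(float32, float32)'], target='cuda')
def add(a, b):
    return a + b

x = np.arange(1_000_000, dtype=np.float32)
y = 2 * x
result = add(x, y)  # host arrays are transferred to/from the GPU automatically

This keeps the "decorate a plain scalar function" feel of @jit while still running on the GPU.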
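For comparison, if you do end up needing cuda.jit, here is a sketch of the low-level kernel style (same assumptions; the bounds guard and launch configuration are the usual boilerplate):

import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)   # absolute index of this thread in the grid
    if i < x.size:     # guard: the grid may be larger than the array
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.arange(n, dtype=np.float32)
y = 2 * x
out = np.empty_like(x)
threads_per_block = 256
blocks_per_grid = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks_per_grid, threads_per_block](x, y, out)

Here you manage the thread indexing and launch geometry yourself, which is exactly the extra work that vectorize/guvectorize hide from you.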