Tags: python, jit, numba

Error when using Numba and jit to run Python with my GPU


This code is from GeeksforGeeks and is supposed to run as-is (with a lower execution time for the GPU version):

from numba import jit, cuda, errors
import numpy as np
# to measure exec time
from timeit import default_timer as timer


# normal function to run on cpu
def func(a):
    for i in range(10000000):
        a[i] += 1

# function optimized to run on gpu
@jit(target="cuda")
def func2(a):
    for i in range(10000000):
        a[i] += 1

if __name__ == "__main__":
    n = 10000000
    a = np.ones(n, dtype=np.float64)
    b = np.ones(n, dtype=np.float32)

    start = timer()
    func(a)
    print("without GPU:", timer() - start)

    start = timer()
    func2(a)
    print("with GPU:", timer() - start)

but I get an error on the 'def func2(a)' line saying:

__init__() got an unexpected keyword argument 'locals'

and the terminal also shows this warning:

C:\Users\user\AppData\Local\Programs\Python\Python38\lib\site-packages\numba\core\decorators.py:153: NumbaDeprecationWarning: The 'target' keyword argument is deprecated.
  warnings.warn("The 'target' keyword argument is deprecated.", NumbaDeprecationWarning)

Why does this happen and how do I fix it?

I have an Intel i7-10750H CPU and a GTX 1650 Ti GPU.


Solution

  • To get rid of the deprecation warning, see the Numba documentation on the deprecation of the target kwarg: https://numba.pydata.org/numba-doc/dev/reference/deprecation.html#deprecation-of-the-target-kwarg

    First try updating your CUDA toolkit and GPU driver, then rerun the code. If the error persists, this hack may help as a last resort:

    from numba import cuda
    import code
    # drop into an interactive console with the current scope (this is the hack)
    code.interact(local=locals())

    # function optimized to run on gpu
    # note: cuda.jit does not accept the deprecated 'target' kwarg
    @cuda.jit
    def func2(a):
        for i in range(10000000):
            a[i] += 1
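
  • Beyond the hack, the non-deprecated route is to write func2 as a real CUDA kernel with plain @cuda.jit and launch it with an explicit grid. Below is a minimal sketch, assuming a working CUDA toolkit and driver; the names threads_per_block and blocks_per_grid and their values are illustrative choices, not something from the question:

    from numba import cuda
    import numpy as np
    from timeit import default_timer as timer

    @cuda.jit
    def func2(a):
        # each thread increments one element instead of looping over all of them
        i = cuda.grid(1)
        if i < a.size:
            a[i] += 1

    if __name__ == "__main__":
        n = 10000000
        a = np.ones(n, dtype=np.float64)

        threads_per_block = 256          # illustrative block size
        blocks_per_grid = (n + threads_per_block - 1) // threads_per_block

        start = timer()
        # a cuda.jit kernel is launched with a [grid, block] configuration,
        # not called like a normal function; Numba copies the NumPy array
        # to the device and back automatically here
        func2[blocks_per_grid, threads_per_block](a)
        cuda.synchronize()               # wait for the GPU before stopping the timer
        print("with GPU:", timer() - start)

    Note that the first launch includes JIT compilation time, so time a second launch if you want a fair comparison against the CPU loop.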