python, gpu, google-colaboratory

How do I get my Python script to use the GPU on Google Colab?


I know how to activate the GPU in the runtime type, but I'm used to doing machine learning with sklearn or XGBoost, which make use of the GPU automatically. Now I've written my own machine learning algorithm, but I don't know how to force it to do its computations on the GPU. I need the extra RAM from the GPU runtime type, but I don't know how to benefit from the speed of the GPU...

@jit(target ="cuda")
popsize = 1000
  
File "<ipython-input-82-7cb543a75250>", line 2
    popsize = 1000
          ^
SyntaxError: invalid syntax

Solution

  • Numba's jit/cuda decorators are a way to run your own Python code on the GPU. Note that the decorator has to sit directly above a function definition; applying it to a bare statement such as popsize = 1000, as in your snippet, is what causes the SyntaxError. A complete example looks like this:

    from numba import jit, cuda
    import numpy as np
    # to measure execution time
    from timeit import default_timer as timer

    # normal function, runs on the CPU
    def func(a):
        for i in range(a.size):
            a[i] += 1

    # same function compiled for the GPU
    # (the target="cuda" keyword is deprecated in recent Numba releases;
    # see the numba.cuda.jit sketch below for the currently supported form)
    @jit(target="cuda")
    def func2(a):
        for i in range(a.size):
            a[i] += 1

    if __name__ == "__main__":
        n = 10000000
        a = np.ones(n, dtype=np.float64)

        start = timer()
        func(a)
        print("without GPU:", timer() - start)

        start = timer()
        func2(a)
        print("with GPU:", timer() - start)
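    In current Numba releases the target="cuda" keyword of jit has been deprecated, and the supported way to write an explicit GPU kernel is the numba.cuda.jit decorator. The sketch below is a rough adaptation of the example above, assuming the Colab runtime has a CUDA-capable GPU attached; the kernel name increment_kernel, the 256-thread block size, and the explicit device copies are illustrative choices, not part of the original answer:

    from numba import cuda
    import numpy as np
    from timeit import default_timer as timer

    # GPU kernel: each thread increments one element of the array
    @cuda.jit
    def increment_kernel(a):
        i = cuda.grid(1)       # absolute index of this thread
        if i < a.size:         # guard threads that fall past the end
            a[i] += 1

    if __name__ == "__main__":
        n = 10000000
        a = np.ones(n, dtype=np.float64)

        d_a = cuda.to_device(a)            # copy the input to GPU memory
        threads_per_block = 256
        blocks = (n + threads_per_block - 1) // threads_per_block

        start = timer()
        increment_kernel[blocks, threads_per_block](d_a)
        cuda.synchronize()                 # wait for the kernel to finish
        print("with cuda.jit:", timer() - start)

        a = d_a.copy_to_host()             # copy the result back to the CPU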

    There is one more reference link you can use as well.
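    Also, since you mention switching the runtime type: you can confirm from inside the notebook that Numba actually sees the Colab GPU before timing anything. This is a small sketch using the numba.cuda helper functions; it only reports what is detected:

    from numba import cuda

    # True if a CUDA GPU is visible to Numba in this runtime
    print("GPU available:", cuda.is_available())

    # prints the details of every detected CUDA device
    cuda.detect()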