
How to install cuDF on google colab with GPU Tesla K80?


I have been trying to install cuDF on Google Colab for hours. One of the requirements is a Tesla T4 GPU, but Colab keeps allocating a Tesla K80, so I cannot install cuDF. I used this snippet to check which type of GPU I was assigned each time:

import pynvml

# Query the name of the first GPU on the instance.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
device_name = pynvml.nvmlDeviceGetName(handle)
# Older pynvml releases return bytes, newer ones return str; normalize to str.
if isinstance(device_name, bytes):
  device_name = device_name.decode()

if device_name != 'Tesla T4':
  raise Exception("""
    Unfortunately this instance does not have a T4 GPU.

    Please make sure you've configured Colab to request a GPU instance type.

    Sometimes Colab allocates a Tesla K80 instead of a T4. Resetting the instance.

    If you get a K80 GPU, try Runtime -> Reset all runtimes...
  """)
else:
  print('Woo! You got the right kind of GPU!')

It is frustrating to get a specific type of GPU from Google Colab, since the allocation is largely a matter of luck. Has anyone run into the same issue, and if so, how did you solve it?


Solution

  • The K80 uses the Kepler GPU architecture, which is not supported by RAPIDS (it requires Pascal-class hardware or newer). Colab itself can no longer run the latest versions of RAPIDS. You can try SageMaker Studio Lab instead for a "Try it Now" experience: https://github.com/rapidsai-community/rapids-smsl. A way to check whether your allocated GPU is new enough is sketched below.
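
If you want to verify up front whether the GPU Colab assigned you can run RAPIDS at all, a minimal sketch is to query the CUDA compute capability instead of matching a single device name. This assumes RAPIDS' stated minimum of Pascal-class hardware, i.e. compute capability 6.0 or higher; the K80 is Kepler (3.7) and fails this check.

import pynvml

# Query the compute capability of the first GPU on the instance.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
major, minor = pynvml.nvmlDeviceGetCudaComputeCapability(handle)

# Assumption: RAPIDS needs compute capability 6.0 (Pascal) or newer.
if (major, minor) < (6, 0):
  raise Exception(f"Compute capability {major}.{minor} is too old for RAPIDS.")
else:
  print(f"Compute capability {major}.{minor} should be able to run RAPIDS.")

This avoids hard-coding one device name, so a T4, P100, or V100 instance would all pass the check.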