I followed the instructions in the Chainer docs, but got the following error when I ran my code:
RuntimeErrorTraceback (most recent call last)
<ipython-input-9-ffb21f9880f0> in <module>()
...
6 model = Classifier(CompetitionNetwork(n_units = 64))
----> 7 model.to_gpu()
...
RuntimeError: CUDA environment is not correctly set up
(see https://github.com/chainer/chainer#installation).No module named cupy
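For reference, a minimal version of the failing code looks like this (a plain Linear link stands in here for my actual CompetitionNetwork):

import chainer
import chainer.links as L

# same pattern as in my notebook: wrap a network in Classifier, then move it to the GPU
model = L.Classifier(L.Linear(None, 10))
model.to_gpu()  # this is the call that raises the RuntimeError above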
Then I tried installing CuPy in several different ways, one of them being:
!apt -y install libcusparse8.0 libnvrtc8.0 libnvtoolsext1
!ln -snf /usr/lib/x86_64-linux-gnu/libnvrtc-builtins.so.8.0 /usr/lib/x86_64-linux-gnu/libnvrtc-builtins.so
!pip install cupy-cuda80 chainer
which kept giving me the same error after importing cupy and then running my code:
RuntimeError: CUDA environment is not correctly set up (see
https://github.com/chainer/chainer#installation).No module named cupy
Next I tried installing CUDA with this:
!wget https://developer.nvidia.com/compute/cuda/9.2/Prod/local_installers/cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64 -O cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb
!dpkg -i cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb
!apt-key add /var/cuda-repo-<version>/7fa2af80.pub
!apt-get update
!apt-get install cuda
This took a very long time and seemed to work, but in the end it still gave me the same error.
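Since (as far as I understand) the cupy-cudaXX wheel has to match the CUDA toolkit already on the machine, I also checked what the Colab runtime ships with:

!nvidia-smi       # shows the driver and the GPU assigned to the runtime
!nvcc --version   # shows the preinstalled CUDA toolkit version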
It seems that it is very difficult to use Chainer on Google Colab's GPU, unless I am doing something wrong; with TensorFlow it is much easier. Does anyone have experience using Chainer on Colab's GPU?
You may want to look at this Chainer example:
https://colab.research.google.com/drive/1SsxHvQdSz23kaVov8yKizVD3_2tkXdZM
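If your Chainer version is recent enough, a quick way to confirm that Chainer actually sees CuPy after following that notebook is:

import chainer
chainer.print_runtime_info()  # prints the Chainer, NumPy and CuPy versions so you can see whether CuPy was picked up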