pytorch, gpu, tensorflow2.0

Why GPU works with `torch` but not with `tensorflow`


I'm trying to use my GPU in a Jupyter Notebook, and I got stuck with TensorFlow, although I have succeeded with torch.

I have the following setup:


(myen2v) C:\Users\Jan>conda list cudnn
# packages in environment at D:\BitDownlD\Anaconda8\envs\myen2v:
#
# Name                    Version                   Build  Channel
cudnn                     8.9.2.26               cuda11_0    anaconda

(myen2v) C:\Users\Jan>conda list cuda
# packages in environment at D:\BitDownlD\Anaconda8\envs\myen2v:
#
# Name                    Version                   Build  Channel
cudatoolkit               11.8.0               hd77b12b_0

(myen2v) C:\Users\Jan>conda list torch
# packages in environment at D:\BitDownlD\Anaconda8\envs\myen2v:
#
# Name                    Version                   Build  Channel
pytorch                   2.0.1           cpu_py38hb0bdfb8_0
torch                     2.1.0                    pypi_0    pypi

(myen2v) C:\Users\Jan>conda list tensor
# packages in environment at D:\BitDownlD\Anaconda8\envs\myen2v:
#
# Name                    Version                   Build  Channel
tensorboard               2.13.0                   pypi_0    pypi
tensorboard-data-server   0.7.1                    pypi_0    pypi
tensorboard-plugin-wit    1.8.1            py38haa95532_0
tensorflow                2.13.0                   pypi_0    pypi
tensorflow-base           2.3.0           eigen_py38h75a453f_0
tensorflow-estimator      2.13.0                   pypi_0    pypi
tensorflow-gpu            2.3.0                    pypi_0    pypi
tensorflow-gpu-estimator  2.3.0                    pypi_0    pypi
tensorflow-io-gcs-filesystem 0.31.0                   pypi_0    pypi

I can run this code:


import torch

# Create tensors on GPU
a = torch.tensor([1, 2, 3], device="cuda")
b = torch.tensor([4, 5, 6], device="cuda")

# Perform operations on GPU
c = a + b
print(c)

which prints:

tensor([5, 7, 9], device='cuda:0')
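
For reference, a minimal way to double-check that torch itself sees the card, using only torch's built-in device queries:

import torch

# Confirm that PyTorch detects a CUDA device
print(torch.cuda.is_available())            # True when a usable CUDA GPU is present
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))    # name of the detected GPU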

But I'm unable to run the code below:

import tensorflow as tf

physical_devices = tf.config.list_physical_devices('GPU')
print("Num GPUs:", len(physical_devices))

I'm getting this error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[12], line 1
----> 1 physical_devices = tf.config.list_physical_devices('GPU')
      2 print("Num GPUs:", len(physical_devices))

AttributeError: module 'tensorflow' has no attribute 'config'

Even this doesn't work:

import tensorflow as tf
print("Num of GPUs available: ", len(tf.test.gpu_device_name()))

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[13], line 2
      1 import tensorflow as tf
----> 2 print("Num of GPUs available: ", len(tf.test.gpu_device_name()))

AttributeError: module 'tensorflow' has no attribute 'test'

Solution

  • This is unlikely to be a CUDA problem and more likely a broken TensorFlow installation: the conda list above shows tensorflow 2.13.0 installed with pip sitting next to tensorflow-gpu / tensorflow-base 2.3.0 installed with conda. Get some basics going first:

    import tensorflow as tf
    print(tf.__version__)
    

    make sure that matches what you see with

    $ pip show tensorflow
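
    If the two versions disagree, the import is probably picking up a stale or shadowed copy. A quick way to see which installation Python is actually loading (a minimal sketch, using only the standard module attributes):

    import tensorflow as tf

    # Where is the imported package actually located?
    # It should point inside the myen2v environment's site-packages.
    print(tf.__file__)
    print(tf.__version__)

    A mixed pip/conda install like the one listed above is a common cause of these `module 'tensorflow' has no attribute ...` errors, so removing all of the tensorflow* packages and reinstalling a single TensorFlow is a reasonable next step.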