Tags: python, pytorch, tensor

I have used `detach().clone().cpu().numpy()`, but it still raises `TypeError: can't convert cuda:0 device type tensor to numpy`.


The bug occurs at line 7 of this function (the `plt.scatter` call):

def visualize_embedding(h, color, epoch=None, loss=None):
    plt.figure(figsize=(7,7))
    plt.xticks([])
    plt.yticks([])
    h = h.detach().clone().cpu().numpy()
    print(type(h))
    plt.scatter(h[:, 0], h[:, 1], s=140, c=color, cmap="Set2")
    if epoch is not None and loss is not None:
        plt.xlabel(f'Epoch: {epoch}, Loss: {loss.item():.4f}', fontsize=16)
    plt.show()

error:

<class 'numpy.ndarray'>
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[17], line 21
     19 loss, h = train(data)
     20 if epoch % 10 == 0:
---> 21     visualize_embedding(h, color=data.y, epoch=epoch, loss=loss)
     22     time.sleep(0.3)

Cell In[16], line 16
     14 h = h.detach().clone().cpu().numpy()
     15 print(type(h))
---> 16 plt.scatter(h[:, 0], h[:, 1], s=140, c=color, cmap="Set2")
     17 if epoch is not None and loss is not None:
     18     plt.xlabel(f'Epoch: {epoch}, Loss: {loss.item():.4f}', fontsize=16)

File c:\Users\polyu\Documents\RA\hkjc_dm\hkjc_dm\model\src\venvModel4\lib\site-packages\matplotlib\pyplot.py:3684, in scatter(x, y, s, c, marker, cmap, norm, vmin, vmax, alpha, linewidths, edgecolors, plotnonfinite, data, **kwargs)
   3665 @_copy_docstring_and_deprecators(Axes.scatter)
   3666 def scatter(
   3667     x: float | ArrayLike,
   (...)
   3682     **kwargs,
   3683 ) -> PathCollection:
-> 3684     __ret = gca().scatter(
   3685         x,
   3686         y,
...
   1030     return self.numpy()
   1031 else:
-> 1032     return self.numpy().astype(dtype, copy=False)

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

`h` is already an ndarray (the `print` confirms it), so why does it still give me the CUDA tensor conversion error? By the way, `h` is the embedding representation, of shape `[batch_size, 2]`.


Solution

  • The error is most likely raised by something other than `h`: note that the traceback fails inside `scatter`, *after* your `print(type(h))` already showed an ndarray. The likely culprit is the `color` argument. Check whether `data.y` is a GPU tensor; if so, give it the same treatment as `h` by calling `detach().cpu().numpy()` on it before passing it to `plt.scatter`.
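
    A minimal sketch of the fixed function, assuming the only GPU tensor left is `color` (the `torch.is_tensor` guard also lets it accept plain arrays or lists):

    ```python
    import torch
    import matplotlib
    matplotlib.use("Agg")  # non-interactive backend so this sketch runs headless
    import matplotlib.pyplot as plt

    def visualize_embedding(h, color, epoch=None, loss=None):
        plt.figure(figsize=(7, 7))
        plt.xticks([])
        plt.yticks([])
        h = h.detach().cpu().numpy()  # .clone() is unnecessary; .numpy() already copies off-GPU
        # color (e.g. data.y) may also live on the GPU; move it to host memory too
        if torch.is_tensor(color):
            color = color.detach().cpu().numpy()
        plt.scatter(h[:, 0], h[:, 1], s=140, c=color, cmap="Set2")
        if epoch is not None and loss is not None:
            plt.xlabel(f'Epoch: {epoch}, Loss: {loss.item():.4f}', fontsize=16)
        plt.show()
    ```

    Matplotlib only converts `c` to a NumPy array lazily inside `scatter`, which is why the `TypeError` points there rather than at the line where `h` was converted.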