pytorch

Difference between torch.as_tensor() and torch.asarray()


What I understand from the docs is that both torch.as_tensor() and torch.asarray() return a tensor that shares memory with the input data when possible, and return a copy otherwise. I noticed only two differences in parameters:

  • I can explicitly pass copy=False to torch.asarray() to require shared memory and get an exception if a copy cannot be avoided, or I can pass copy=True to force a copy.
  • I can specify requires_grad in torch.asarray(). (Both differences are demonstrated in the sketch after this list.)
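
A minimal sketch of both points, assuming PyTorch ≥ 1.11 (where torch.asarray() was added); the exact exception type raised when copy=False cannot be honored is caught broadly here:

```python
import numpy as np
import torch

arr = np.ones(3, dtype=np.float32)

# copy=False: insist on sharing memory; no copy is allowed.
shared = torch.asarray(arr, copy=False)
shared[0] = 5.0
print(arr[0])           # 5.0 -- the numpy buffer is shared

# A dtype change cannot be satisfied without a copy, so this raises.
try:
    torch.asarray(arr, dtype=torch.float64, copy=False)
except (ValueError, RuntimeError) as e:
    print("sharing impossible:", e)

# copy=True: always get an independent copy.
copied = torch.asarray(arr, copy=True)
copied[0] = -1.0
print(arr[0])           # still 5.0 -- the copy is independent

# requires_grad can be set in the same call, unlike with as_tensor().
t = torch.asarray([1.0, 2.0, 3.0], requires_grad=True)
print(t.requires_grad)  # True
```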

So does torch.asarray() just offer more capabilities than torch.as_tensor()?

But if I just want to get shared memory when possible, which should I use: torch.asarray() or torch.as_tensor()? Is there any difference in performance?


Solution

  • “So does torch.asarray() just offer more capabilities than torch.as_tensor()?”

    Yes, that's basically it.

    torch.as_tensor automatically tries to share the input's data and preserve its autograd information, while torch.asarray gives you explicit control over data copying (via copy=) and autograd (via requires_grad=).

    If you want shared memory/autograd by default, I would just use as_tensor. To my knowledge there is no performance difference between the two, provided the same memory/autograd sharing parameters are used. A quick sketch of the default behaviour is below.
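
    As a minimal illustration of the defaults (the numpy array and values are just examples; the detach-by-default behaviour of asarray follows the current docs):

```python
import numpy as np
import torch

arr = np.zeros(4, dtype=np.float32)

# as_tensor shares memory with the numpy array (same dtype, CPU device).
t = torch.as_tensor(arr)
t += 1
print(arr)                        # [1. 1. 1. 1.] -- same underlying buffer

# For a tensor input with matching dtype/device, as_tensor returns it
# as-is, autograd history included.
x = torch.ones(2, requires_grad=True)
print(torch.as_tensor(x) is x)    # True

# asarray with default arguments shares memory the same way...
z = torch.asarray(arr)
z[0] = 7.0
print(arr[0])                     # 7.0

# ...but detaches tensor inputs from the autograd graph unless
# requires_grad=True is passed explicitly.
print(torch.asarray(x).requires_grad)  # False
```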