Tags: pytorch, tensor, libtorch

Pinned memory in LibTorch


I might be missing something really fundamental, but I couldn't find any explanation in the documentation or online.

I'm trying to copy a GPU at::Tensor to a pinned tensor on the CPU, but once I copy it, the CPU tensor is no longer pinned. I assume the assignment just creates a new copy of the GPU tensor on the CPU and binds it to the variable, but if that's the case, how do you copy into pre-allocated pinned memory?

My testing code:

    #include <torch/torch.h>
    #include <iostream>

    at::Tensor gpu = at::randn({1025, 1025}, at::device(at::kCUDA));
    // Pre-allocate a CPU tensor backed by pinned (page-locked) memory
    at::Tensor pinned = at::empty(gpu.sizes(), at::device(at::kCPU).pinned_memory(true));
    std::cout << "Is Pinned: " << std::boolalpha << pinned.is_pinned() << std::endl;
    pinned = gpu.to(at::kCPU);  // rebinds 'pinned' to a freshly allocated, unpinned tensor
    std::cout << "Is Pinned: " << std::boolalpha << pinned.is_pinned() << std::endl;

The output is

Is Pinned: true
Is Pinned: false

This also happens with torch:: instead of at::.

Tested on Ubuntu 16.04 with LibTorch 1.5.0 compiled from source.


Solution

  • I found a way: use the copy_ function.

    ...
    //pinned = gpu.to(torch::kCPU, true);
    pinned.copy_(gpu);  // copies the GPU tensor into the pre-allocated pinned tensor in place
    std::cout << "Is Pinned: " << std::boolalpha << pinned.is_pinned() << std::endl;
    

    This outputs

    Is Pinned: true
    Is Pinned: true
    

    I guess it makes sense, since the to function returns a new tensor rather than modifying an existing one in place. Though I would have expected some variant of to to allow copying into a pre-allocated tensor.

    Oh well, it works this way.
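
As a side note, the main reason to keep a pre-allocated pinned (page-locked) tensor around is that it enables asynchronous device-to-host transfers. Below is a minimal sketch of how that could look; it assumes a LibTorch build with CUDA and that you link against the CUDA runtime for the explicit synchronization call. The only changes from the code above are the non_blocking argument to copy_ and the cudaDeviceSynchronize() call.

    #include <torch/torch.h>
    #include <cuda_runtime.h>  // assumption: linking against the CUDA runtime
    #include <iostream>

    int main() {
        at::Tensor gpu = at::randn({1025, 1025}, at::device(at::kCUDA));
        at::Tensor pinned = at::empty(gpu.sizes(), at::device(at::kCPU).pinned_memory(true));

        // non_blocking=true lets the device-to-host copy overlap with other work
        // on the current CUDA stream; this only helps because the destination
        // tensor lives in pinned memory.
        pinned.copy_(gpu, /*non_blocking=*/true);

        // Make sure the transfer has finished before reading the data on the CPU.
        cudaDeviceSynchronize();

        std::cout << "Is Pinned: " << std::boolalpha << pinned.is_pinned() << std::endl;
        return 0;
    }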