
Assign Torch and Tensorflow models two separate GPUs


I am comparing two pre-trained models, one in TensorFlow and one in PyTorch, on a machine that has multiple GPUs. Each model fits on a single GPU, and both are loaded in the same Python script. How can I assign one GPU to the TensorFlow model and another GPU to the PyTorch model?

Setting CUDA_VISIBLE_DEVICES=0,1 only tells both frameworks that these GPUs are available. How can I (within Python, I assume) make sure that TensorFlow takes GPU 0 and PyTorch takes GPU 1?


Solution

  • You can refer to torch.device. https://pytorch.org/docs/stable/tensor_attributes.html?highlight=device#torch.torch.device

    In particular, do

    device = torch.device("cuda:0")
    tensor = tensor.to(device)
    

    or, to load a pretrained model,

    device = torch.device("cuda:0")
    model = model.to(device)
    

    to put the tensor/model on GPU 0. Note that PyTorch names CUDA devices "cuda:N", not "gpu:N".

    Similarly, TensorFlow has tf.device: https://www.tensorflow.org/api_docs/python/tf/device. Its usage is described at https://www.tensorflow.org/guide/using_gpu

    For TensorFlow to load the model on GPU 0, do

    with tf.device("/GPU:0"):
        load_model_function(model_path)