Tags: python, python-3.x, deep-learning, pytorch, pytorch-lightning

Replacing the original pixel values with the values predicted by the model: why does comparing the two tensors return True?


I have an original input tensor of patches to my model:

patches.shape
torch.Size([64, 1280, 10])  # batch: 64, number of patches: 1280, original pixel values of each patch: 10

My model predicts the pixel values of specific patches that are masked. The indices of the masked patches are stored in masked_indices, which has shape torch.Size([64, 896]), meaning 896 of the 1280 patches are predicted by the model.

I want to replace the original pixel values of those 896 patches with the values predicted by the model (pred_pixel_values, shape: torch.Size([64, 896, 10])). I did the following:

# for indexing purposes; batch is the batch size (64) and device is the tensor's device
batch_range = torch.arange(batch, device=device)[:, None]

pa = patches
pa[batch_range, masked_indices, :] = pred_pixel_values
pa.shape
torch.Size([64, 1280, 10])

I wanted to check whether the values had actually been replaced:

torch.equal(pa, patches)

but it returns True. Where am I going wrong?


Solution

  • Use deepcopy here:

    from copy import deepcopy
    pa = deepcopy(patches)
    

    As mentioned by jasonharper, when you write pa = patches, both names refer to the same tensor; in other words, pa is just an alias of patches. As a result, any change you make to one of them automatically applies to the other, so torch.equal(pa, patches) ends up comparing the tensor with itself and returns True.
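
    Below is a minimal, self-contained sketch of the difference. The small shapes and the .clone() alternative are illustrative and not part of the original post:

    import torch

    batch, num_patches, num_masked, dim = 2, 5, 3, 4
    patches = torch.zeros(batch, num_patches, dim)            # "original" pixel values
    masked_indices = torch.randint(0, num_patches, (batch, num_masked))
    pred_pixel_values = torch.randn(batch, num_masked, dim)   # "predicted" pixel values

    batch_range = torch.arange(batch)[:, None]                # for indexing purposes

    # Plain assignment only creates an alias, so the in-place write is visible
    # through both names:
    pa = patches
    pa[batch_range, masked_indices, :] = pred_pixel_values
    print(torch.equal(pa, patches))   # True - pa and patches are the same tensor

    # With an independent copy (deepcopy, or .clone() for a single tensor),
    # only the copy is modified:
    patches = torch.zeros(batch, num_patches, dim)
    pa = patches.clone()
    pa[batch_range, masked_indices, :] = pred_pixel_values
    print(torch.equal(pa, patches))   # False - the original is untouched

    For a single tensor, patches.clone() is the PyTorch-native way to get an independent copy; deepcopy is the general-purpose option and also handles nested Python structures that contain tensors.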