Tags: python, pytorch, conv-neural-network, tensor

Batch of n-dimensional vectors to batch of images with n channels


I have a batch of n-dimensional vectors, i.e. a tensor of size [batch_size, n]. I want this to be transformed into a batch of images of size [batch_size, n, H, W], i.e. each element of each vector in the batch must become a constant [1, H, W] map, so each vector becomes an [n, H, W] image.

Now I'm doing it in a very ugly way:

import torch

batch_size, n, H, W = 4, 3, 5, 6  # example sizes
vectors = torch.zeros((batch_size, n))  # placeholder batch of vectors

# This is the (batch_size, n, H, W) tensor that I will fill
channels = torch.empty((batch_size, n, H, W))

for i, vector in enumerate(vectors):
    for j, val in enumerate(vector):
        channels[i, j].fill_(val)

How can I do this properly, using PyTorch functions?


Solution

  • You can add two trailing dimensions to the original tensor with vectors[:, :, None, None], then multiply by an (H, W) tensor of ones; broadcasting fills each map with the corresponding scalar:

    channels = vectors[:, :, None, None] * torch.ones((H, W))
    

    This will give you a tensor of size (batch_size, n, H, W), with each channels[i][j] being a (H, W) map with constant values.
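An equivalent alternative (a sketch, using `Tensor.expand`, which broadcasts the singleton dimensions as a view without allocating an extra ones tensor; `batch_size`, `n`, `H`, `W` below are placeholder sizes):

```python
import torch

batch_size, n, H, W = 4, 3, 5, 6  # example sizes
vectors = torch.randn(batch_size, n)

# expand returns a broadcasted view (no data copy);
# call .contiguous() afterwards if a real copy is needed.
channels = vectors[:, :, None, None].expand(batch_size, n, H, W)

assert channels.shape == (batch_size, n, H, W)
# every (H, W) map is constant, equal to vectors[i, j]
assert bool(torch.all(channels[1, 2] == vectors[1, 2]))
```

Since `expand` produces a view, it shares memory with `vectors`; avoid in-place writes to `channels` unless you make it contiguous first.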