pytorch, tensor, elementwise-operations, batchsize

Element-wise multiplication of 2D tensors as a layer of a neural network in PyTorch


I have a 3D torch tensor of shape [Batch_size, n, n], which is the output of a layer of my network, and a constant 2D torch tensor of shape [n, n]. How can I perform element-wise multiplication across the batch dimension so that the result is a torch tensor of shape [Batch_size, n, n]?

I know it is possible to implement this operation with an explicit loop, but I am interested in the most efficient way.


Solution

  • One option is to expand your weight matrix so that it has a matching batch dimension (without using any additional memory). E.g. twoDTensor.expand((batch_size, n, n)) returns a view over the same underlying data that behaves like a 3D tensor; you can verify that the stride for the batch dimension is zero. The expanded tensor can then be multiplied element-wise with the batched output, as sketched below.
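
A minimal sketch of this approach (the names batch_out and weight and the concrete sizes are illustrative placeholders, not taken from the question):

    import torch

    batch_size, n = 4, 3

    # 3D output of the previous layer: [batch_size, n, n]
    batch_out = torch.randn(batch_size, n, n)

    # Constant 2D tensor: [n, n]
    weight = torch.randn(n, n)

    # Expand the 2D tensor to [batch_size, n, n]; this is a view, so no
    # extra memory is allocated and the stride of the batch dim is zero.
    expanded = weight.expand(batch_size, n, n)
    print(expanded.stride())   # (0, 3, 1) -> batch stride is zero

    # Element-wise multiplication over the whole batch
    result = batch_out * expanded
    print(result.shape)        # torch.Size([4, 3, 3])

Note that PyTorch broadcasting performs the same zero-copy expansion implicitly, so writing batch_out * weight directly gives the same result.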