Tags: python, pytorch, tensor

How to perform torch.meshgrid over multiple tensors in parallel?


Let's say we have a tensor x of size [60, 9] and a tensor y of size [60, 9]. Is it possible to do an operation like xx, yy = torch.meshgrid(x, y) such that xx and yy are of size [60, 9, 9], where xx[i, :, :], yy[i, :, :] is basically torch.meshgrid(x[i], y[i])?

The built-in torch.meshgrid operation only accepts 1-d tensors. Is it possible to do the above operation without a for loop, which is inefficient because it does not make use of the GPU's parallelism?
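For concreteness, the per-row result being asked for looks like this (a minimal sketch; the random x and y are just placeholders for the question's [60, 9] tensors):

    import torch

    x = torch.randn(60, 9)
    y = torch.randn(60, 9)

    # For a single row i, the desired slice is the ordinary 1-d meshgrid
    # (newer PyTorch versions may warn about the indexing argument; the
    # historical default "ij" matches the intent here)
    xx_i, yy_i = torch.meshgrid(x[0], y[0])
    print(xx_i.shape, yy_i.shape)  # torch.Size([9, 9]) torch.Size([9, 9])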


Solution

  • I don't believe you will gain anything here, since the initialization of the tensors is not done on the GPU. So a proposed approach would indeed be to loop over x and y, or to use map over the pairs of rows (a fuller runnable sketch follows below):

    # map is lazy, so materialize it with list() before reusing the grids
    grids = list(map(torch.meshgrid, zip(x, y)))
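
    A fuller runnable sketch of that approach, assuming the [60, 9] shapes from the question; the stacking step back into [60, 9, 9] outputs is my own illustration of how the per-row grids recombine, not part of the original answer:

    import torch

    x = torch.randn(60, 9)
    y = torch.randn(60, 9)

    # Iterating over a 2-d tensor yields its rows, so zip(x, y) produces
    # pairs (x[i], y[i]) of 1-d tensors. torch.meshgrid also accepts a
    # single tuple of tensors, which is exactly what each zip element is.
    grids = list(map(torch.meshgrid, zip(x, y)))

    # Stack the per-row grids back into [60, 9, 9] tensors.
    xx = torch.stack([g[0] for g in grids])
    yy = torch.stack([g[1] for g in grids])
    print(xx.shape, yy.shape)  # torch.Size([60, 9, 9]) torch.Size([60, 9, 9])

    # Sanity check against a single per-row meshgrid:
    xx0, yy0 = torch.meshgrid(x[0], y[0])
    assert torch.equal(xx[0], xx0) and torch.equal(yy[0], yy0)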