
Indexing a tensor with None in PyTorch


I've seen this syntax used to index a tensor in PyTorch, but I'm not sure what it means:

v = torch.div(t, n[:, None])

where v, t, and n are tensors.

What is the role of "None" here? I can't seem to find it in the documentation.


Solution

  • Similar to NumPy, you can insert a singleton dimension ("unsqueeze" a dimension) by indexing that dimension with None. As a result, n[:, None] has the effect of inserting a new dimension on dim=1. This is equivalent to n.unsqueeze(dim=1):

    >>> n = torch.rand(3, 100, 100)
    
    >>> n[:, None].shape
    torch.Size([3, 1, 100, 100])
    
    >>> n.unsqueeze(1).shape
    torch.Size([3, 1, 100, 100])
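
    In the expression from the question, v = torch.div(t, n[:, None]), this is presumably why the singleton dimension is inserted: it lets the division broadcast. A minimal sketch, assuming for illustration that t has shape (3, 4) and n has shape (3,) (the question does not give the actual shapes):

    >>> t = torch.arange(12.).reshape(3, 4)   # example tensor of shape (3, 4)
    >>> n = torch.tensor([1., 2., 4.])        # example tensor of shape (3,)
    
    >>> torch.div(t, n[:, None]).shape        # n[:, None] has shape (3, 1) and broadcasts over the columns
    torch.Size([3, 4])

    Without the None, t / n would fail here, since shapes (3, 4) and (3,) do not broadcast.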
    

    Here are some other ways of indexing with None.

    In the example above, : was used as a placeholder to designate the first dimension dim=0. If you want to insert a dimension on dim=2, you can add a second :, as in n[:, :, None].
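
    For example, a quick check with the same n of shape (3, 100, 100) as above:

    >>> n[:, :, None].shape
    torch.Size([3, 100, 1, 100])
    
    >>> n.unsqueeze(2).shape
    torch.Size([3, 100, 1, 100])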

    You can also position None relative to the last dimension instead. To do so, use the ellipsis syntax ... (see the snippet after this list):

    • n[..., None] will insert a dimension in the last position, i.e. n.unsqueeze(dim=-1).

    • n[..., None, :] will insert it in the second-to-last position, i.e. n.unsqueeze(dim=-2).
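
    To make these concrete, here is a quick check with the same n of shape (3, 100, 100):

    >>> n[..., None].shape       # same as n.unsqueeze(dim=-1)
    torch.Size([3, 100, 100, 1])
    
    >>> n[..., None, :].shape    # same as n.unsqueeze(dim=-2)
    torch.Size([3, 100, 1, 100])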