Tags: python-3.x, vector, pytorch, stacking-context

What is the difference between torch.stack([t1,t1,t1], dim=1) and torch.hstack([t1,t1,t1])?


Technically, both torch.stack([t1,t1,t1], dim=1) and torch.hstack([t1,t1,t1]) should perform the same operation, i.e. both stack the vectors horizontally. But when I applied both to the same vector, they produced two different outputs. Can someone explain why?

Take tensor t1:

# Code : 
t1 = torch.arange(1.,10.)
t1,t1.shape
# Output : 
(tensor([1., 2., 3., 4., 5., 6., 7., 8., 9.]), torch.Size([9]))

Using torch.stack([t1,t1,t1],dim=1)

# Code :
t1_stack = torch.stack([t1,t1,t1],dim=1)
# dim must lie in [-2, 1] for these 1-D inputs
# -2 and 0 stack vertically (each t1 becomes a row)
# -1 and 1 stack horizontally (each t1 becomes a column)
t1_stack,t1_stack.shape
# Output :
(tensor([[1., 1., 1.],
         [2., 2., 2.],
         [3., 3., 3.],
         [4., 4., 4.],
         [5., 5., 5.],
         [6., 6., 6.],
         [7., 7., 7.],
         [8., 8., 8.],
         [9., 9., 9.]]),
 torch.Size([9, 3]))
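
For reference, a quick sketch (reusing t1 from above) that confirms what each dim value does:

# Code :
# dim=0 (or -2): each t1 becomes a row
torch.stack([t1, t1, t1], dim=0).shape   # torch.Size([3, 9])
# dim=1 (or -1): each t1 becomes a column
torch.stack([t1, t1, t1], dim=1).shape   # torch.Size([9, 3])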

Using torch.hstack([t1,t1,t1])

# Code : 
h_stack = torch.hstack([t1,t1,t1])
h_stack,h_stack.shape
# Output : 
(tensor([1., 2., 3., 4., 5., 6., 7., 8., 9., 1., 2., 3., 4., 5., 6., 7., 8., 9.,
         1., 2., 3., 4., 5., 6., 7., 8., 9.]),
 torch.Size([27]))

So the two methods give different outputs for the same vector, even though both are supposed to stack it horizontally.


Solution

  • Looking at the PyTorch docs:

    • hstack: "This is equivalent to concatenation along the first axis for 1-D tensors".
    • stack: "Concatenates a sequence of tensors along a new dimension."

    So these are not the same operation for a 1-D tensor: hstack concatenates the inputs along their existing (and only) axis, returning a longer 1-D tensor of shape [27], while stack inserts a new dimension and returns a 2-D tensor of shape [9, 3] with each input as a column. A sketch of the equivalences follows.
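
    A minimal sketch of those equivalences, assuming a PyTorch version that has torch.hstack (1.8+); the torch.cat calls are illustrative, justified by the doc quotes above:

    import torch

    t1 = torch.arange(1., 10.)  # shape [9]

    # hstack on 1-D tensors is plain concatenation along dim 0 -> shape [27]
    assert torch.equal(torch.hstack([t1, t1, t1]),
                       torch.cat([t1, t1, t1], dim=0))

    # stack(dim=1) adds a new dimension to each input and joins along it
    # -> shape [9, 3], with each copy of t1 ending up as a column
    assert torch.equal(torch.stack([t1, t1, t1], dim=1),
                       torch.cat([t1.unsqueeze(1)] * 3, dim=1))

    # for 2-D and higher inputs, hstack concatenates along dim 1 instead
    m = t1.reshape(3, 3)
    assert torch.equal(torch.hstack([m, m]), torch.cat([m, m], dim=1))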