torch::stack accepts a c10::TensorList and works perfectly fine when tensors of the same shape are given. However, when one of the inputs is the output of a previous torch::stack call (so its shape differs), it fails with a memory access violation.
To be more concrete, let's assume we have three tensors x1, x2, x3 of shape {4}, plus an additional tensor y of the same shape:
torch::Tensor x1 = torch::randn({4});
torch::Tensor x2 = torch::randn({4});
torch::Tensor x3 = torch::randn({4});
torch::Tensor y = torch::randn({4});
The first round of stacking is trivial:
torch::Tensor stacked_xs = torch::stack({x1,x2,x3});
However, trying to do:
torch::Tensor stacked_result = torch::stack({y, stacked_xs});
will fail.
I'm looking for the same behavior as np.vstack in Python, where this is permitted and works.
How should I be going about this?
This fails because torch::stack requires all of its inputs to have the same shape, and stacked_xs now has shape {3, 4} while y has shape {4}. You can add a dimension to y with torch::unsqueeze and then concatenate with torch::cat (not torch::stack, so slightly different from NumPy, but the result is what you ask for):
torch::Tensor x1 = torch::randn({4});
torch::Tensor x2 = torch::randn({4});
torch::Tensor x3 = torch::randn({4});
torch::Tensor y = torch::randn({4});
torch::Tensor stacked_xs = torch::stack({x1,x2,x3});
torch::Tensor stacked_result = torch::cat({y.unsqueeze(0), stacked_xs});
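If you want to sanity-check the result, printing the sizes should show [4, 4] (this check is just an illustration and assumes <iostream> is included; it is not part of the original snippet):
// y.unsqueeze(0) has shape {1, 4} and stacked_xs has shape {3, 4},
// so concatenating along dim 0 gives a {4, 4} tensor with y as the first row.
std::cout << stacked_result.sizes() << std::endl; // prints [4, 4]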
It is also possible to flatten your first stack and then reshape the result, depending on your preference:
torch::Tensor stacked_xs = torch::stack({x1,x2,x3});
torch::Tensor stacked_result = torch::cat({y, stacked_xs.view({-1})}).view({4, 4});
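Both variants should produce the same 4x4 tensor with y as the first row. As a quick sketch (assuming the tensors from the snippets above are still in scope), you could compare them with torch::equal:
// Hypothetical check, not part of the original answer: both constructions agree element-wise.
torch::Tensor via_cat = torch::cat({y.unsqueeze(0), stacked_xs});
torch::Tensor via_view = torch::cat({y, stacked_xs.view({-1})}).view({4, 4});
std::cout << torch::equal(via_cat, via_view) << std::endl; // prints 1 (true)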