I want to flatten an arbitrary n-dimensional torch.Tensor, but in a way that is computationally efficient. (By "flatten" here, I mean converting a given tensor into a one-dimensional tensor with the same number of elements as the original.) I am currently using the following steps to do so:
local original_tensor = ... -- output of some intermediate layer of a conv-net residing on the GPU
local shaping_tensor = torch.Tensor(original_tensor:nElement())
original_tensor = original_tensor:resizeAs(shaping_tensor:cuda())
I believe this is slightly inefficient because of :cuda(), which pushes the new tensor from host memory to the GPU. Can someone please suggest a more efficient way to do this?
Thanks in advance.
The typical approach is to create a view, which shares the tensor's underlying storage rather than actually reshaping (copying) it:
x:view(x:nElement())
This comes directly from the official "Torch for Numpy users" wiki: https://github.com/torch/torch7/wiki/Torch-for-Numpy-users
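For completeness, here is a minimal sketch of a contiguity-safe variant built on the same idea. The helper name flatten is my own; note that :view() requires contiguous storage, so a non-contiguous tensor needs a :contiguous() copy first:

require 'torch'

-- Hypothetical helper: flatten any n-dimensional tensor.
-- If x is contiguous, the view shares storage with x (no copy);
-- otherwise :contiguous() makes one copy on the same device.
local function flatten(x)
  if x:isContiguous() then
    return x:view(x:nElement())
  end
  return x:contiguous():view(x:nElement())
end

local x = torch.randn(4, 3, 2)
local flat = flatten(x)     -- one-dimensional, 24 elements
print(flat:nElement())

Since a view shares storage with the original tensor, calling it on a CudaTensor keeps the result on the GPU, so no host-to-device transfer takes place.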