I need to create a fixed-length tensor in PyTorch that acts like a FIFO queue. I have this function to do it:
def push_to_tensor(tensor, x):
    # Shift every element one position to the left, then write x into the last slot
    tensor[:-1] = tensor[1:]
    tensor[-1] = x
    return tensor
For example, I have:
tensor = Tensor([1,2,3,4])
>> tensor([ 1., 2., 3., 4.])
then calling the function gives:
push_to_tensor(tensor, 5)
>> tensor([ 2., 3., 4., 5.])
However, I was wondering whether there is a better way, so I implemented an alternative FIFO queue:
def push_to_tensor_alternative(tensor, x):
    # Drop the first element and append x by concatenating
    return torch.cat((tensor[1:], Tensor([x])))
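Just to confirm the two behave the same, here is a quick sanity check (assuming the two functions above are defined and Tensor is torch.Tensor; note that push_to_tensor modifies its argument in place, while the alternative returns a new tensor):

import torch
from torch import Tensor

# Use separate copies, since push_to_tensor overwrites its input in place
a = Tensor([1, 2, 3, 4])
b = Tensor([1, 2, 3, 4])
print(torch.equal(push_to_tensor(a, 5), push_to_tensor_alternative(b, 5)))
>> True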
The functionality is the same, but then I compared their speed:
# Small Tensor
tensor = Tensor([1,2,3,4])
%timeit push_to_tensor(tensor, 5)
>> 30.9 µs ± 1.26 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit push_to_tensor_alternative(tensor, 5)
>> 22.1 µs ± 2.25 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
# Larger Tensor
tensor = torch.arange(10000)
%timeit push_to_tensor(tensor, 5)
>> 57.7 µs ± 4.88 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit push_to_tensor_alternative(tensor, 5)
>> 28.9 µs ± 570 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
It seems that push_to_tensor_alternative, which uses torch.cat instead of shifting all the items to the left, is faster.
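For what it's worth, the comparison can also be reproduced outside IPython with the standard timeit module. A rough sketch (it assumes both functions above are in scope, and the absolute numbers will depend on your machine and PyTorch version):

import timeit
import torch

# Use a float tensor so torch.cat doesn't mix dtypes with Tensor([x]), which is float32
tensor = torch.arange(10000, dtype=torch.float)

n = 10000
t_shift = timeit.timeit(lambda: push_to_tensor(tensor, 5), number=n) / n
t_cat = timeit.timeit(lambda: push_to_tensor_alternative(tensor, 5), number=n) / n
print(f"shift: {t_shift * 1e6:.1f} µs per call, cat: {t_cat * 1e6:.1f} µs per call")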