I see that many in-place Tensor operations like mul_ and div_ are const in the PyTorch C++ frontend:
Tensor &mul_(Scalar other) const
This seems strange, since in-place operations are supposed to modify the tensor data, right? Does anyone know the rationale behind making them const?
I've found some discussion on GitHub, but the title seems to contradict what is written below it:
'const Tensor' doesn't provide const safety ... Therefore, these methods should be non-const
As underlined by that comment and by this thread, this const is disingenuous: it applies to the pointer to the underlying TensorImpl, not to the tensor data itself. It is mainly a compilation convenience and carries no real const semantics here. The situation is analogous to the difference between const int* (a pointer to const int, whose pointee cannot be modified) and int* const (a const pointer to int, whose pointee can be modified): a const Tensor behaves like the latter.
Semantically const (resp. mutating) operations in torch are easily recognized by the absence (resp. presence) of a trailing underscore in the function name, regardless of the C++ const qualifier on the method.