When I try to display/print some tensors to the screen, instead of getting the final values, libtorch seems to display the tensor with a multiplier (i.e. the 0.01 * and the like that you can see below):
offsets.shape: [1, 4, 46, 85]
probs.shape: [46, 85]
offsets: (1,1,.,.) =
0.01 *
0.1006 1.2322
-2.9587 -2.2280
(1,2,.,.) =
0.01 *
1.3772 1.3971
-1.2813 -0.8563
(1,3,.,.) =
0.01 *
6.2367 9.2561
3.5719 5.4744
(1,4,.,.) =
0.2901 0.2963
0.2618 0.2771
[ CPUFloatType{1,4,2,2} ]
probs: 0.0001 *
1.4593 1.0351
6.6782 4.9104
[ CPUFloatType{2,2} ]
How can I disable this behavior and get the final output? I tried to explicitly convert the tensor to float, hoping that would make the final values be stored/displayed, but that doesn't work either.
Looking at libtorch's source code for printing tensors (after searching for the " *" string within the repository), it turns out that this "pretty-print" is done in the aten/src/ATen/core/Formatting.cpp translation unit. The scale and the asterisk are prepended here:
static void printScale(std::ostream & stream, double scale) {
  FormatGuard guard(stream);
  stream << defaultfloat << scale << " *" << std::endl;
}
And later on, all elements of the Tensor are divided by the scale:
if(scale != 1) {
  printScale(stream, scale);
}
double* tensor_p = tensor.data_ptr<double>();
for(int64_t i = 0; i < tensor.size(0); i++) {
  stream << std::setw(sz) << tensor_p[i]/scale << std::endl;
}
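So the displayed numbers are the stored values divided by the scale, which means the stored value is the displayed number multiplied by the scale (a displayed 0.1006 under a 0.01 * header is really 0.001006). As a minimal sketch, you can reproduce the behavior with any tensor whose elements are all of such a small order of magnitude; note that the exact threshold at which ATen factors out a scale is decided internally by Formatting.cpp, so the output may vary between versions:

#include <torch/torch.h>
#include <iostream>

int main() {
  // Elements around 1e-3..1e-2: operator<< may factor out a common
  // scale (e.g. "0.01 *") and print each element divided by it.
  torch::Tensor t = torch::tensor({0.001006, 0.012322, -0.029587, -0.022280}).reshape({2, 2});
  std::cout << t << std::endl;
}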
Based on this translation unit, this behavior is not configurable at all.
I guess you've got two options here:

1. Check out pytorch's sources, change the printing code in Formatting.cpp to your needs, and build libtorch yourself.
2. Remove (#ifdef) the << operator overload for Tensor in Formatting.cpp and provide your own implementation (see the sketch below). When building libtorch, however, you'd have to link it to your target containing the method's implementation.

Both options, however, require you to change 3rd party code, which is quite bad, I believe.
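If you go the second route, a rough sketch of what your own printing code could look like is below. It is written as a free helper function rather than as a drop-in operator<< replacement, and the name print_raw and the fixed 6-digit precision are just assumptions for illustration; it simply dumps the stored element values, so no scale factor is ever printed:

#include <torch/torch.h>
#include <iomanip>
#include <iostream>

// Prints a tensor's raw element values, bypassing libtorch's pretty-print
// (and therefore its "0.01 *" scale factor). The function name and the
// 6-digit precision are arbitrary choices for this sketch.
void print_raw(const at::Tensor& tensor) {
  at::Tensor flat = tensor.to(at::kDouble).flatten().contiguous();
  const double* data = flat.data_ptr<double>();
  std::cout << std::fixed << std::setprecision(6);
  for (int64_t i = 0; i < flat.numel(); ++i) {
    std::cout << data[i] << (i + 1 == flat.numel() ? '\n' : ' ');
  }
}

Calling print_raw(offsets) would then print the values that are actually stored (e.g. 0.001006 rather than 0.1006 under a 0.01 * header), since, as shown above, the pretty-printer divides each element by the scale before writing it.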