I have a trained Keras model which I converted using MMdnn. Then I try to use it in C++ code:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <torch/script.h>

int main()
{
    cv::Mat image;
    image = cv::imread("test_img.png", cv::IMREAD_GRAYSCALE); // Read the file
    try
    {
        // Load the TorchScript module
        torch::jit::script::Module module;
        module = torch::jit::load("my_model.pth");
        torch::IntArrayRef input_dim = std::vector<int64_t>({ 1, 2, 256, 256 });
        cv::Mat input_img;
        image.convertTo(input_img, CV_32FC3, 1 / 255.0);
        // Wrap the image data in a tensor (no copy is made)
        torch::Tensor x = torch::from_blob(input_img.data, { 1, 2, 256, 256 }, torch::kFloat);
        torch::NoGradGuard no_grad;
        auto output = module.forward({ x });
        float* data = static_cast<float*>(output.toTensor().data_ptr());
        cv::Mat output_img = cv::Mat(256, 256, CV_32FC3, data);
        cv::imwrite("output_img.png", output_img);
    }
    catch (std::exception& ex)
    {
        std::cout << "exception! " << ex.what() << std::endl;
    }
    return 0;
}
This code throws an exception:
exception! isTensor() INTERNAL ASSERT FAILED at E:\20B\pytorch\pytorch\aten\src\ATen/core/ivalue_inl.h:112, please report a bug to PyTorch. Expected Tensor but got Tuple (toTensor at E:\20B\pytorch\pytorch\aten\src\ATen/core/ivalue_inl.h:112) (no backtrace available)
The exception is thrown on the line float* data = static_cast<float*>(output.toTensor().data_ptr()); when toTensor() is called. If I use toTuple() instead of toTensor(), the result doesn't have a data_ptr() function, but I need it to extract the data (and put it into an OpenCV image). How can I extract the images from the model output?
In this case the model's output is a tuple of two images. They can be extracted like this:
torch::Tensor t0 = output.toTuple()->elements()[0].toTensor();
torch::Tensor t1 = output.toTuple()->elements()[1].toTensor();
The variables t0 and t1 now contain the model's output tensors.
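To copy the data back into OpenCV, a minimal sketch could look like the following. It assumes t0 has shape {1, 1, 256, 256} with float values in [0, 1]; your model's actual output shape and value range may differ, so check them first.

// Sketch: convert one output tensor to a cv::Mat and save it.
// Assumes t0 has shape {1, 1, 256, 256} and float values in [0, 1].
torch::Tensor img = t0.detach().to(torch::kCPU);
img = img.mul(255).clamp(0, 255).to(torch::kByte); // scale to 8-bit
img = img.squeeze().contiguous();                  // {1, 1, 256, 256} -> {256, 256}
cv::Mat result(256, 256, CV_8UC1, img.data_ptr<uint8_t>());
cv::imwrite("output_t0.png", result.clone());      // clone() makes an owned copy

The cv::Mat constructor used here does not copy the data, so the tensor must stay alive for as long as that cv::Mat is used; clone() avoids the issue by making an owned copy. It is also safer to check output.isTuple() / output.isTensor() before converting, since different models return different IValue types from forward().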