Tags: opencv, pytorch, onnx

PyTorch model predicts a fixed label when exported to ONNX


I trained a ResNet-18 model in PyTorch, and it works well there. But when I convert it to ONNX and run inference through cv2, the model only ever predicts 1~2 of the labels (it should predict labels 0~17).
This is my model export code:

    model.eval()
    x = torch.randn(1, 3, 512, 384, requires_grad=True)

    # export the model to ONNX (weights baked into the file via export_params)
    torch.onnx.export(model, x, "model.onnx", export_params=True,
                      opset_version=10, do_constant_folding=True,
                      input_names=['input'], output_names=['output'])
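
As a sanity check, the export itself can be verified by comparing PyTorch and ONNX Runtime outputs on the same dummy input (a minimal sketch, assuming onnxruntime is installed and model and x are still on the CPU):

    import numpy as np
    import onnxruntime

    # run the dummy input through the original PyTorch model ...
    with torch.no_grad():
        torch_out = model(x).numpy()

    # ... and through the exported ONNX graph
    session = onnxruntime.InferenceSession("model.onnx")
    onnx_out = session.run(None, {"input": x.detach().numpy()})[0]

    # the two outputs should agree up to floating-point tolerance
    print(np.allclose(torch_out, onnx_out, atol=1e-4))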

And this is my inference code in cv2:

    self.transform = albumentations.Compose([
        albumentations.Resize(512, 384, cv2.INTER_LINEAR),
        albumentations.GaussianBlur(3, sigma_limit=(0.1, 2)),
        albumentations.Normalize(mean=(0.5), std=(0.2)),
        albumentations.ToFloat(max_value=255),
    ])
    ...
    # image crop code: works fine in pytorch
    image = frame[ymin:ymax, xmin:xmax]  # frame is a numpy array in RGB order
    augmented = self.transform(image=image)
    image = augmented["image"]
    ...
    # inference code: does not work well
    net = cv2.dnn.readNet("model.onnx")  # filename must match the exported "model.onnx"
    blob = cv2.dnn.blobFromImage(image, swapRB=False, crop=False)
    net.setInput(blob)
    label = np.array(net.forward())
    text = 'Label: ' + str(np.argmax(label[0]))
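
For comparison, an equivalent PyTorch forward pass on the same preprocessed image looks roughly like this (a minimal sketch; image is the HWC float array produced by self.transform above):

    import torch

    # convert the HWC float image to an NCHW tensor and run the PyTorch model
    tensor = torch.from_numpy(image.transpose(2, 0, 1)).unsqueeze(0)
    with torch.no_grad():
        output = model(tensor)
    print('Label:', output.argmax(dim=1).item())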

All of the transform settings work well in PyTorch. What could be the problem in this code?


Solution

  • The problem most likely comes from preprocessing the images differently in the two pipelines: self.transform rescales the image, but the blob you build for cv2.dnn is not rescaled the same way. To verify this, read the same image through both paths and check whether the image and the blob are equal (e.g. using torch.allclose), with the random augmentations (such as GaussianBlur) disabled.
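
A minimal sketch of that check, assuming the GaussianBlur line is temporarily removed from self.transform, and using image_pt as a hypothetical name for the NCHW tensor the model receives in the working PyTorch pipeline (not shown in the question):

    import numpy as np
    import torch

    # preprocess the same image through the cv2 path ...
    augmented = self.transform(image=image)["image"]  # HWC float array
    blob = cv2.dnn.blobFromImage(augmented, swapRB=False, crop=False)

    # ... and compare it with the NCHW tensor the PyTorch model receives;
    # identical values mean the preprocessing matches, a mismatch means the
    # blob is being scaled or normalized differently
    print(torch.allclose(torch.from_numpy(blob), image_pt, atol=1e-6))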