
ONNX model input size different from OpenCV frame size


How can I convert an OpenCV frame into the right shape for my ONNX model to accept it? Currently, my ONNX model's input shape is [32, 3, 256, 224], but when I resize with OpenCV and print the image shape, it is (224, 256, 3).


Solution

  • Separate the channels, transpose each channel's dimensions, and put the channels back together. You can use this one-line solution:

    np.array([np.transpose(img[:, :, 0]), np.transpose(img[:, :, 1]), np.transpose(img[:, :, 2])])
    

    Example:

    a = np.zeros((224, 256, 3))
    b = np.array([np.transpose(a[:, :, 0]), np.transpose(a[:, :, 1]), np.transpose(a[:, :, 2])])
    b.shape  # returns (3, 256, 224)
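
The per-channel transpose above is equivalent to reversing all three axes at once with `np.transpose`. Note also that the model's leading 32 is a batch dimension, which the one-liner does not produce; a minimal sketch of both steps (building the batch by stacking 32 frames is an assumption about how the model is fed):

```python
import numpy as np

# a dummy "frame" in OpenCV's (height, width, channels) layout,
# matching the 224x256 size from the question
img = np.zeros((224, 256, 3), dtype=np.float32)

# reverse the axis order: (H, W, C) -> (C, W, H), here (3, 256, 224)
chw = np.transpose(img, (2, 1, 0))

# the model input [32, 3, 256, 224] also expects a batch axis;
# stacking 32 frames (or repeating one) fills it
batch = np.stack([chw] * 32)  # -> (32, 3, 256, 224)
```

For a single frame, `chw[np.newaxis]` would add a batch axis of 1 instead, if the model accepts dynamic batch sizes.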