Currently, I am trying to import an ONNX model into Keras in order to run training on datasets of grayscale images of size 1x1x200x200. However, when I convert my ONNX model to Keras using onnx_to_keras(), the input layer of the resulting .h5 model is changed to ?x1x200x200. And when the model is converted back to a .onnx file, the input layer has changed to Nx1x200x200.
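For reference, the conversion step looks roughly like this (a minimal sketch assuming the onnx2keras package; 'model.onnx' and the input name 'input' are placeholders for whatever the graph actually uses):

import onnx
from onnx2keras import onnx_to_keras

# Load the exported ONNX graph and convert it to a Keras model.
onnx_model = onnx.load('model.onnx')                 # placeholder path
k_model = onnx_to_keras(onnx_model, ['input'])       # ['input'] = graph input tensor name(s)
k_model.save('model.h5')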
This works when training the model, but the changed input layer causes an error when deploying the trained neural network to C++ code using ONNX Runtime, because the N gets read as a dimension of -1, which causes an overflow. The C++ code works with the original model, whose input layer expects a 1x1x200x200 image.
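You can see this by inspecting the input of the converted .onnx file (a sketch using the onnx Python package; 'model.onnx' is a placeholder path):

import onnx

m = onnx.load('model.onnx')  # placeholder path
# The batch axis shows up as a symbolic dimension (dim_param) instead of a
# fixed value, which ONNX Runtime reports as -1.
for d in m.graph.input[0].type.tensor_type.shape.dim:
    print(d.dim_param or d.dim_value)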
I have already tried reshaping the input tensors with reshape() on the NumPy arrays, but this had no effect on the altered model.
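What I tried was roughly the following (a sketch; img stands in for one of my grayscale samples):

import numpy as np

img = np.asarray(img, dtype=np.float32)
img = img.reshape(1, 1, 200, 200)  # force the expected 1x1x200x200 shape
# The converted model's input is still declared as Nx1x200x200 afterwards.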
Just wondering if this is fixable, and any help would be appreciated. Thanks!
Answering my own question: converters from ONNX to Keras are currently not one-to-one. It seems that, in order to stay general, the Keras converters alter ONNX models on import so that they accept an arbitrary batch size (the N dimension). To fix this, I simply had to train the network, edit the input and output layers, and then re-export the model to get the C++ code to work.
from keras.layers import Input
from keras.models import Model

k_model.summary()
# Pop the last two entries from the converted model's (private) layer list.
k_model._layers.pop(-1)
k_model._layers.pop(-1)
k_model.summary()
# Rebuild the model around a new input with a fixed batch size of 1.
newInput = Input(batch_shape=(1, 1, 200, 200))
newOutputs = k_model(newInput)
newModel = Model(newInput, newOutputs)
newModel.summary()
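The re-export back to ONNX then looks roughly like this (a sketch assuming the keras2onnx package; the output filename is a placeholder):

import keras2onnx

onnx_model = keras2onnx.convert_keras(newModel, newModel.name)
keras2onnx.save_model(onnx_model, 'model_fixed.onnx')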
I am currently trying to figure out whether this keeps the weights of the original model, but so far the "new model" does appear to contain the existing weights, which is a good sign.
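One quick way to check is to compare the weight tensors of the rebuilt model against the converted one (a sketch; it assumes both models are still in memory):

import numpy as np

# If every weight tensor matches, the weights were carried over intact.
same = all(np.array_equal(a, b)
           for a, b in zip(newModel.get_weights(), k_model.get_weights()))
print(same)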