I am trying to run inference with my 1D CNN model using the OpenVINO API 2.0.
My input is a CSV file containing several signal records, so I call model.reshape
to match the model's input size to each signal.
import openvino.runtime as ov
import numpy as np
core = ov.Core()
signal = np.genfromtxt('A4C_LV_V.csv')
model = core.read_model(model="saved_model.xml")
model.reshape([1, signal.size])
compiled_model = core.compile_model(model, "CPU")
infer_request = compiled_model.create_infer_request()
input_tensor = ov.Tensor(array=signal, shared_memory=True)
infer_request.set_input_tensor(input_tensor)
infer_request.start_async()
infer_request.wait()
output = infer_request.get_output_tensor()
output_buffer = output.data
However, I encounter the error below:
RuntimeError: [ PARAMETER_MISMATCH ] Failed to set input blob with precision: FP64, if CNNNetwork input blob precision is: FP32
If I comment out the line infer_request.set_input_tensor(input_tensor),
the error disappears and inference runs successfully.
I don't understand why inference still works normally
after deleting the set_input_tensor call.
My inference files are attached.
(The ZIP file contains 3 IR files and 1 input file.)
It seems that your model's precision is incorrect. Please make sure your model was converted to the right precision.
OpenVINO commonly supports three model precisions: FP32, FP16, and INT8.
You may refer here and here for further info.
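As a side note on the exact error in your snippet: np.genfromtxt returns a float64 array by default, and a tensor built from a float64 array has FP64 precision, which clashes with an FP32 model input. A minimal sketch of the fix, casting the array before wrapping it in ov.Tensor (the in-memory CSV here is a stand-in for A4C_LV_V.csv):

```python
import io
import numpy as np

# Simulated CSV of signal samples (stand-in for A4C_LV_V.csv from the question).
csv_text = "0.1\n0.2\n0.3\n"
signal = np.genfromtxt(io.StringIO(csv_text))
assert signal.dtype == np.float64  # genfromtxt defaults to float64 -> FP64 tensor

# Cast before building the ov.Tensor so its precision matches the FP32 input.
signal = signal.astype(np.float32)
assert signal.dtype == np.float32
```

With the array in float32, ov.Tensor(array=signal, shared_memory=True) produces an FP32 tensor and set_input_tensor no longer triggers the precision mismatch.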
If you would like to feed the model an input whose size differs from the model's input shape, OpenVINO does provide the capability to change the model's input shape at runtime. Use the reshape method to change the input shape of a model with a single input. Refer here for further details.
Some models support input shape changes before compilation; for that use case, call reshape on the model before passing it to compile_model.