Using the example at https://www.tensorflow.org/tutorials/structured_data/preprocessing_layers, I created a model with my own data. Now I want to convert it to TensorFlow Lite format. I save the model as a SavedModel first, but the conversion fails with many errors. The most recent output:
WARNING:tensorflow:AutoGraph could not transform <function canonicalize_signatures.<locals>.signature_wrapper at 0x7f4f61cd0560> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: closure mismatch, requested ('signature_function', 'signature_key'), but source function had ()
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
INFO:tensorflow:Assets written to: /tmp/test_saved_model/assets
---------------------------------------------------------------------------
ConverterError                            Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
4 frames
ConverterError: <unknown>:0: error: loc("integer_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc("string_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc(callsite(callsite("model/string_lookup_1/string_lookup_1_index_table_lookup_table_find/LookupTableFindV2@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/add@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/mul@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/DenseBincount@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/integer_lookup_1/integer_lookup_1_index_table_lookup_table_find/LookupTableFindV2@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/add@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/mul@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/DenseBincount@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.AddV2 {device = ""}
tf.DenseBincount {T = f32, Tidx = i64, binary_output = true, device = ""}
tf.Mul {device = ""}
Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
tf.LookupTableFindV2 {device = "/job:localhost/replica:0/task:0/device:CPU:0"}
tf.MutableHashTableV2 {container = "", device = "", key_dtype = !tf.string, shared_name = "table_704", use_node_name_sharing = false, value_dtype = i64}
tf.MutableHashTableV2 {container = "", device = "", key_dtype = i64, shared_name = "table_615", use_node_name_sharing = false, value_dtype = i64}
My code:
import pathlib
import tensorflow as tf

# Save the model into a temp directory
export_dir = "/tmp/test_saved_model"
tf.saved_model.save(model, export_dir)

# Convert the model to TF Lite
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()

# Save the converted model
tflite_model_file = pathlib.Path('/tmp/save_model_tflite.tflite')
tflite_model_file.write_bytes(tflite_model)
What is causing this error? My goal is to embed this model in a React Native app. Thank you.
Looking at your trace, it seems your model contains some HashTable ops. You need to set converter.allow_custom_ops = True in order to convert this model:
import pathlib
import tensorflow as tf

# Save the model as a SavedModel
export_dir = "/content/test_saved_model"
tf.saved_model.save(model, export_dir)

# Convert the model to TF Lite, allowing custom ops for the lookup tables
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
converter.allow_custom_ops = True
tflite_model = converter.convert()

# Save the converted model
tflite_model_file = pathlib.Path('/content/save_model_tflite.tflite')
tflite_model_file.write_bytes(tflite_model)
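Note that the converter error also lists ops that "can be supported by the flex runtime". If allow_custom_ops alone is not enough (custom ops still require you to register implementations at inference time), you can additionally enable Select TF ops so the converter keeps the unsupported TensorFlow kernels in the model. A minimal sketch, assuming the same export_dir as above; I have not verified it against your exact model:

import pathlib
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
# Use TFLite builtins where possible and fall back to TensorFlow kernels
# (the flex delegate) for ops like tf.DenseBincount and tf.Mul.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
converter.allow_custom_ops = True  # the hash-table lookup ops may still need this
tflite_model = converter.convert()
pathlib.Path('/content/save_model_tflite.tflite').write_bytes(tflite_model)

Be aware that Select TF ops require the flex delegate in the app that runs the model (on Android, for example, via the org.tensorflow:tensorflow-lite-select-tf-ops dependency), so check whether your React Native TFLite wrapper supports it.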