python-3.x, compiler-errors, tensorflow2.x, tensorflow-xla

XLA in TF2 IteratorGetNext: unsupported op error


I am simply trying to run a .pb TensorFlow 2 model with XLA. However, I get the following error:

tensorflow.python.framework.errors_impl.InvalidArgumentError: Function invoked by the following node is not compilable: {{node __inference_predict_function_3130}} = __inference_predict_function_3130[_XlaMustCompile=true, config_proto="\n\007\n\003CPU\020\001\n\007\n\003GPU\020\0002\002J\0008\001\202\001\000", executor_type=""](dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, ...).
Uncompilable nodes:
IteratorGetNext: unsupported op: No registered 'IteratorGetNext' OpKernel for XLA_CPU_JIT devices compatible with node {{node IteratorGetNext}}
    Stacktrace:
        Node: __inference_predict_function_3130, function: 
        Node: IteratorGetNext, function: __inference_predict_function_3130
 [Op:__inference_predict_function_3130]

The error occurs independently of the model, and also when I apply a model directly after training it. I think I am doing something fundamentally wrong, or XLA is not properly supported in TF2. The same code runs fine without XLA. Does anyone have any idea how to fix this issue?

I am working on Ubuntu 18.04 with Python 3.8 in Anaconda and TF 2.4.1. My code:

import tensorflow as tf
import numpy as np
import h5py

model_path_compile = 'model_Input/pbFolder'
data_inference_mat = 'model_Input/data_inference/XXXX.MAT'

# Load the input image from the .mat file and scale it to [0, 1).
with h5py.File(data_inference_mat, 'r') as dataset:
    try:
        image_set = dataset['polar'][()].astype(np.uint16).T
        image = image_set.astype(np.float32)
        image /= 16384
    except KeyError:
        print('-----------------------ERROR--------------')

x = np.expand_dims(image, axis=0)  # add a batch dimension
model_compile = tf.keras.models.load_model(model_path_compile)
with tf.device("device:XLA_CPU:0"):
    y_pred = model_compile.predict(x)

The full error:

2021-07-19 16:09:02.521211: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-07-19 16:09:02.521416: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-07-19 16:09:02.522638: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-07-19 16:09:03.357078: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2021-07-19 16:09:03.378059: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2400000000 Hz
Traceback (most recent call last):
  File "/media/ric/DATA/Software_Workspaces/MasterThesisWS/AI_HW_deploy/XLA/Tf2ToXLA_v2/TF2_RunModel.py", line 24, in <module>
    y_pred = model_compile.predict(x)
  File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1629, in predict
    tmp_batch_outputs = self.predict_function(iterator)
  File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
    result = self._call(*args, **kwds)
  File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 894, in _call
    return self._concrete_stateful_fn._call_flat(
  File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1918, in _call_flat
    return self._build_call_outputs(self._inference_function.call(
  File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 555, in call
    outputs = execute.execute(
  File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Function invoked by the following node is not compilable: {{node __inference_predict_function_3130}} = __inference_predict_function_3130[_XlaMustCompile=true, config_proto="\n\007\n\003CPU\020\001\n\007\n\003GPU\020\0002\002J\0008\001\202\001\000", executor_type=""](dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, ...).
Uncompilable nodes:
IteratorGetNext: unsupported op: No registered 'IteratorGetNext' OpKernel for XLA_CPU_JIT devices compatible with node {{node IteratorGetNext}}
    Stacktrace:
        Node: __inference_predict_function_3130, function: 
        Node: IteratorGetNext, function: __inference_predict_function_3130
 [Op:__inference_predict_function_3130]

Solution

  • After several days of work and trying all kinds of approaches, I finally found a workaround for my purposes.

    As I only need the LLVM IR of a single execution of the model, I can use an alternative TensorFlow function, model.predict_step. It runs on exactly one batch, so Keras never builds the tf.data iterator behind model.predict, and the IteratorGetNext op that caused the original error is never created. A minimal sketch is shown below.
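
    A minimal sketch of that workaround (the zero-filled input batch and its shape are placeholders; substitute your real preprocessed input):

    import tensorflow as tf
    import numpy as np

    model = tf.keras.models.load_model('model_Input/pbFolder')

    # predict_step consumes a single batch passed as a tensor, so no tf.data
    # iterator (and hence no IteratorGetNext op) is ever created.
    x = np.zeros((1, 128, 128, 1), dtype=np.float32)  # placeholder batch; use your real input
    with tf.device("device:XLA_CPU:0"):
        y_pred = model.predict_step(tf.convert_to_tensor(x))

    In TF 2.4, predict_step essentially calls the model directly on the batch with training=False, i.e. it performs a single forward pass.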