Tags: python, tensorflow, keras, tensorflow-serving, tensorflow2.x

Tensorflow saved model does not contain input names


We are currently training an object detection model in tensorflow 2.4.0, which works fine. However, to serve it we need to wrap it in an image pre-processing layer that takes image bytes as input and converts them into the image tensor the detection model expects. See the following code:

png_file = 'myfile.png'
input_tensor = tf.io.read_file(png_file, name='image_bytes')

def preprocessing_layer(inputs):
    # decode a single encoded image and add a batch dimension
    image_tensor = tf.image.decode_image(inputs, channels=3)
    image_tensor = tf.expand_dims(
        image_tensor, axis=0, name=None
    )
    return image_tensor

model = keras.Sequential(
    [
        keras.Input(tensor=input_tensor, dtype=tf.dtypes.string, name='image_bytes', batch_size=1),
        tf.keras.layers.Lambda(lambda inp: preprocessing_layer(inp)),
        yolo_model
    ]
)
model.summary()

This wrapped model produces correct detections, and calling model.input_names returns the expected names: ['image_bytes'].

However, if we save the model using model.save('model_path'), the exported SavedModel no longer contains the input names and replaces them with generic ones (args_0):

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['args_0'] tensor_info:
        dtype: DT_STRING
        shape: ()
        name: serving_default_args_0:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['model'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 64512, 6)

This is a problem because TensorFlow Serving relies on input names ending in _bytes to decode base64-encoded input.

Would you please provide hints on how to retain the input names when saving the model?
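For context, this is roughly what a TensorFlow Serving REST request for an input named image_bytes would look like; the payload is a sketch based on Serving's documented b64 convention (the key names image_bytes and b64 are the relevant parts, and the PNG bytes here are just a placeholder):

```python
import base64
import json

# Placeholder standing in for the raw contents of myfile.png.
png_bytes = b"\x89PNG\r\n\x1a\nfake-image-payload"

# Because the input name ends in '_bytes', TensorFlow Serving expects the
# value wrapped as {"b64": ...} and decodes it back to raw bytes itself.
payload = json.dumps({
    "instances": [
        {"image_bytes": {"b64": base64.b64encode(png_bytes).decode("ascii")}}
    ]
})
print(payload)
```

If the exported input is named args_0 instead, this convention no longer applies and Serving passes the base64 text through as a plain string.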


Solution

  • The problem stems from the way you defined your lambda layer and the way you set up your model.

    Your lambda function should be able to handle a batch, which is currently not the case. You can use tf.map_fn to make it process a batch of images, like so:

    def preprocessing_layer(str_inputs):
        def decode(img_bytes):
            # img_bytes is a scalar string tensor (one element of the batch);
            # expand_animations=False guarantees a 3-D (H, W, 3) output.
            return tf.image.decode_image(
                img_bytes, channels=3, expand_animations=False
            )
        # map_fn decodes each element and stacks the results back along the
        # batch axis, so no explicit tf.expand_dims is needed.
        return tf.map_fn(decode, str_inputs, fn_output_signature=tf.uint8)
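    As a quick, self-contained sanity check of the tf.map_fn approach (assuming TensorFlow 2.x is installed; the tiny 8x8 test images are made up for illustration):

```python
import tensorflow as tf

def preprocess_batch(str_inputs):
    """Decode a batch of encoded image strings into a uint8 image batch."""
    def decode(img_bytes):
        # img_bytes is a single scalar string; expand_animations=False
        # guarantees a 3-D (H, W, 3) tensor.
        return tf.image.decode_image(img_bytes, channels=3,
                                     expand_animations=False)
    # map_fn decodes each element and re-stacks them along the batch axis.
    return tf.map_fn(decode, str_inputs, fn_output_signature=tf.uint8)

# Two tiny synthetic 8x8 PNGs standing in for real inputs.
imgs = [tf.io.encode_png(tf.zeros([8, 8, 3], dtype=tf.uint8)),
        tf.io.encode_png(tf.ones([8, 8, 3], dtype=tf.uint8) * 255)]
batch = tf.stack(imgs)   # shape: (2,), dtype: string
decoded = preprocess_batch(batch)
print(decoded.shape)     # (2, 8, 8, 3)
```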
    

    Then you can define your model using a symbolic tf.keras.Input, setting the shape to () (to specify no dimensions other than the batch dimension):

    model = keras.Sequential(
        [
            keras.Input((), dtype=tf.dtypes.string, name='image_bytes'),
            tf.keras.layers.Lambda(lambda inp: preprocessing_layer(inp)),
            yolo_model
        ]
    )
    

    Now the model is correctly created, and the signature is exported with the expected input name.
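    If you need even tighter control over the exported input name (for instance, if a different Keras version still mangles it), a common alternative is to wrap the model in a tf.Module and export an explicit signature whose named TensorSpec pins the key. This is only a sketch: the trivial detector lambda stands in for yolo_model, and the export directory is a temp dir.

```python
import tempfile
import tensorflow as tf

class ServingWrapper(tf.Module):
    def __init__(self, detector):
        super().__init__()
        self.detector = detector

    # Naming the TensorSpec pins the signature input key to 'image_bytes'.
    @tf.function(input_signature=[
        tf.TensorSpec(shape=[None], dtype=tf.string, name='image_bytes')])
    def serve(self, image_bytes):
        images = tf.map_fn(
            lambda b: tf.image.decode_image(b, channels=3,
                                            expand_animations=False),
            image_bytes, fn_output_signature=tf.uint8)
        return self.detector(tf.cast(images, tf.float32))

# Trivial stand-in for the real yolo_model, just for this sketch.
detector = lambda x: tf.reduce_mean(x, axis=[1, 2])

wrapper = ServingWrapper(detector)
export_dir = tempfile.mkdtemp()
tf.saved_model.save(wrapper, export_dir,
                    signatures={'serving_default': wrapper.serve})

loaded = tf.saved_model.load(export_dir)
sig = loaded.signatures['serving_default']
print(sorted(sig.structured_input_signature[1]))  # ['image_bytes']
```

    The same pattern works with the real yolo_model in place of the lambda, since the tf.function closes over whatever callable the wrapper holds.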