Tags: python-3.x, tensorflow, keras, tensorflow2.0, tensorflow-serving

TF Keras Model Serving REST API JSON Input Format


So I tried following this guide and deployed the model using the Docker TensorFlow Serving image. Let's say there are 4 features: feat1, feat2, feat3, and feat4. I tried to hit the prediction endpoint {url}/predict with this JSON body:

{
    "instances": [
        {
            "feat1": 26,
            "feat2": 16,
            "feat3": 20.2,
            "feat4": 48.8
        }
    ]
}

I got a 400 response code:

{
    "error": "Failed to process element: 0 key: feat1 of 'instances' list. Error: Invalid argument: JSON object: does not have named input: feat"
}

This is the signature passed to model.save():

signatures = {
      'serving_default':
          _get_serve_tf_examples_fn(model,
                                    tf_transform_output).get_concrete_function(
                                        tf.TensorSpec(
                                            shape=[None],
                                            dtype=tf.string,
                                            name='examples')),
  }
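
_get_serve_tf_examples_fn itself isn't shown here, but in the TFX tutorials a function with that name typically parses each incoming string as a serialized tf.train.Example before applying the transform layer and the model, which would explain the ParseExample errors below. A sketch of that pattern, assuming the TFX taxi-tutorial style (the actual implementation may differ):

import tensorflow as tf

# Assumed label column name; the real one depends on the pipeline.
_LABEL_KEY = 'label'

def _get_serve_tf_examples_fn(model, tf_transform_output):
    """Returns a tf.function that parses serialized tf.Example protos (TFX-style sketch)."""
    model.tft_layer = tf_transform_output.transform_features_layer()

    @tf.function
    def serve_tf_examples_fn(serialized_tf_examples):
        # Feature spec for the raw features (feat1..feat4), minus the label.
        feature_spec = tf_transform_output.raw_feature_spec()
        feature_spec.pop(_LABEL_KEY)
        # This parse_example call becomes the ParseExample/ParseExampleV2 node
        # named in the error message; it expects real serialized Examples.
        parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
        transformed_features = model.tft_layer(parsed_features)
        return model(transformed_features)

    return serve_tf_examples_fn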

I understand from this signature that, in every instances element, the only field accepted is "examples", but when I tried to pass only that field, with an empty string:

{
    "instances": 
    [
        {
            "examples": ""
        }
    ]
}

I also got a bad request:

{
    "error": "Name: <unknown>, Feature: feat1 (data type: int64) is required but could not be found.\n\t [[{{node ParseExample/ParseExampleV2}}]]"
}
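
For context, the ParseExample node in that error parses each string in "instances" as a serialized tf.train.Example; an empty string parses to an Example with no features, so the required feat1 is missing. A small reproduction of that error, using a feature spec that is only an assumption inferred from the error messages (the real spec comes from tf_transform_output):

import tensorflow as tf

# Assumed spec: feat1/feat2 as int64 (per the error message), feat3/feat4 as float.
feature_spec = {
    'feat1': tf.io.FixedLenFeature([], tf.int64),
    'feat2': tf.io.FixedLenFeature([], tf.int64),
    'feat3': tf.io.FixedLenFeature([], tf.float32),
    'feat4': tf.io.FixedLenFeature([], tf.float32),
}

# An empty string is a tf.train.Example with no features, so this raises
# "Feature: feat1 (data type: int64) is required but could not be found."
tf.io.parse_example(tf.constant(['']), feature_spec)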

I couldn't find in the guide how to build the JSON request body the right way; it would be really helpful if anyone could point this out or give references regarding this matter.


Solution

  • I tried to solve this problem by changing the serving input signature, but that raised another exception. That problem has already been solved; check it out here. A sketch of one possible request format for the original signature follows below.
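
For reference, a sketch of one way to call a serialized-Example signature like this through the REST API: build a tf.train.Example, serialize it, and send it base64-encoded, since the REST API represents binary strings as {"b64": "..."} objects. The endpoint URL, model name and feature types here are assumptions, not taken from the question:

import base64
import json

import requests  # any HTTP client works; assumed to be installed
import tensorflow as tf

# Build one tf.train.Example with the four features (types assumed as above).
example = tf.train.Example(features=tf.train.Features(feature={
    'feat1': tf.train.Feature(int64_list=tf.train.Int64List(value=[26])),
    'feat2': tf.train.Feature(int64_list=tf.train.Int64List(value=[16])),
    'feat3': tf.train.Feature(float_list=tf.train.FloatList(value=[20.2])),
    'feat4': tf.train.Feature(float_list=tf.train.FloatList(value=[48.8])),
}))

# The serialized proto is binary, so the REST API wants it base64-encoded
# under the signature's input name ("examples").
payload = {
    "instances": [
        {"examples": {"b64": base64.b64encode(example.SerializeToString()).decode("utf-8")}}
    ]
}

# Hypothetical endpoint; substitute your own host, port and model name.
url = "http://localhost:8501/v1/models/my_model:predict"
response = requests.post(url, data=json.dumps(payload))
print(response.status_code, response.json())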