I am trying to restore a TensorFlow Saver checkpoint (.ckpt.*) and convert it into a SavedModel (.pb) so that I can deploy it with TensorFlow Serving.
This is how I convert:
with tf.Session() as sess:
    # Restore the graph from the checkpoint files (.meta, .data, .index).
    saver = tf.train.import_meta_graph(f"{checkpoint_path}/{meta_file_string}")
    saver.restore(sess, tf.train.latest_checkpoint(str(checkpoint_path)))

    # Convert into ".pb" using the SavedModel API.
    model_path = f'{savedmodel_path}/1'
    builder = tf.saved_model.builder.SavedModelBuilder(model_path)
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.SERVING],
        main_op=tf.tables_initializer(),
        strip_default_attrs=True)
    builder.save()
    print("Saved")
Saving seems to work fine when I run tree:
$ tree 1
1
├── saved_model.pb
└── variables
├── variables.data-00000-of-00001
└── variables.index
1 directory, 3 files
and when I use saved_model_cli:
$ saved_model_cli show --dir path/to/model/1
The given SavedModel contains the following tag-sets:
serve
However, when I run the TensorFlow Serving Docker container,
$ docker run \
-p 8500:8500 \
-v path/to/model:/models/aaa \
--env MODEL_NAME=aaa \
--name aaa \
tensorflow/serving
it complains that it cannot find the tag "serve", which I did add:
2019-11-19 02:35:30.844163: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/aaa/1
2019-11-19 02:35:30.916952: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2019-11-19 02:35:30.927640: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:311] SavedModel load for tags { serve }; Status: fail. Took 83527 microseconds.
2019-11-19 02:35:30.927781: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: aaa version: 1} failed: Not found: Could not find meta graph def matching supplied tags: { serve }. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`
What have I done wrong and how can I fix it? Failing that, how can I dig deeper into this issue?
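One way to dig deeper, independent of Serving, is to parse saved_model.pb directly and print the tag-sets it actually contains. A small sketch (the helper name `list_tag_sets` is hypothetical, not a TensorFlow API):

```python
from tensorflow.core.protobuf import saved_model_pb2

def list_tag_sets(saved_model_path):
    """Return the tag-sets of each meta graph in a saved_model.pb file."""
    sm = saved_model_pb2.SavedModel()
    with open(saved_model_path, "rb") as f:
        sm.ParseFromString(f.read())
    return [sorted(mg.meta_info_def.tags) for mg in sm.meta_graphs]

# e.g. list_tag_sets("path/to/model/1/saved_model.pb")
```

If the returned tag-sets include `['serve']` but Serving still fails, the mismatch is likely on the Serving side (e.g. a version incompatibility) rather than in the exported file.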
I am using TensorFlow 1.14.0 and the Docker image tensorflow/serving:1.14.0-devel.
Replacing the tensorflow/serving image with the :latest tag (currently :2.0.0) made it work. My local training environment still uses TensorFlow 1.14; I have no idea why this fixes it.