Tags: object-detection, tensorflow-serving

How can I serve the Faster R-CNN with ResNet-101 model with TensorFlow Serving?


I am trying to serve the Faster R-CNN with ResNet-101 model with TensorFlow Serving.

I know I need to use tf.saved_model.builder.SavedModelBuilder to export the model definition as well as the variables, and then I need a client script like the inception_client.py provided by tensorflow_serving.
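
For context, this is roughly the kind of export I have in mind. It is only a minimal sketch of the TF 1.x SavedModelBuilder API with placeholder tensors standing in for the real detection graph, not working Faster R-CNN code:

    import tensorflow as tf

    # Minimal SavedModelBuilder sketch (TF 1.x API); the tensors below are
    # placeholders standing in for the actual Faster R-CNN graph.
    export_dir = '/tmp/faster_rcnn_export/1'  # numeric version dir for Serving
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

    with tf.Session() as sess:
        images = tf.placeholder(tf.uint8, shape=[None, None, None, 3], name='inputs')
        # ... build or restore the detection graph here; identity is a stand-in ...
        boxes = tf.identity(tf.cast(images, tf.float32), name='detection_boxes')

        signature = tf.saved_model.signature_def_utils.predict_signature_def(
            inputs={'inputs': images},
            outputs={'detection_boxes': boxes})

        sess.run(tf.global_variables_initializer())
        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING],
            signature_def_map={
                tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                    signature})

    builder.save()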

While I am still going through the examples and documentation and experimenting, I suspect someone may already have done the same thing. Please help if you have done this or know how to get it done. Thanks in advance.


Solution

  • The TensorFlow Object Detection API has its own exporter script, which is more sophisticated than the outdated examples found under TensorFlow Serving.

    While building TensorFlow Serving, make sure you pull the latest master commit of tensorflow/tensorflow (>r1.2) and tensorflow/models.

    Build TensorFlow Serving for GPU:

    bazel build -c opt --config=cuda tensorflow_serving/...

    If you face errors regarding crosstool and nccl, follow the solutions at https://github.com/tensorflow/serving/issues/186#issuecomment-251152755 and https://github.com/tensorflow/serving/issues/327#issuecomment-305771708.

    Usage

    python tf_models/object_detection/export_inference_graph.py \
        --pipeline_config_path=/path/to/ssd_inception_v2.config \
        --trained_checkpoint_prefix=/path/to/trained/checkpoint/model.ckpt \
        --output_directory /path/to/output/1 \
        --export_as_saved_model \
        --input_type=image_tensor

    Note that during export, all variables are converted into constants and baked into the protobuf binary. Don't panic if you don't find any files under the saved_model/variables directory.
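
    If you want to double-check which tensor names the exported signature exposes (the client has to use exactly these), you can load the SavedModel and print its signature. This is a small sketch assuming the SavedModel ended up under /path/to/output/1/saved_model as described above; adjust the path if your exporter version lays things out differently.

        import tensorflow as tf

        # Load the exported SavedModel and print its serving signature so the
        # client knows the exact input/output tensor names to use.
        with tf.Session() as sess:
            meta_graph = tf.saved_model.loader.load(
                sess, [tf.saved_model.tag_constants.SERVING],
                '/path/to/output/1/saved_model')
            print(meta_graph.signature_def['serving_default'])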

    To start the server (TensorFlow Serving expects numeric version subdirectories such as 1 under --model_base_path and serves the latest one):

    bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=inception_v2 --model_base_path=/path/to/output --enable_batching=true

    As for the client, the examples under TensorFlow Serving work well; a sketch of such a gRPC client is included below.
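
    For example, a minimal gRPC client along the lines of inception_client.py could look like the following sketch. It assumes the signature that the Object Detection exporter typically produces for --input_type=image_tensor (input key 'inputs', outputs such as 'detection_boxes', 'detection_scores', 'detection_classes', 'num_detections'); verify the names against your exported signature.

        import numpy as np
        import tensorflow as tf
        from grpc.beta import implementations
        from tensorflow_serving.apis import predict_pb2
        from tensorflow_serving.apis import prediction_service_pb2
        from PIL import Image

        # Connect to the model server started above.
        channel = implementations.insecure_channel('localhost', 9000)
        stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

        # image_tensor expects raw uint8 pixels with shape [1, height, width, 3].
        image = np.expand_dims(np.array(Image.open('test.jpg')), axis=0)

        request = predict_pb2.PredictRequest()
        request.model_spec.name = 'inception_v2'  # must match --model_name above
        request.model_spec.signature_name = 'serving_default'
        request.inputs['inputs'].CopyFrom(
            tf.contrib.util.make_tensor_proto(image, shape=image.shape))

        result = stub.Predict(request, 10.0)  # 10 second timeout
        print(result.outputs['detection_boxes'])
        print(result.outputs['detection_scores'])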