
Facing a problem with TensorFlow Serving in Docker


I am working with TensorFlow Serving in Docker. I used the commands from the official TensorFlow documentation, shown below (run in Windows PowerShell):

docker pull tensorflow/serving
git clone https://github.com/tensorflow/serving
Set-Variable -Name "TESTDATA" -Value "$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
docker run -t --rm -p 8501:8501 -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" -e MODEL_NAME=half_plus_two tensorflow/serving
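As an aside, the same server can also be started detached so the shell stays free for the next command. This is a sketch, not part of the original question; the container name `tfserving` is an arbitrary choice, and the backticks are PowerShell line continuations:

```shell
# Start the container in the background (-d) instead of attached (-t),
# then follow its logs; stop it later with: docker stop tfserving
docker run -d --rm --name tfserving -p 8501:8501 `
  -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" `
  -e MODEL_NAME=half_plus_two tensorflow/serving
docker logs -f tfserving
```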

After running the above commands I get this output:

2020-05-08 04:50:41.577978: I tensorflow_serving/model_servers/server.cc:86] Building single TensorFlow model file config:  model_name: half_plus_two model_base_path: /models/half_plus_two
2020-05-08 04:50:41.581575: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2020-05-08 04:50:41.581678: I tensorflow_serving/model_servers/server_core.cc:573]  (Re-)adding model: half_plus_two
2020-05-08 04:50:41.780628: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: half_plus_two version: 123}
2020-05-08 04:50:41.780738: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: half_plus_two version: 123}
2020-05-08 04:50:41.780778: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: half_plus_two version: 123}
2020-05-08 04:50:41.781020: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/half_plus_two/00000123
2020-05-08 04:50:41.793200: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2020-05-08 04:50:41.793300: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:264] Reading SavedModel debug info (if present) from: /models/half_plus_two/00000123
2020-05-08 04:50:41.797324: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-05-08 04:50:41.844706: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:203] Restoring SavedModel bundle.
2020-05-08 04:50:41.881278: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:152] Running initialization op on SavedModel bundle at path: /models/half_plus_two/00000123
2020-05-08 04:50:41.887881: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:333] SavedModel load for tags { serve }; Status: success: OK. Took 106866 microseconds.
2020-05-08 04:50:41.889403: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:105] No warmup data file found at /models/half_plus_two/00000123/assets.extra/tf_serving_warmup_requests
2020-05-08 04:50:41.895569: I tensorflow_serving/core/loader_harness.cc:87] Successfully loaded servable version {name: half_plus_two version: 123}
2020-05-08 04:50:41.901866: I tensorflow_serving/model_servers/server.cc:358] Running gRPC ModelServer at 0.0.0.0:8500 ...
[warn] getaddrinfo: address family for nodename not supported
[evhttp_server.cc : 238] NET_LOG: Entering the event loop ...
2020-05-08 04:50:41.907795: I tensorflow_serving/model_servers/server.cc:378] Exporting HTTP/REST API at:localhost:8501 ...

I have been waiting for an hour for the next command to run. What should I do? Any ideas? Please help.


Solution

  • I believe your server is now running fine. The `docker run` command keeps the terminal attached to the server process, so it will not return to the prompt; that is why it looks stuck. Just open a new window and you can make your HTTP requests to it. I checked the documentation, and what you are doing is right; these logs are the expected behavior.

    (See the TensorFlow Serving documentation for the full procedure.)

    Just follow the next steps in the documentation:

    # Query the model using the predict API
    curl -d '{"instances": [1.0, 2.0, 5.0]}' \
      -X POST http://localhost:8501/v1/models/half_plus_two:predict
    # Returns => { "predictions": [2.5, 3.0, 4.5] }
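One caveat, since the question uses Windows PowerShell: there `curl` is an alias for `Invoke-WebRequest`, so the single-quoted JSON body above will not pass through as it does on Linux. An equivalent request with `Invoke-RestMethod` is sketched below (assuming the server from the question is still running on port 8501):

```shell
# PowerShell equivalent of the curl call above; backticks continue lines.
Invoke-RestMethod -Method Post `
  -Uri "http://localhost:8501/v1/models/half_plus_two:predict" `
  -Body '{"instances": [1.0, 2.0, 5.0]}' `
  -ContentType "application/json"
```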