
TensorFlow Serving: How to retrain a model that is currently served in production using TensorFlow Serving?


How do I retrain a model with new data when that model is currently served in production via TensorFlow Serving? Do we have to retrain the model manually and serve it again, or is there an automated way of doing this?

I am using TensorFlow Serving with Docker.

Basically the idea is this: given a model that is already served with TensorFlow Serving, if I later collect a batch of additional data and want the model fitted to it, how can I apply this training to the same model?


Solution

  • Question 1: I do have a script to train the model, but does the training have to be done locally/manually?

    Answer: As far as I understand, you are asking whether training has to happen locally or on some remote server. You can train wherever is convenient; the one important step for TensorFlow Serving is to save the model in the SavedModel format that the server can load, under a numbered version directory. Please refer to this link on how to save the model and how to load it in the serving Docker container: serving tensorflow model.
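
    For illustration, here is a minimal sketch of that retrain-and-export loop in Python, assuming the currently served version was exported with Keras and sits under the versioned directory layout TensorFlow Serving expects (the paths and the load_new_data helper are hypothetical):

      import tensorflow as tf

      # Load the version that is currently being served
      # (assumes the model was originally exported with Keras' model.save).
      model = tf.keras.models.load_model('/tmp/my_first_model/1')

      # Continue training on the newly collected data;
      # load_new_data() is a hypothetical stand-in for your own data pipeline.
      new_x, new_y = load_new_data()
      model.fit(new_x, new_y, epochs=5)

      # Export under the next version number: by default TensorFlow Serving
      # watches the base path and serves the highest version it finds there.
      model.save('/tmp/my_first_model/2')

    Dropping the new 2/ directory under the same base path is enough for the server to detect it and switch over; the model is not updated "in place".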

    Question 2: Suppose I create an entirely new model (apart from the modelA currently served); how can I load it into TensorFlow Serving? Do I have to manually copy it to the Docker target path?

    Answer: Yes. If you are loading it without a serving config, you will have to manually shut down the container, remap the path in the docker command, and start the container again. This is where the serving config helps: it lets you load models at runtime.
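
    As a concrete sketch of that manual route (the model name and host path are placeholders), the container is stopped and then re-run with the new model directory bind-mounted:

      # Stop the running container first (docker stop <container>), then
      # start a new one with the new model directory mapped in. The official
      # tensorflow/serving image serves the model named MODEL_NAME from
      # /models/<MODEL_NAME> inside the container.
      docker run -p 8501:8501 \
        --mount type=bind,source=/tmp/my_second_model,target=/models/my_second_model \
        -e MODEL_NAME=my_second_model \
        -t tensorflow/serving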

    Question 3: The TFX documentation says to update the model config file to add new models, but how can I update it while the server is running?

    Answer: A basic configuration file looks like this (note that the config entries are wrapped in a model_config_list block, which the file format requires):

      model_config_list {
        config {
          name: 'my_first_model'
          base_path: '/tmp/my_first_model/'
          model_platform: 'tensorflow'
        }
        config {
          name: 'my_second_model'
          base_path: '/tmp/my_second_model/'
          model_platform: 'tensorflow'
        }
      }
    

    This file needs to be mapped into the Docker container before it starts, along with the paths where the different models are located. Whenever the file changes, the serving container loads the newly listed models accordingly, and you can also maintain different versions of the same model. For more info please refer to this link: serving config. TensorFlow Serving can re-read this file periodically (when started with the --model_config_file_poll_wait_seconds flag) and, as soon as it detects a change, it loads the new models without the Docker container having to be restarted.
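
    As a sketch, the container might be started like this so that the config file is both mounted and polled at runtime (the models.config filename and all paths are placeholders; note that each base_path in the config must refer to a path as seen inside the container, which is why the host directories are mounted at identical container paths here):

      docker run -p 8500:8500 -p 8501:8501 \
        --mount type=bind,source=/tmp/my_first_model,target=/tmp/my_first_model \
        --mount type=bind,source=/tmp/my_second_model,target=/tmp/my_second_model \
        --mount type=bind,source=/tmp/models.config,target=/tmp/models.config \
        -t tensorflow/serving \
        --model_config_file=/tmp/models.config \
        --model_config_file_poll_wait_seconds=60

    With this in place, editing models.config on the host is picked up at the next poll and the new models are loaded without touching the running container.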