I would like to develop a TensorFlow Probability regression model locally and deploy it as a SageMaker endpoint. I have deployed standard XGBoost models this way previously and understand that a TensorFlow model can be deployed like so:
from sagemaker.tensorflow.model import TensorFlowModel

tensorflow_model = TensorFlowModel(
    name=tensorflow_model_name,
    source_dir='code',
    entry_point='inference.py',
    model_data=<TENSORFLOW_MODEL_S3_URI>,
    role=role,
    framework_version='<TENSORFLOW_VERSION>')

tensorflow_model.deploy(endpoint_name=<ENDPOINT_NAME>,
                        initial_instance_count=1,
                        instance_type='ml.m5.4xlarge',
                        wait=False)
However, I do not think this will cover, for example, the dependency:
import tensorflow_probability as tfp
Do I need to use script mode or a custom Docker image instead? Any pointers would be very much appreciated. Thanks.
The first option is to place a "requirements.txt" listing "tensorflow_probability" inside your source_dir ('code'); the container installs it when the endpoint starts. Another way would be to pass both your code and that requirements file via dependencies if you are not using source_dir:
Model(entry_point='inference.py',
      dependencies=['code', 'requirements.txt'])
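For reference, here is a minimal sketch of how the source_dir route fits together, assuming the TensorFlow Serving container's input_handler/output_handler convention for inference.py. The handler bodies, the request format ({"instances": [...]}) and the pinned tensorflow_probability version are illustrative only and should be adapted to your model:

# Local layout assumed by source_dir='code':
#   code/inference.py        (this file)
#   code/requirements.txt    containing e.g.:  tensorflow_probability==0.14.1
# The container pip-installs requirements.txt when the endpoint starts.
import json

import tensorflow_probability as tfp  # importable once requirements.txt is installed


def input_handler(data, context):
    # Deserialize the request and re-serialize it into the JSON TF Serving expects.
    # Assumes the client sends {"instances": [...]} as application/json.
    if context.request_content_type == 'application/json':
        payload = json.loads(data.read().decode('utf-8'))
        return json.dumps({'instances': payload['instances']})
    raise ValueError('Unsupported content type: {}'.format(context.request_content_type))


def output_handler(response, context):
    # Post-process TF Serving's response; tfp could be used here, for example to
    # build a tfp.distributions object from the predicted distribution parameters.
    if response.status_code != 200:
        raise ValueError(response.content.decode('utf-8'))
    return response.content, context.accept_header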