According to GCP AI Platform's documentation here, Custom Prediction Routine deployment should allow you to include PyPI dependencies. I declared my jsonschema dependency like this in my setup.py script:
from setuptools import setup
from setuptools import find_packages

REQUIRED_PACKAGES = ['jsonschema']

setup(
    name='custom_code',
    version='1.0.2',
    scripts=['predictor.py', 'preprocess.py'],
    install_requires=REQUIRED_PACKAGES,
    packages=find_packages(),
    include_package_data=True
)
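For context, this is roughly how a Custom Prediction Routine might use jsonschema to validate incoming request instances. The schema and function names below are hypothetical illustrations, not my actual predictor.py code:

```python
import jsonschema

# Hypothetical schema for a single prediction instance; a real
# predictor would define its own.
INSTANCE_SCHEMA = {
    "type": "object",
    "properties": {
        "features": {"type": "array", "items": {"type": "number"}},
    },
    "required": ["features"],
}

def validate_instance(instance):
    """Raise jsonschema.ValidationError if the instance is malformed."""
    jsonschema.validate(instance=instance, schema=INSTANCE_SCHEMA)

validate_instance({"features": [1.0, 2.0]})  # well-formed: no exception
```

Since jsonschema is imported at the top of the module, the package has to be installed in the serving environment, which is exactly what install_requires is supposed to arrange.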
but got this error message when deploying the model version:
ERROR: (gcloud.beta.ai-platform.versions.create) Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: 'str' object has no attribute 'decode' (Error code: 0)"
The same error persisted when pinning a version with REQUIRED_PACKAGES = ['jsonschema==3.2.0']. I then tried a lower version:
from setuptools import setup
from setuptools import find_packages

REQUIRED_PACKAGES = ['jsonschema==3.0.0']

setup(
    name='custom_code',
    version='1.0.2',
    scripts=['predictor.py', 'preprocess.py'],
    install_requires=REQUIRED_PACKAGES,
    packages=find_packages(),
    include_package_data=True
)
but now got this error:
ERROR: (gcloud.beta.ai-platform.versions.create) Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: problem in predictor - DistributionNotFound: The 'jsonschema' distribution was not found and is required by the application (Error code: 0)"
What is going wrong here?
It turns out the initial error, Bad model detected with error: "Failed to load model: Unexpected error when loading the model: 'str' object has no attribute 'decode' (Error code: 0)",
was actually caused by a model format issue, not by the dependency. This appears to be a known issue with TensorFlow Keras models saved in HDF5 format (although my TF version is 1.15, the version quoted in the issue report was 2.1.0). Once I exported the model in the TensorFlow SavedModel format, the error immediately went away, and I was also able to include the jsonschema dependency with the setup.py file as is.
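For reference, here is a minimal sketch of the format change. The model below is just a placeholder standing in for the real one, since the fix is about the serialization format rather than the architecture:

```python
import tensorflow as tf

# Placeholder model; any tf.keras model applies.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Saving with the Keras HDF5 format (e.g. model.save('model.h5')) is what
# led to the "'str' object has no attribute 'decode'" error at load time.
# Exporting the TensorFlow SavedModel format instead avoids it:
tf.saved_model.save(model, 'export_dir')  # writes export_dir/saved_model.pb
```

The predictor then loads the model from the SavedModel directory instead of an .h5 file, and deployment succeeds with the jsonschema dependency declared in setup.py unchanged.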