
Why is SageMaker trying to upload the model again?


I am facing an issue with my SageMaker model deployment. My model data is already available in S3 and I pass that location to the model, but deployment fails with an access error while uploading to S3 again:

Error:

S3UploadFailedError: Failed to upload /tmp/tmpoy3pcol_/temp-model.tar.gz to xyz_bucket/text_trove/model.tar.gz: An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied

Source:

import sagemaker
from sagemaker.pytorch import PyTorchModel
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

sess = sagemaker.Session(default_bucket="xyz_bucket")
bucket = sess.default_bucket()

# Upload the local model archive to a nested prefix I do have access to
pretrained_model_data = sess.upload_data(
    path="model-tiny.tar.gz", bucket=bucket, key_prefix="abc/pqr/test/summarizer"
)

model = PyTorchModel(
    py_version="py3",
    name="text_trove",
    image_uri="texttrove/tt_sagemaker_py3.8_cuda11.0_torch_2.0:latest",
    entry_point="inference.py",
    source_dir=".",
    role=role,
    model_data=pretrained_model_data,
    sagemaker_session=sess,
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type=instance_type,
    serializer=JSONSerializer(),
    deserializer=JSONDeserializer(),
    sagemaker_session=sess,
    tags=[{"Key": "app_group", "Value": "xyzzy_tag"}],
)

I don't have access to the root level of the bucket, which is why the upload fails. I would really appreciate it if someone could help me understand:

  1. Why is it trying to upload the model again when it's already available in S3?
  2. How do I change the S3 prefix for model.deploy(...) so it doesn't upload to the bucket root, or uploads to a nested path I can access?

Solution

  • Since you are passing an entry_point, the SageMaker SDK downloads the model.tar.gz from S3, re-packages it to include that script, and uploads it back to S3.

    If you already have the model.tar.gz in S3 and the inference script is included in the tar.gz, then you don't need to set entry_point. Instead, just point model_data at the S3 URI of your existing model.tar.gz.
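
    A minimal sketch of that approach, assuming a recent SageMaker Python SDK where entry_point is optional and assuming your inference script is already packaged inside the tarball (the PyTorch serving container typically expects it under code/ inside model.tar.gz). The S3 path, role, sess and instance_type below are placeholders taken from the question:

    from sagemaker.pytorch import PyTorchModel
    from sagemaker.serializers import JSONSerializer
    from sagemaker.deserializers import JSONDeserializer

    # No entry_point/source_dir: the SDK has nothing to re-package, so it
    # should not attempt to upload a new model.tar.gz to the bucket root.
    model = PyTorchModel(
        py_version="py3",
        name="text_trove",
        image_uri="texttrove/tt_sagemaker_py3.8_cuda11.0_torch_2.0:latest",
        role=role,
        model_data="s3://xyz_bucket/abc/pqr/test/summarizer/model-tiny.tar.gz",
        sagemaker_session=sess,
    )

    predictor = model.deploy(
        initial_instance_count=1,
        instance_type=instance_type,
        serializer=JSONSerializer(),
        deserializer=JSONDeserializer(),
        tags=[{"Key": "app_group", "Value": "xyzzy_tag"}],
    )

    If you do still need the SDK to re-package (for example, because the script is not in the tarball yet), PyTorchModel also accepts a code_location parameter; pointing it at an S3 prefix you can write to may keep the re-packaged artifacts out of the bucket root, but verify that behavior against your SDK version.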