
KeyError when importing Hugging Face model into AWS Lambda function


I'm trying to launch an AWS Lambda function that uses a Hugging Face model (BioGPT) via the transformers library. The infrastructure looks like this:

[architecture diagram]

It more or less follows the setup outlined in this post, except that I am using the BioGPT model instead of the models covered there.

Here is my app.py:

"""
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
SPDX-License-Identifier: MIT-0
"""

import os
from pathlib import Path
from aws_cdk import (
    aws_lambda as lambda_,
    aws_efs as efs,
    aws_ec2 as ec2
)
from aws_cdk import App, Stack, Duration, RemovalPolicy, Tags

from constructs import Construct

class ServerlessHuggingFaceStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # EFS needs to be setup in a VPC
        vpc = ec2.Vpc(self, 'Vpc', max_azs=2)

        # creates a file system in EFS to store cache models
        fs = efs.FileSystem(self, 'FileSystem',
                            vpc=vpc,
                            removal_policy=RemovalPolicy.DESTROY)
        access_point = fs.add_access_point(
            'MLAccessPoint',
            create_acl=efs.Acl(
                owner_gid='1001',
                owner_uid='1001',
                permissions='750'
            ),
            path="/export/models",
            posix_user=efs.PosixUser(gid="1001", uid="1001")
        )

        # %%
        # iterates through the Python files in the docker directory
        docker_folder = os.path.dirname(os.path.realpath(__file__)) + "/inference"
        pathlist = Path(docker_folder).rglob('*.py')
        for path in pathlist:
            base = os.path.basename(path)
            filename = os.path.splitext(base)[0]
            # Lambda Function from docker image
            lambda_.DockerImageFunction(
                self, filename,
                code=lambda_.DockerImageCode.from_image_asset(docker_folder,
                                                              cmd=[
                                                                  filename+".handler"]
                                                              ),
                memory_size=8096,
                timeout=Duration.seconds(600),
                vpc=vpc,
                filesystem=lambda_.FileSystem.from_efs_access_point(access_point, '/mnt/hf_models_cache'),
                environment={"TRANSFORMERS_CACHE": "/mnt/hf_models_cache"},
            )

app = App()

stack = ServerlessHuggingFaceStack(app, "BioGptStack")
Tags.of(stack).add("project", "biogpt")

app.synth()
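
For context, each file in the inference/ directory is expected to expose a handler entry point matching the cmd the stack passes to the container (e.g. biogpt.handler). A minimal sketch of such a handler, where the model ID, event shape, and generation settings are my assumptions rather than code from the post, might look like this:

# inference/biogpt.py -- hypothetical handler sketch; the model ID, cache
# path, and event shape are assumptions, not taken from the original post.
import os

# Point the Hugging Face cache at the EFS mount configured by the stack
# (set before importing transformers so the default cache dir picks it up).
os.environ.setdefault("TRANSFORMERS_CACHE", "/mnt/hf_models_cache")

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "microsoft/biogpt"

# Load once per container so warm invocations reuse the model.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)


def handler(event, context):
    prompt = event.get("text", "")
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=50)
    return {"generated_text": tokenizer.decode(output_ids[0], skip_special_tokens=True)}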

And here is my Dockerfile:

ARG FUNCTION_DIR="/function/"

FROM huggingface/transformers-pytorch-cpu as build-image


# Include global arg in this stage of the build
ARG FUNCTION_DIR

# Install aws-lambda-cpp build dependencies
RUN apt-get update && \
  apt-get install -y \
  g++ \
  make \
  cmake \
  unzip \
  libcurl4-openssl-dev


# Create function directory
RUN mkdir -p ${FUNCTION_DIR}

# Copy handler function
COPY *.py ${FUNCTION_DIR}

# Install the function's dependencies
RUN pip uninstall --yes jupyter
RUN pip install --target ${FUNCTION_DIR} awslambdaric
RUN pip install --target ${FUNCTION_DIR} sentencepiece protobuf

FROM huggingface/transformers-pytorch-cpu

# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}

# Copy in the built dependencies
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}

ENTRYPOINT [ "python3", "-m", "awslambdaric" ]

# This will get replaced by the proper handler by the CDK script
CMD [ "sentiment.handler" ]

Here is the error message I see when I test my Lambda function:

  line 672, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/usr/local/lib/python3.6/dist-packages/transformers/models/auto/configuration_auto.py", line 387, in __getitem__
    raise KeyError(key)
KeyError: 'biogpt'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code

Solution

  • My best guess is that the issue is the older Docker base image (huggingface/transformers-pytorch-cpu). If you look on Docker Hub, you'll see this image hasn't been updated in over a year, so the transformers release it ships with predates BioGPT support (hence the KeyError: 'biogpt'). To work around that, I'm going to save the model to my local machine...

    model.save_pretrained("path/to/model")
    

    ...then push it to EFS so my Lambda can load it from the mounted directory (a rough sketch of both steps follows below).

    Hope that works...
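
    For reference, here is a rough sketch of both steps; the model ID, local directory, and EFS path are my assumptions for illustration:

    # On a local machine: download BioGPT with a recent transformers release
    # and save both the model and the tokenizer to a directory to copy to EFS.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("microsoft/biogpt")
    tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
    model.save_pretrained("biogpt-local")
    tokenizer.save_pretrained("biogpt-local")

    # In the Lambda handler: load from the EFS mount instead of the Hub.
    # model = AutoModelForCausalLM.from_pretrained("/mnt/hf_models_cache/biogpt-local")
    # tokenizer = AutoTokenizer.from_pretrained("/mnt/hf_models_cache/biogpt-local")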