amazon-sagemaker, amazon-sagemaker-studio

How to install additional packages in a SageMaker pipeline


I want to add dependency packages to my SageMaker pipeline that will be used in the preprocessing step.

I have tried adding them to required_packages in the setup.py file, but it's not working. I think the setup.py file is not used at all.

required_packages = ["sagemaker==2.93.0", "matplotlib"]

Preprocessing steps:

sklearn_processor = SKLearnProcessor(
    framework_version="0.23-1",
    instance_type=processing_instance_type,
    instance_count=processing_instance_count,
    base_job_name=f"{base_job_prefix}/job-name",
    sagemaker_session=pipeline_session,
    role=role,
)
step_args = sklearn_processor.run(
    outputs=[
        ProcessingOutput(output_name="train", source="/opt/ml/processing/train"),
        ProcessingOutput(output_name="validation", source="/opt/ml/processing/validation"),
        ProcessingOutput(output_name="test", source="/opt/ml/processing/test"),
    ],
    code=os.path.join(BASE_DIR, "preprocess.py"),
    arguments=["--input-data", input_data],
)
step_process = ProcessingStep(
    name="PreprocessSidData",
    step_args=step_args,
)

Pipeline definition:

pipeline = Pipeline(
    name=pipeline_name,
    parameters=[
        processing_instance_type,
        processing_instance_count,
        training_instance_type,
        model_approval_status,
        input_data,
    ],
    steps=[step_process],
    sagemaker_session=pipeline_session,
)

Solution

  • For each job within the pipeline you should have a separate requirements file, so that each step installs only what it needs and you keep full control over it.

    To do this, you need to use the source_dir parameter:

    source_dir (str or PipelineVariable) – Path (absolute, relative or an S3 URI) to a directory with any other training source code dependencies aside from the entry point file (default: None). If source_dir is an S3 URI, it must point to a tar.gz file. Structure within this directory are preserved when training on Amazon SageMaker.

    Look at the Processing documentation in general (you have to use FrameworkProcessor instead of SKLearnProcessor).

    The specified folder must contain the script (in your case preprocess.py), any other files/modules that may be needed, and the requirements.txt file.

    The structure of the folder will then be:

    BASE_DIR/
    |- requirements.txt
    |- preprocess.py
    

    It is a standard requirements file, nothing special. It is picked up and installed automatically when the instance starts, without any extra instruction needed.
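
    For reference, a minimal requirements.txt for this case (the packages are taken from the question; the pinned version is just an example) could be:

    sagemaker==2.93.0
    matplotlib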


    So, your code becomes:

    from sagemaker.processing import FrameworkProcessor
    from sagemaker.sklearn import SKLearn
    from sagemaker.workflow.steps import ProcessingStep
    from sagemaker.processing import ProcessingInput, ProcessingOutput


    # FrameworkProcessor accepts a source_dir, unlike SKLearnProcessor
    sklearn_processor = FrameworkProcessor(
        estimator_cls=SKLearn,
        framework_version="0.23-1",
        instance_type=processing_instance_type,
        instance_count=processing_instance_count,
        base_job_name=f"{base_job_prefix}/job-name",
        sagemaker_session=pipeline_session,
        role=role,
    )

    step_args = sklearn_processor.run(
        outputs=[
            ProcessingOutput(output_name="train", source="/opt/ml/processing/train"),
            ProcessingOutput(output_name="validation", source="/opt/ml/processing/validation"),
            ProcessingOutput(output_name="test", source="/opt/ml/processing/test"),
        ],
        code="preprocess.py",   # entry point, now relative to source_dir
        source_dir=BASE_DIR,    # folder containing preprocess.py and requirements.txt
        arguments=["--input-data", input_data],
    )

    step_process = ProcessingStep(
        name="PreprocessSidData",
        step_args=step_args,
    )
    

    Note that I changed both the code parameter and the source_dir. It's good practice to keep the folders for the various steps separate, so that each step has its own requirements.txt and there are no overlaps between them; see the sketch below.
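
    As an illustration, a per-step layout could then look like this (the folder and script names below are only an example, not taken from the question):

    pipelines/
    |- preprocess/
    |  |- preprocess.py
    |  |- requirements.txt
    |- train/
    |  |- train.py
    |  |- requirements.txt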