Tags: python, azure-machine-learning-service

How to pass arguments to scoring file when deploying a Model in AzureML


I am deploying a trained model to an ACI endpoint on Azure Machine Learning, using the Python SDK. I have created my score.py file, but I would like that file to be called with an argument (just like with a training script) that I can interpret using argparse. However, I can't find a way to pass arguments. Below is the code I use to create the environment and the InferenceConfig, which obviously does not work. Should I fall back on the extra Docker file steps instead?

from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig

env = Environment('my_hosted_environment')
env.python.conda_dependencies = CondaDependencies.create(
    conda_packages=['scikit-learn'],
    pip_packages=['azureml-defaults'])
scoring_script = 'score.py --model_name ' + model_name
inference_config = InferenceConfig(entry_script=scoring_script, environment=env)

Adding the score.py for reference on how I'd love to use the arguments in that script:

import argparse
import json

import joblib
import numpy as np
from azureml.core.model import Model

def init():
    global model

    parser = argparse.ArgumentParser(description="Load sklearn model")
    parser.add_argument('--model_name', dest="model_name", required=True)
    args, _ = parser.parse_known_args()

    model_path = Model.get_model_path(model_name=args.model_name)
    model = joblib.load(model_path)

def run(raw_data):
    try:
        data = json.loads(raw_data)['data']
        data = np.array(data)
        result = model.predict(data)
        return result.tolist()

    except Exception as e:
        result = str(e)
        return result

Interested to hear your thoughts


Solution

  • How to deploy using environments is shown in model-register-and-deploy.ipynb. The InferenceConfig class accepts source_directory and entry_script parameters, where source_directory is the path to the folder that contains all the files (score.py and any other additional files) needed to create the image.

    The multi-model-register-and-deploy.ipynb notebook has code snippets showing how to create an InferenceConfig with source_directory and entry_script.

    from azureml.core.webservice import Webservice
    from azureml.core.model import InferenceConfig, Model
    from azureml.core.environment import Environment
    
    myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
    inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
    
    # ws, model and aciconfig come from the earlier workspace,
    # model-registration and deployment-configuration steps
    service = Model.deploy(workspace=ws,
                           name='sklearn-mnist-svc',
                           models=[model], 
                           inference_config=inference_config,
                           deployment_config=aciconfig)
    
    service.wait_for_deployment(show_output=True)
    
    print(service.scoring_uri)
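Since entry_script must be a plain file path rather than a command line, one workaround is to pass the value through an environment variable on the Environment object and read it back in score.py's init(). This is only a sketch, not part of the notebook above; the MODEL_NAME variable and the get_model_name helper are names I made up for illustration:

```python
import os

# Deployment side (sketch): attach the value to the hosted environment
# before building the InferenceConfig. `env` and `model_name` are the
# objects from the question.
#   env.environment_variables = {"MODEL_NAME": model_name}

def get_model_name(default="my_model"):
    """Read the model name inside score.py's init(), with a fallback default."""
    return os.environ.get("MODEL_NAME", default)

# Inside init() you would then resolve the model path with:
#   model_path = Model.get_model_path(model_name=get_model_name())
```

This keeps score.py free of argparse entirely, which fits how AzureML invokes the entry script (it imports the file and calls init()/run(), so there is no command line to parse).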