azure, powerbi, mlflow, azure-machine-learning-service

PowerBI and MLflow integration (through AzureML)


I'm trying to integrate an ML model, currently deployed as a webservice on AzureML, with PowerBI.

I see that this is possible, but the model requires a schema file to be added when it is deployed as a webservice. Without it, the model can't be viewed in PowerBI.

The problem I have run into is that I use MLflow to log model performance and then deploy a selected model to AzureML as a webservice through MLflow's AzureML integration, mlflow.azureml.deploy(). Unfortunately, this offers no way to define a schema file before the model is deployed, so no model shows up in PowerBI because it lacks the required schema file.
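
For reference, the deployment step currently looks roughly like this (a minimal sketch; the workspace details, run ID and service name are placeholders):

    import mlflow.azureml
    from azureml.core import Workspace
    
    # Placeholder workspace details.
    ws = Workspace.get(name="my-workspace",
                       subscription_id="<subscription-id>",
                       resource_group="<resource-group>")
    
    # Deploy the logged MLflow model straight to an AzureML webservice.
    # There is no argument here for attaching a schema file for PowerBI.
    webservice, azure_model = mlflow.azureml.deploy(
        model_uri="runs:/<run-id>/model",
        workspace=ws,
        service_name="my-model-service",
        model_name="my-model",
        synchronous=True,
    )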

My options seem to be:

  1. Find a workaround, possibly by calling the model's working REST API from a Power Query (see the sketch after this list).
  2. Rewrite the deployment code and handle the webservice deployment steps in Azure instead of MLflow.
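
For option 1, the webservice does respond to plain REST calls, so a Power Query would essentially wrap a request like the one below (sketched here in Python; the scoring URI and payload shape are placeholders for whatever the deployed service expects):

    import json
    import requests
    
    # Placeholder scoring URI of the deployed AzureML webservice.
    scoring_uri = "http://<my-service>.<region>.azurecontainer.io/score"
    
    # The payload shape depends on how the model was deployed; this is just an example.
    payload = json.dumps({"data": [[0.1, 0.2, 0.3]]})
    
    response = requests.post(scoring_uri,
                             data=payload,
                             headers={"Content-Type": "application/json"})
    print(response.json())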

I thought I would ask in case I am missing something, as I can't find a way with my current code to define a schema file in MLflow when deploying with mlflow.azureml.deploy().


Solution

  • Point number 2 is how we solved this issue. Instead of using MLflow to deploy a scoring service on Azure, we wrote a custom scoring script that loads the MLflow model when the container is initialised.

    The scoring code looks something like this:

    import os
    import numpy as np
    from mlflow.pyfunc import load_model
    
    from inference_schema.schema_decorators import input_schema, output_schema
    from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType
    
    # Example input/output used only to generate the schema that PowerBI needs;
    # adjust the shapes and dtypes to match your model.
    input_sample = np.array([[0.1, 0.2, 0.3]])
    output_sample = np.array([0.5])
    
    def init():
        # Load the MLflow pyfunc model from the registered model directory
        # when the scoring container starts.
        global model
        model = load_model(os.path.join(os.environ.get("AZUREML_MODEL_DIR"), "awesome_model"))
    
    @input_schema('data', NumpyParameterType(input_sample))
    @output_schema(NumpyParameterType(output_sample))
    def run(data):
        return model.predict(data)
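
    The webservice itself is then deployed with the AzureML SDK instead of mlflow.azureml.deploy(). A minimal sketch, assuming the v1 azureml-core SDK, with placeholder names, paths and sizing:

    from azureml.core import Environment, Model, Workspace
    from azureml.core.model import InferenceConfig
    from azureml.core.webservice import AciWebservice
    
    ws = Workspace.from_config()
    
    # Register the MLflow model folder under the name expected by init() above.
    model = Model.register(workspace=ws,
                           model_path="path/to/awesome_model",
                           model_name="awesome_model")
    
    # Environment with mlflow, inference-schema and the model's own dependencies.
    env = Environment.from_conda_specification(name="scoring-env",
                                               file_path="conda.yaml")
    
    inference_config = InferenceConfig(entry_script="score.py", environment=env)
    deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
    
    service = Model.deploy(workspace=ws,
                           name="awesome-model-service",
                           models=[model],
                           inference_config=inference_config,
                           deployment_config=deployment_config)
    service.wait_for_deployment(show_output=True)

    Because the entry script uses the inference-schema decorators, the deployed service exposes the schema that PowerBI needs to discover the model.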