I have a Python Databricks notebook (PySpark) which performs an aggregation based on the inputs provided to the notebook via parameters.
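For context, a rough sketch of how the notebook picks up its parameters (the widget names and the aggregation are just illustrative, not my actual code; `dbutils` and `spark` are the objects Databricks provides in a notebook):

```python
# Declare the parameters the notebook expects (illustrative names).
dbutils.widgets.text("input_path", "")
dbutils.widgets.text("group_by_column", "")

input_path = dbutils.widgets.get("input_path")
group_by = dbutils.widgets.get("group_by_column")

# Aggregate based on the supplied parameters.
df = spark.read.parquet(input_path)
result = df.groupBy(group_by).count()
```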
Thank you.
Yes, it's possible to do that by using the Databricks Jobs REST API. There are two ways of starting a notebook job (see the sketch after this list):

1. Create a job in Databricks that points to the notebook (via the UI or the `jobs/create` endpoint), then trigger it with the `jobs/run-now` endpoint, passing the notebook parameters in the request.
2. Use the `jobs/runs/submit` endpoint to submit a one-time run, supplying the full run specification (notebook path, parameters, cluster configuration) in each call.
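A minimal sketch of both variants using the Jobs REST API 2.1 from Python with `requests` (the workspace URL, token, job ID, notebook path, cluster settings, and parameter names are placeholders you would replace with your own):

```python
import requests

HOST = "https://<your-workspace>.azuredatabricks.net"
TOKEN = "<personal-access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def run_now(job_id: int, params: dict) -> int:
    """Variant 1: trigger an existing job and pass the notebook parameters."""
    resp = requests.post(
        f"{HOST}/api/2.1/jobs/run-now",
        headers=HEADERS,
        json={"job_id": job_id, "notebook_params": params},
    )
    resp.raise_for_status()
    return resp.json()["run_id"]


def runs_submit(params: dict) -> int:
    """Variant 2: submit a one-time run; the caller supplies the full spec,
    including the cluster configuration."""
    resp = requests.post(
        f"{HOST}/api/2.1/jobs/runs/submit",
        headers=HEADERS,
        json={
            "run_name": "aggregation-run",
            "tasks": [
                {
                    "task_key": "aggregate",
                    "notebook_task": {
                        "notebook_path": "/Users/me/aggregation_notebook",
                        "base_parameters": params,
                    },
                    "new_cluster": {
                        "spark_version": "13.3.x-scala2.12",
                        "node_type_id": "Standard_DS3_v2",
                        "num_workers": 1,
                    },
                }
            ],
        },
    )
    resp.raise_for_status()
    return resp.json()["run_id"]
```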
I would personally prefer the first variant, as it hides details such as cluster configuration from the Azure Function: the job specification is maintained on the Databricks side, and the function only passes the parameters.
In both cases, the REST API call returns a run ID, which can then be used to check the status of the job run and to retrieve its output.
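A sketch of the polling side, again with placeholder host/token values. It uses `jobs/runs/get` to wait for a terminal state and `jobs/runs/get-output` to read whatever the notebook returned via `dbutils.notebook.exit(...)`:

```python
import time
import requests

HOST = "https://<your-workspace>.azuredatabricks.net"
TOKEN = "<personal-access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def wait_for_run(run_id: int, poll_seconds: int = 15) -> dict:
    """Poll jobs/runs/get until the run reaches a terminal life-cycle state."""
    while True:
        resp = requests.get(
            f"{HOST}/api/2.1/jobs/runs/get",
            headers=HEADERS,
            params={"run_id": run_id},
        )
        resp.raise_for_status()
        state = resp.json()["state"]
        if state["life_cycle_state"] in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
            return state
        time.sleep(poll_seconds)


def get_notebook_output(run_id: int) -> str:
    """Fetch the value passed to dbutils.notebook.exit() in the notebook.
    Note: for multi-task jobs this expects the run_id of the individual task,
    not the parent run."""
    resp = requests.get(
        f"{HOST}/api/2.1/jobs/runs/get-output",
        headers=HEADERS,
        params={"run_id": run_id},
    )
    resp.raise_for_status()
    return resp.json().get("notebook_output", {}).get("result", "")
```

In an Azure Function you would typically not block on the poll loop; a durable/timer-based pattern that checks the run state periodically works better than a long-running synchronous wait.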