I am trying to create a Key Vault-backed secret scope using the Azure CLI through an Azure release pipeline.
Current setup:
Only the Azure DevOps service connection has Key Vault Contributor access, and only on the Key Vault, which is in a different subscription from the Databricks cluster. We also cannot grant Key Vault Contributor to any other principal (a limitation), and the service connection's object ID has been added to the Databricks admin group. Below are the commands I am using to create the scope:
pip install databricks-cli
ACCESS_TOKEN=$(az account get-access-token --resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d --query accessToken --output tsv)
export DATABRICKS_HOST=https://xxxxx123445.12.azuredatabricks.net
export DATABRICKS_TOKEN=$ACCESS_TOKEN
databricks secrets create-scope --scope 'nexussecret' --scope-backend-type 'AZURE_KEYVAULT' \
  --resource-id '/subscriptions/xxxxxxxxxxbe5f-xxxxx/resourceGroups/rg-tst-we-test-keyvault/providers/Microsoft.KeyVault/vaults/eu-test--1xy' \
  --dns-name 'https://eu-test--1xy.vault.azure.net/'
I am getting the error below:
2024-09-20T12:48:08.6535388Z Error: Authorization failed. Your token may be expired or lack the valid scope
Could you please guide me on what I am doing wrong? Thanks.
Based on your description, I could reproduce the issue with the script below in a local Bash terminal. It even failed on the databricks secrets list-scopes command.
pip install databricks-cli
# unique resource ID for the Azure Databricks service, which is 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d
TenantId="20247162-xxxxxx"
ApplicationId="1d538fc0-xxxxxx"
ClientSecret="HxxxxxxxHC"
az login --service-principal --username $ApplicationId --password $ClientSecret --tenant $TenantId
ACCESS_TOKEN=$(az account get-access-token --resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d --query accessToken --output tsv)
echo $ACCESS_TOKEN
# Alternative: call the OAuth2 token endpoint directly to get an AAD token for Databricks
# ACCESS_TOKEN=$(curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' \
# https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token \
# -d "client_id=$ApplicationId" \
# -d "grant_type=client_credentials" \
# -d "scope=2ff814a6-3304-4ab8-85cb-cd0e6f879c1d%2F.default" \
# -d "client_secret=$ClientSecret" | jq -r '.access_token'
# )
# echo $ACCESS_TOKEN
export DATABRICKS_HOST=https://adb-xxxxxx.0.azuredatabricks.net/
export DATABRICKS_TOKEN=$ACCESS_TOKEN
# export DATABRICKS_CONFIG_FILE=".databrickscfg"
# echo "[DEFAULT]" > .databrickscfg
# echo "host = $DATABRICKS_HOST" >> .databrickscfg
# echo "token = $ACCESS_TOKEN" >> .databrickscfg
databricks secrets list-scopes
# databricks secrets create-scope --scope 'nexussecret1'
MSYS_NO_PATHCONV=1 databricks secrets create-scope --scope 'nexussecret1' --scope-backend-type 'AZURE_KEYVAULT' --resource-id '/subscriptions/0450d2e4-xxxxxx/resourceGroups/rg-azkv/providers/Microsoft.KeyVault/vaults/xxxxxxtestkv' --dns-name 'https://xxxxxxtestkv.vault.azure.net/'
# databricks secrets delete-scope --scope 'nexussecret1'
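When the CLI reports that the token "may be expired or lack the valid scope", it can also help to inspect the token's claims before handing it to the CLI: the aud claim must be the Azure Databricks resource ID (2ff814a6-3304-4ab8-85cb-cd0e6f879c1d). A minimal sketch of decoding the JWT payload — the token below is a dummy built inline so the snippet is self-contained; in the pipeline you would decode the real $ACCESS_TOKEN instead:

```shell
# Build a dummy token so this snippet runs standalone (assumption: in the
# real pipeline, ACCESS_TOKEN comes from `az account get-access-token`).
CLAIMS='{"aud":"2ff814a6-3304-4ab8-85cb-cd0e6f879c1d","exp":9999999999}'
PAYLOAD=$(printf '%s' "$CLAIMS" | base64 | tr -d '=\n' | tr '+/' '-_')
ACCESS_TOKEN="header.$PAYLOAD.signature"

# Extract the payload (second dot-separated part) and undo base64url encoding,
# restoring the '=' padding that JWTs strip off.
P=$(printf '%s' "$ACCESS_TOKEN" | cut -d '.' -f 2 | tr '_-' '/+')
while [ $(( ${#P} % 4 )) -ne 0 ]; do P="$P="; done

# Print the audience claim; it must match the Databricks resource ID,
# otherwise the workspace rejects the token with an authorization error.
AUD=$(printf '%s' "$P" | base64 -d | python3 -c 'import sys,json; print(json.load(sys.stdin)["aud"])')
echo "token audience: $AUD"
```

Running the same decode on the token your pipeline produces quickly tells you whether the failure is an audience/scope problem or a permissions problem on the Databricks side.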
Please double-check under Databricks -> Settings -> Identity and access -> Service principals to see whether the app registration referenced by the Azure Resource Manager service connection of your release pipeline was added with sufficient permissions. If not, add the service principal that exists in your Microsoft Entra ID, using its application ID.
Then rerun the script to see if you can list and create new secret scopes.
Additionally, it is always advisable to make sure the commands work with your expected identity in your local environment before integrating them into pipelines or any other automation tool, to keep your scenario as simple as possible.
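One concrete thing worth verifying locally is that the --resource-id and --dns-name values reach the CLI as single, unbroken strings — in your original command they were wrapped across lines, and pipeline YAML can silently introduce stray whitespace when that happens. A small sketch of such a guard (the subscription GUID below is a dummy placeholder, not your real one):

```shell
# Placeholder values for illustration; substitute your actual resource ID
# and vault DNS name.
RESOURCE_ID='/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tst-we-test-keyvault/providers/Microsoft.KeyVault/vaults/eu-test--1xy'
DNS_NAME='https://eu-test--1xy.vault.azure.net/'

# Fail fast if a value picked up a space or newline from line-wrapped YAML.
check() {
  case "$1" in
    *[[:space:]]*) echo "value contains whitespace: $1"; return 1 ;;
  esac
  return 0
}

check "$RESOURCE_ID" && check "$DNS_NAME" && echo "arguments look clean"
# prints: arguments look clean
```

Running a guard like this just before databricks secrets create-scope separates "the arguments got mangled" from genuine authorization failures.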