How to configure Artifactory, specifically its associated XML, using an Azure pipeline?

I have a question for the Azure DevOps wizards.

I want to deploy a virtual machine using a pipeline. This I have done. I am using a .yml file that calls a Bicep template file to provision the Linux virtual machine and configure its relevant extensions, based on the Microsoft Bicep templates.

One of these is a custom script extension that calls a local .ps1 script in my file structure (not in a storage account), which I am using to install Artifactory on the VM. This I have also achieved. But... I am stuck on trying to get the XML configuration files onto this Artifactory VM.

I want to configure the binarystore.xml at $ARTIFACTORY_INSTALL_DIR/var/etc/artifactory to utilise the Azure Storage account configuration.

You can find the template here: https://jfrog.com/help/r/jfrog-installation-setup-documentation/azure-blob-storage-v2-binary-provider

This isn't important but lends some context to my problem.

I want to "inject" variables from a local parameter.json file in my codebase into this binarystore.xml file, in order to configure it with my Azure storage account details. But there is some added complexity: I do not want to put my access key in these codebase parameters - ideally I should inject this into the XML via secrets in the pipeline. Once that is done, I need to ensure that this XML ends up in the correct location on the VM, so that I can run the usual sudo docker start artifactory and have it pick up the configuration.

Essentially, I am quite new to this and I'm really unsure how to piece this all together. I have had a quick look at transformation tokens, but I'm not sure that is quite what I need. I can't find examples of a similar end-to-end process, so it's very possible I am barking up the wrong tree and a different approach is more favourable.

Thank you in advance for your time and advice on this matter. Really appreciate any and all help! Cheers


  • Suppose your parameter.json looks like below.

        {
            "storageAccount": {
                "username": "xxxx",
                "password": "xxxx"
            }
        }
    And set up the binarystore.xml with matching placeholder tokens, such as {storageAccount_username} and {storageAccount_password}, in place of the real values.
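For illustration, a minimal binarystore.xml sketch with such placeholder tokens could look like the following. The element and template names follow JFrog's azure-blob-storage-v2 provider documentation linked above, but treat them as assumptions and verify them against that page:

```xml
<!-- Sketch only: check element names against the JFrog docs linked above -->
<config version="2">
    <chain template="azure-blob-storage-v2-direct"/>
    <provider id="azure-blob-storage-v2" type="azure-blob-storage-v2">
        <accountName>{storageAccount_username}</accountName>
        <accountKey>{storageAccount_password}</accountKey>
        <endpoint>https://{storageAccount_username}.blob.core.windows.net/</endpoint>
        <containerName>artifactory</containerName>
    </provider>
</config>
```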


    Ensure the two files have been checked out together with the source files to the working directory on the agent machine.

    You can try as the steps below in your pipeline job:

    1. Add a script task (such as a Bash or PowerShell task) to do the following things.

      • Use a command such as jq to read and parse the parameter.json file and extract the Azure storage account details. For example:

        username=$(jq -r '.storageAccount.username' path/to/parameter.json)
        password=$(jq -r '.storageAccount.password' path/to/parameter.json)
      • Use the 'task.setvariable' logging command to set each of the Azure storage account details as a secret variable. For example:

        echo "##vso[task.setvariable variable=SA_username;issecret=true]$username"
        echo "##vso[task.setvariable variable=SA_password;issecret=true]$password"
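Put together, step 1 could be a single Bash task like the sketch below (the file path and variable names are illustrative, and jq must be available on the agent):

```yaml
- bash: |
    username=$(jq -r '.storageAccount.username' path/to/parameter.json)
    password=$(jq -r '.storageAccount.password' path/to/parameter.json)
    echo "##vso[task.setvariable variable=SA_username;issecret=true]$username"
    echo "##vso[task.setvariable variable=SA_password;issecret=true]$password"
  displayName: 'Read storage account details'
```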
    2. Then add another script task to do the following things.

      • Map the secret variables to environment variables via the task's 'env' section, since secret variables are not automatically exposed to scripts.
      • Use the 'sed' command to replace '{storageAccount_username}' and '{storageAccount_password}' in the binarystore.xml file with the values of those environment variables. Note the double quotes, which are needed so the shell expands the variables:

        - bash: |
            sed -i "s/{storageAccount_username}/$SA_USERNAME/g" path/to/binarystore.xml
            sed -i "s/{storageAccount_password}/$SA_PASSWORD/g" path/to/binarystore.xml
          displayName: 'Pass Values to XML'
          env:
            SA_USERNAME: $(SA_username)
            SA_PASSWORD: $(SA_password)
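You can sanity-check the sed substitution locally with a throwaway file before wiring it into the pipeline. The XML element names below are placeholders mirroring the JFrog template; the point is the quoting behaviour of sed:

```shell
# Local sanity check for the token replacement (run anywhere, not in the
# pipeline). Writes a file containing the placeholder tokens, runs the same
# sed commands, and shows the result.
cat > /tmp/binarystore.xml <<'EOF'
<config version="2">
    <provider id="azure-blob-storage-v2" type="azure-blob-storage-v2">
        <accountName>{storageAccount_username}</accountName>
        <accountKey>{storageAccount_password}</accountKey>
    </provider>
</config>
EOF

SA_USERNAME="myaccount"
SA_PASSWORD="mykey123"

# Double quotes are required so the shell expands $SA_USERNAME/$SA_PASSWORD;
# with single quotes the literal string '$SA_USERNAME' would be written instead.
sed -i "s/{storageAccount_username}/$SA_USERNAME/g" /tmp/binarystore.xml
sed -i "s/{storageAccount_password}/$SA_PASSWORD/g" /tmp/binarystore.xml

grep -E 'accountName|accountKey' /tmp/binarystore.xml
```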
    3. Once the Azure VM has been successfully deployed, copy the updated binarystore.xml file from the local (agent machine) into the specified directory on the VM. Note that the Azure file copy task targets blob storage and Windows VMs, so for a Linux VM the Copy Files Over SSH task (or scp from a script task) is usually the better fit.
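Since the VM in the question is Linux, a sketch of step 3 using the Copy Files Over SSH task might look like this. The service connection name and source path are assumptions, and the target uses Artifactory's default install directory in place of $ARTIFACTORY_INSTALL_DIR:

```yaml
- task: CopyFilesOverSSH@0
  displayName: 'Copy binarystore.xml to the Artifactory VM'
  inputs:
    sshEndpoint: 'artifactory-vm-ssh'   # SSH service connection (assumed name)
    sourceFolder: 'path/to'
    contents: 'binarystore.xml'
    targetFolder: '/opt/jfrog/artifactory/var/etc/artifactory'
```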